CN111815550A - An infrared and visible light image fusion method based on gray level co-occurrence matrix - Google Patents

An infrared and visible light image fusion method based on gray level co-occurrence matrix

Info

Publication number
CN111815550A
CN111815550A (application CN202010678896.4A)
Authority
CN
China
Prior art keywords
image
infrared
occurrence matrix
visible light
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010678896.4A
Other languages
Chinese (zh)
Other versions
CN111815550B (en)
Inventor
谭惜姿
郭立强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaiyin Normal University
Original Assignee
Huaiyin Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaiyin Normal University filed Critical Huaiyin Normal University
Priority to CN202010678896.4A priority Critical patent/CN111815550B/en
Publication of CN111815550A publication Critical patent/CN111815550A/en
Application granted granted Critical
Publication of CN111815550B publication Critical patent/CN111815550B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10048: Infrared image
    • G06T 2207/10052: Images from lightfield camera
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an infrared and visible light image fusion method based on the gray level co-occurrence matrix. First, gray level co-occurrence matrix analysis is performed on the infrared source image to obtain an infrared target saliency map. Second, the visible light and infrared source images are decomposed with the non-subsampled contourlet transform (NSCT); the resulting low-frequency subband images are fused with a contrast-preserving rule, and the high-frequency subband images are fused with an improved difference-of-Gaussians method. The target saliency map is then mapped onto the fused low-frequency subband image. Finally, the inverse NSCT is applied to obtain the final fused image. The invention exploits the texture-analysis properties of the gray level co-occurrence matrix to detect infrared target saliency, so it can effectively extract infrared targets while retaining rich detail information and improving the quality of the fused image. On objective evaluation metrics the proposed method outperforms existing classical image fusion methods such as the wavelet transform and pyramid transforms, and it is highly robust.

Description

An infrared and visible light image fusion method based on the gray level co-occurrence matrix

Technical Field

The invention belongs to the technical field of multi-source image fusion, and in particular relates to an infrared and visible light image fusion method based on the gray level co-occurrence matrix.

Background

Infrared and visible light image fusion is an important branch of image fusion research. An infrared image is a radiation image formed by a sensor from the infrared radiation emitted by the target and the background; it can reveal hidden or camouflaged targets. A visible light image records how objects reflect visible light and contains abundant detail and texture information, matching the characteristics of human vision.

The goal of infrared and visible light image fusion is to obtain a single complete image that contains rich detail information and accurately reflects the infrared target. The technique is therefore widely used in night-vision imaging equipment to improve the nighttime capability of people and machines; because the fused image is accurate, clear, and complete, it is also applied in military reconnaissance, biometrics, medical imaging, remote sensing, and related fields.

With the continuous development of computer and image processing technology, the most widely used fusion approach is still pixel-level fusion, which falls into two categories: spatial-domain fusion and transform-domain fusion. A representative algorithm of the former is principal component analysis; representative algorithms of the latter include pyramid transforms, wavelet transforms, and various multi-scale decomposition algorithms, such as methods based on the non-subsampled contourlet transform (NSCT). Other methods include compressed sensing (CS) and sparse representation (SR).

Among these methods, the NSCT is shift-invariant and effectively suppresses the pseudo-Gibbs phenomenon, making it a widely used image analysis tool. In practice, however, NSCT-based fusion tends either to emphasize the texture information of the fused image while neglecting the infrared target, or to emphasize the infrared target while losing texture detail; the two kinds of information are not preserved simultaneously.

Summary of the Invention

The purpose of the present invention is to propose an infrared and visible light image fusion method based on gray level co-occurrence matrix analysis and the non-subsampled contourlet transform, so that the fused image not only retains detail information but also highlights the infrared target, improving fusion quality.

To realize the above fusion method, the present invention fuses a registered visible light image I1 with an infrared image I2; both are grayscale images of Ny × Nx pixels.

The specific fusion steps are as follows:

Step S1: perform gray level co-occurrence matrix analysis on the infrared image I2, extract the infrared target, and obtain a target saliency map;

Step S2: perform NSCT decomposition on the visible light image I1 and the infrared image I2, obtaining for each one low-frequency subband image and a series of high-frequency subband images;

Step S3: fuse all low-frequency subband images with a contrast-preserving rule to obtain the fused low-frequency image, and fuse all high-frequency subbands with an improved difference-of-Gaussians method to obtain the fused high-frequency subband images;

Step S4: map the target saliency map onto the fused low-frequency subband image;

Step S5: apply the inverse NSCT to the fused low-frequency and high-frequency subbands to obtain the fused image.

Further, the infrared image target in step S1 is extracted as follows:

(1) Perform preliminary target extraction: take the absolute value of the difference between each pixel value of the source infrared image and its gray mean as the preliminary target extraction image SalPre;

(2) Compute the gray level co-occurrence matrix coMat of SalPre; this matrix is symmetric. If a and b are both gray values of the SalPre image, then (a, b) is a gray value pair. Each element of the co-occurrence matrix counts, for every pixel with value a, the number of pixels with value b within its neighborhood of size w; the present invention takes w = 3;

(3) Process the gray level co-occurrence matrix coMat to obtain the modified co-occurrence matrix, in three steps: first normalize coMat; then apply a logarithmic function; finally subtract the mean from every element greater than the mean and set every element less than or equal to the mean to zero, yielding the modified co-occurrence matrix Sal(a, b);

(4) Map the modified co-occurrence matrix into the preliminary target extraction image according to the following formulas:

(formula images: the definitions of U(a, b) and SalMap(a, b))

where U(a, b) is the mean of the (a, b) pixel pairs within the w × w neighborhood and SalMap(a, b) is a saliency detection map of the same size as the source image; the map is then normalized;

(5) Combine SalMap with SalPre to obtain a target extraction image in which the infrared target is more prominent and the background is flatter, using the formula:

SalFinal(a, b) = SalMap(a, b) .* SalPre(a, b)

where .* denotes element-wise multiplication.

Further, the NSCT decomposition rule in step S2 is as follows: the number of directions at each decomposition level is 8, and the number of decomposition scales adapts to the image size according to:

l = [log2(min(Ny, Nx)) - 7]

where l is the number of decomposition scales and [·] denotes rounding up.

Further, the fusion rule for the low-frequency subband coefficients in step S3 is set as follows: subtract from the coefficients of the visible light and infrared low-frequency subbands their respective means, obtaining coefficient matrices w1 and w2; the fusion weight matrix of the infrared image is then w = (w2 - w1) × 0.5 + 0.5 and that of the visible light image is 1 - w; finally, multiply the weights with the low-frequency subbands to obtain the fused low-frequency subband coefficients.

Further, the fusion rule for the high-frequency subband coefficients in step S3 is set as follows: for each level of high-frequency subband coefficients, subtract from the visible light and infrared high-frequency subband coefficients their respective means, obtaining preliminary fusion coefficients a1 and a2; then apply Gaussian filtering to the original high-frequency subband coefficients with a filter template of size 11 × 11 and standard deviation 5, obtaining filtered coefficients b1 and b2. The fusion weight of the visible light image is s1 = b1 - a1, and likewise the fusion weight of the infrared image is s2 = b2 - a2. The fused high-frequency coefficients are selected according to the magnitudes of these weights.

Further, in step S4 the target saliency map is mapped onto the fused low-frequency subband image by addition.

Compared with the prior art, the present invention has the following beneficial effects:

First, before fusing the two source images, the invention uses the texture-analysis properties of the gray level co-occurrence matrix to detect the saliency of the infrared target and maps the result into the fused image; this step makes the infrared target from the source image more prominent in the fused image and preserves it extremely well.

Second, the invention adopts a contrast-preserving low-frequency fusion rule, so that the fused image better matches the visual characteristics of the human eye.

Third, for the high-frequency subband images that carry a large amount of detail, the invention uses an improved difference-of-Gaussians algorithm, which preserves the detail of the source images more completely and effectively reduces the halo around infrared targets.

Finally, experiments show that on objective evaluation metrics the proposed image fusion method outperforms popular existing fusion methods such as the wavelet transform and pyramid transforms, and that the fused image better meets the requirements of human vision.

Brief Description of the Drawings

Fig. 1 is the fusion flowchart of the present invention;

Fig. 2 is the flowchart of infrared target extraction;

Fig. 3 is the structural diagram of the NSCT;

Fig. 4 is the structure of the NSP after 3-scale decomposition;

Fig. 5 is the frequency-domain partition produced by the NSDFB at 3 scales;

Fig. 6 shows the "Camp" images of Embodiment 1: Fig. 6(a) is the visible light image and Fig. 6(b) is the infrared image;

Fig. 7 shows the "Trees" images of Embodiment 2: Fig. 7(a) is the visible light image and Fig. 7(b) is the infrared image.

Detailed Description

To help those of ordinary skill in the art understand and implement the present invention, it is described in further detail below with reference to the drawings and embodiments. It should be understood that the examples described here serve only to illustrate and explain the invention and do not limit it.

The flowchart of the infrared and visible light image fusion method based on the gray level co-occurrence matrix provided by the present invention is shown in Fig. 1. The method first extracts the infrared target from the source infrared image to obtain a target saliency map. It then applies the non-subsampled contourlet transform (NSCT) to the source infrared and visible light images, obtaining for each a low-frequency subband image and a series of high-frequency subband images. The low-frequency subband images are fused with a contrast-preserving rule and the high-frequency subband coefficients with an improved difference-of-Gaussians method, yielding a fused low-frequency subband image and a series of fused high-frequency subband images. Next, the target saliency map is mapped onto the fused low-frequency subband image. Finally, the fused image is obtained by the inverse NSCT. The specific steps are as follows:

Step S1. Extract the infrared target from the source infrared image to obtain the target saliency map, as shown in Fig. 2. The specific steps are as follows:

(1) Preliminary target extraction. Let I2 be the infrared image and mean(I2) its gray mean; the preliminary target extraction image is computed as:

SalPre = |I2 - mean(I2)|

where SalPre is the preliminary target extraction image, of size Ny × Nx;

After this preliminary extraction, the contrast of the infrared target is enhanced, but the background still contains many objects that interfere with identifying the infrared target, so the target must be extracted further.
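For illustration, a minimal numpy sketch of this preliminary step follows; the rescaling back to the 0..255 gray range is an assumption added here, so that SalPre can be re-quantized to the integer gray levels used by the co-occurrence analysis below:

```python
import numpy as np

def preliminary_extraction(ir: np.ndarray) -> np.ndarray:
    """SalPre = |I2 - mean(I2)|, rescaled to the 0..255 gray range."""
    ir = np.asarray(ir, dtype=np.float64)
    sal_pre = np.abs(ir - ir.mean())
    # Rescaling (an added step, not stated in the text) lets SalPre be
    # re-quantized to integer gray levels {0, ..., Q-1} for the GLCM step.
    return np.round(255.0 * sal_pre / (sal_pre.max() + 1e-12))
```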

(2) Let the gray value range of I2 be {0, 1, 2, ..., Q-1}, and compute the gray level co-occurrence matrix of the SalPre image:

coMat = F(a, b)

where coMat is a symmetric matrix of size Q × Q, a and b are gray values of the SalPre image, and (a, b) is a gray value pair. In the SalPre image, for each pixel with value a, the number of pixels with value b within its neighborhood of size w (w = 3 in this embodiment) is counted, and the result is accumulated in coMat.
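A direct, unoptimized sketch of this neighborhood-count construction of coMat; excluding the center pixel from its own neighborhood is an assumption, and the input is assumed already quantized to integer gray levels in {0, ..., Q-1}:

```python
import numpy as np

def cooccurrence_matrix(img: np.ndarray, Q: int = 256, w: int = 3) -> np.ndarray:
    """coMat[a, b]: over every pixel with value a, the count of pixels
    with value b inside its w x w neighborhood."""
    img = np.asarray(img, dtype=np.intp)
    rows, cols = img.shape
    r = w // 2
    coMat = np.zeros((Q, Q), dtype=np.float64)
    for y in range(rows):
        for x in range(cols):
            a = img[y, x]
            for yy in range(max(0, y - r), min(rows, y + r + 1)):
                for xx in range(max(0, x - r), min(cols, x + r + 1)):
                    if yy == y and xx == x:
                        continue  # the center pixel itself is not counted
                    coMat[a, img[yy, xx]] += 1.0
    return coMat
```

Because every ordered neighbor pair (a, b) is also counted as (b, a) from the other pixel, the resulting matrix is symmetric, as the text states.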

(3) Process the gray level co-occurrence matrix coMat to obtain the modified co-occurrence matrix, in the following steps:

First, normalize the coMat matrix:

P(a, b) = coMat(a, b) / Σ_{a', b'} coMat(a', b')

where P(a, b) is the probability that the gray value pair (a, b) occurs in coMat;

The gray level co-occurrence matrix (GLCM) describes the joint probability distribution of two gray values occurring at a distance d from each other in an image. For an image of size M × N, it is computed as follows:

Let the pixel at (x, y) have gray value g1 and the pixel at (x+i, y+j) have gray value g2. Moving (x, y) over the whole image produces different gray value pairs (g1, g2); the number of occurrences of each pair is counted. If the image has l gray levels, all pairs (g1, g2) fit into an l × l matrix, which is finally normalized by the total number of pairs into the occurrence probability P(g1, g2). Such a matrix is the gray level co-occurrence matrix.

For a 3 × 3 scanning window, when i = 1, j = 0 the pixel pair is horizontal (a 0° scan); when i = 1, j = 1 the pair lies on the upper-right diagonal (a 45° scan); and so on, giving the co-occurrence matrix for a specific direction. The method of the present invention computes 8 directions (0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°). For images whose gray values vary slowly, such as backgrounds, the values on the diagonal of the co-occurrence matrix are large; for images whose gray values vary sharply, such as details and abrupt changes, the diagonal values are small and the off-diagonal values are large.

Next, a smaller value in P(a, b) indicates a sharper local change in gray value, while a larger value indicates a slower change; moreover, the values in P(a, b) are small, which is inconvenient for analysis. The following logarithmic transform is therefore applied:

Lp(a, b) = 2 × [-ln(P(a, b))]²

In this way, the values in Lp(a, b) are proportional to the gray-value variation and are numerically larger.

Finally, to emphasize the salient region, i.e. to enlarge the difference between the salient region and the background, the following rule is applied:

Sal(a, b) = Lp(a, b) - ENT,  if Lp(a, b) > ENT;  Sal(a, b) = 0,  otherwise

where ENT is the mean of Lp(a, b). This yields the modified gray level co-occurrence matrix Sal(a, b).
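A numpy sketch of the three processing steps (normalize, log-transform, threshold at the mean); assigning Lp = 0 to pairs that never occur is an implementation choice the text does not specify:

```python
import numpy as np

def modified_glcm(coMat: np.ndarray) -> np.ndarray:
    """Normalize, log-transform, and mean-threshold the co-occurrence matrix."""
    P = coMat / coMat.sum()               # P(a, b): pair probabilities
    Lp = np.zeros_like(P)
    nz = P > 0
    Lp[nz] = 2.0 * np.log(P[nz]) ** 2     # 2 * [-ln P]^2 == 2 * (ln P)^2
    ENT = Lp.mean()                       # "ENT", the mean of Lp
    return np.where(Lp > ENT, Lp - ENT, 0.0)
```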

(4) Map the modified co-occurrence matrix into the preliminary detection image SalPre:

(formula images: the definitions of U(a, b) and SalMap(a, b))

where U(a, b) is the mean of the (a, b) pixel pairs within the w × w neighborhood and SalMap(a, b) is a saliency detection map of size Ny × Nx; the map is then normalized.

(5) Combine SalMap with SalPre to obtain a target extraction image in which the infrared target is more prominent and the background is flatter:

SalFinal(a, b) = SalMap(a, b) .* SalPre(a, b)

where .* denotes element-wise multiplication.

Step S2. Perform NSCT decomposition on the visible light image I1 and the infrared image I2, obtaining for each one low-frequency subband image and a series of high-frequency subband images:

[L1, H1(l,k)] = NSCT(I1)

[L2, H2(l,k)] = NSCT(I2)

where I1 and I2 are the source images, L1 and L2 are the low-frequency subband coefficients of the visible light and infrared images respectively, H1(l,k) and H2(l,k) are the high-frequency subband coefficients, k is the number of directions at each decomposition level (k = 8 in this embodiment), and l is the number of decomposition scales, computed adaptively from the image size as:

l = [log2(min(Ny, Nx)) - 7]

where [·] denotes rounding up.
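For instance, the adaptive scale count can be computed as below; math.ceil implements the rounding-up bracket, and the example size is only illustrative:

```python
import math

def nsct_scales(ny: int, nx: int) -> int:
    """l = ceil(log2(min(Ny, Nx)) - 7); e.g. a 270 x 360 image gives l = 2."""
    return math.ceil(math.log2(min(ny, nx)) - 7.0)
```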

The NSCT consists of two parts: the non-subsampled pyramid (NSP) and the non-subsampled directional filter bank (NSDFB). Its decomposition is sketched in Fig. 3.

The NSP decomposes the source image into a low-frequency subband and a high-frequency subband; after each level, the NSP is applied again to that level's low-frequency image. After N levels of decomposition this finally yields 1 low-frequency subband and (2¹ + 2² + ... + 2^N) high-frequency subbands, as shown in Fig. 4.

The NSDFB performs the multi-directional decomposition of the high-frequency subband images, combining singular points along the same direction into NSCT coefficients; its decomposition is sketched in Fig. 5.

Compared with the conventional contourlet transform (CT), the NSCT not only retains multi-directionality and anisotropy but also, because it uses no up- or down-sampling, avoids sampling distortion and is therefore shift-invariant.

Step S3. Fuse the low-frequency subbands with the contrast-preserving rule, as follows:

(1) Let mean(L1) and mean(L2) be the means of the low-frequency subband coefficients of the visible light and infrared images. First process the original low-frequency coefficients:

w1 = L1 - mean(L1)

w2 = L2 - mean(L2)

(2) For the processed coefficients w1 and w2 we design the following fusion rule:

w = (w2 - w1) × 0.5 + 0.5

where w is the final low-frequency fusion weight of the infrared image; the fused low-frequency subband image LF is obtained by weighting the two subbands:

LF = w .* L2 + (1 - w) .* L1

Step S3 (continued). Fuse the high-frequency subbands with the improved difference-of-Gaussians method, as follows:

(1) Let mean(H1(l,k)) and mean(H2(l,k)) be the means of the high-frequency subband coefficients of the visible light and infrared images in the k-th direction of the l-th level. First compute:

a1 = H1(l,k) - mean(H1(l,k))

a2 = H2(l,k) - mean(H2(l,k))

where a1 and a2 are the preliminary weight coefficients;

(2) Then apply Gaussian filtering to the original high-frequency subband coefficients:

b1 = G * H1(l,k)

b2 = G * H2(l,k)

where b1 and b2 are the Gaussian-filtered high-frequency subband images and G is a smoothing Gaussian filter with template size hsize = 11 × 11 and standard deviation sigma = 5;

(3) Finally, we set the following fusion rule:

s1 = b1 - a1

s2 = b2 - a2

where s1 and s2 are the final fusion weights; at each position the subband coefficient with the larger weight is selected:

HF(l,k) = H1(l,k) where s1 ≥ s2, and HF(l,k) = H2(l,k) where s1 < s2

where HF(l,k) is the fused high-frequency subband coefficient.

Step S4. The target saliency map is mapped onto the fused low-frequency subband image by addition:

LF' = LF + SalFinal

Step S5. Applying the inverse NSCT to the fused low- and high-frequency subband coefficients yields the fused image:

fF(x, y) = NSCT⁻¹(LF', HF(l,k))

where fF(x, y) is the complete fused image.

The present invention uses the following six objective image-fusion evaluation metrics to verify the effectiveness of the fusion method.

(1) Mutual Information (MI)

Mutual information measures how much information from the source images is transferred to the fused image; the larger the value, the better the fusion. It is defined as:

MI = MI_AF + MI_BF

MI_AF = Σ_{f,a} P_FA(f, a) × log2( P_FA(f, a) / (P_F(f) × P_A(a)) )

MI_BF = Σ_{f,b} P_FB(f, b) × log2( P_FB(f, b) / (P_F(f) × P_B(b)) )

where P_A(a) and P_B(b) are the marginal probability densities of the source images A and B, P_F(f) is the probability density of the fused image F, and P_FA(f, a) and P_FB(f, b) are the joint probability densities of F with A and B respectively. MI_AF and MI_BF are the mutual information between each source image and the fused image.
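A histogram-based numpy sketch of one MI term; the full metric is MI = mutual_information(A, F) + mutual_information(B, F). Estimating the densities with 256-bin histograms is an assumption:

```python
import numpy as np

def mutual_information(src: np.ndarray, fused: np.ndarray, bins: int = 256) -> float:
    """MI between one source image and the fused image (MI_AF or MI_BF)."""
    joint, _, _ = np.histogram2d(src.ravel(), fused.ravel(), bins=bins)
    p_joint = joint / joint.sum()
    p_src = p_joint.sum(axis=1, keepdims=True)   # marginal of the source
    p_fus = p_joint.sum(axis=0, keepdims=True)   # marginal of the fused image
    nz = p_joint > 0                             # skip empty cells
    return float(np.sum(p_joint[nz] * np.log2(p_joint[nz] / (p_src @ p_fus)[nz])))
```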

(2) Average Gradient (AG)

The average gradient reflects the ability of the fused image to express fine-detail contrast and texture variation, and hence its sharpness; the larger the value, the clearer the fused image. It is defined as:

AG = (1 / ((M - 1)(N - 1))) × Σ_{x,y} sqrt( (ΔIx² + ΔIy²) / 2 )

where ΔIx = f(x, y) - f(x-1, y) and ΔIy = f(x, y) - f(x, y-1).
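A numpy sketch of AG using the backward differences defined above; the mean is taken over the (M-1) × (N-1) region where both differences exist, matching the normalization:

```python
import numpy as np

def average_gradient(f: np.ndarray) -> float:
    """AG over the region where both backward differences are defined."""
    f = np.asarray(f, dtype=np.float64)
    dx = f[1:, 1:] - f[:-1, 1:]   # f(x, y) - f(x-1, y): first-axis difference
    dy = f[1:, 1:] - f[1:, :-1]   # f(x, y) - f(x, y-1): second-axis difference
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))
```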

(3) Standard Deviation (SD)

The gray standard deviation reflects how the gray values are dispersed around the gray mean; the larger the standard deviation, the more spread out the gray-level distribution, the greater the image contrast, and the more information is available. It is defined as:

SD = sqrt( (1 / (M × N)) × Σ_{x,y} ( f(x, y) - μ )² )

where μ is the gray mean of the fused image.

(4) Information Entropy (IE)

Information entropy measures the richness of the image information; the larger the entropy, the richer the information contained in the fused image and the better the fusion. It is defined as:

IE = -Σ_{i=0}^{L-1} p_i × log2(p_i)

where L is the number of gray levels and p_i is the probability of each gray level.
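A sketch of IE over a 256-level histogram; empty bins contribute nothing, since p × log2(p) tends to 0 as p goes to 0:

```python
import numpy as np

def information_entropy(f: np.ndarray, levels: int = 256) -> float:
    """IE = -sum_i p_i * log2(p_i) over the gray-level histogram."""
    hist, _ = np.histogram(f, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                  # 0 * log2(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))
```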

(5) Spatial Frequency (SF)

The spatial frequency reflects the overall activity of the fused image in the spatial domain and its ability to render fine-detail contrast; the larger the SF, the clearer the fused image.

SF = sqrt( RF² + CF² )

where RF and CF are the row and column frequencies:

RF = sqrt( (1 / (M × N)) × Σ_{x,y} [ f(x, y) - f(x, y-1) ]² )

CF = sqrt( (1 / (M × N)) × Σ_{x,y} [ f(x, y) - f(x-1, y) ]² )
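A numpy sketch of SF; note that the means below divide by the number of valid differences rather than exactly M × N, a negligible boundary deviation from the formula:

```python
import numpy as np

def spatial_frequency(f: np.ndarray) -> float:
    """SF = sqrt(RF^2 + CF^2) from row and column first differences."""
    f = np.asarray(f, dtype=np.float64)
    rf2 = np.mean((f[:, 1:] - f[:, :-1]) ** 2)   # row frequency (horizontal)
    cf2 = np.mean((f[1:, :] - f[:-1, :]) ** 2)   # column frequency (vertical)
    return float(np.sqrt(rf2 + cf2))
```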

(6) Visual Information Fidelity for Fusion (VIFF)

VIFF measures how much of the information in the source images is present in the fused image, and accurately reflects the degree of distortion and enhancement in the fused image; the larger the VIFF value, the better the fusion.

To demonstrate the superiority of the present invention, the proposed method is compared with 7 existing classical methods: the mean method, the discrete wavelet transform (DWT), principal component analysis (PCA), the gradient pyramid transform (Grad), a method based on anisotropic diffusion and the Karhunen-Loève transform (KL), a method based on adaptive sparse representation (ASR), and a multi-sensor image fusion method based on fourth-order partial differential equations (EDF).

Two embodiments are provided for illustration, and the fused images are evaluated with the six objective metrics above. The simulations are carried out in Matlab.

Embodiment 1: the "Camp" images used in this experiment are shown in Fig. 6, where Fig. 6(a) is the visible light image and Fig. 6(b) is the infrared image. The experimental results are shown in Table 1:

Table 1. Objective evaluation metrics for the "Camp" image fusion

(table image: metric values of the eight compared methods on the "Camp" images)

Embodiment 2: the "Trees" images used in this experiment are shown in Fig. 7, where Fig. 7(a) is the visible light image and Fig. 7(b) is the infrared image. The experimental results are shown in Table 2:

Table 2. Objective evaluation metrics for the "Trees" image fusion

(table image: metric values of the eight compared methods on the "Trees" images)

As Tables 1 and 2 show, compared with the other 7 classical fusion algorithms the proposed fusion method performs best on the objective evaluation metrics; the standard deviation (SD), spatial frequency (SF), and visual information fidelity (VIFF) scores stand out in particular, clearly surpassing the other methods. In other words, the method highlights the infrared target while still handling texture detail, which favors human observation and yields higher fused image quality. The six objective metrics chosen in the two embodiments evaluate the fused images from different angles, covering information content, image sharpness, and visual effect; the evaluation is comprehensive and fully demonstrates the superiority of the proposed method.

The embodiments of the present invention have been described in detail above. The invention is not, however, limited to these embodiments; various changes can be made within the knowledge of those of ordinary skill in the art without departing from the spirit of the invention.

Claims (6)

1. An infrared and visible light image fusion method based on the gray level co-occurrence matrix, characterized by comprising the following steps:

Step S1: perform gray level co-occurrence matrix analysis on the infrared image I2, extract the infrared target, and obtain a target saliency map;

Step S2: perform NSCT decomposition on the visible light image I1 and the infrared image I2, obtaining for each one low-frequency subband image and a series of high-frequency subband images;

Step S3: fuse all low-frequency subband images with a contrast-preserving rule to obtain the fused low-frequency image, and fuse all high-frequency subbands with an improved difference-of-Gaussians method to obtain the fused high-frequency subband images;

Step S4: map the target saliency map onto the fused low-frequency subband image;

Step S5: apply the inverse NSCT to the fused low-frequency and high-frequency subbands to obtain the fused image.

2. The infrared and visible light image fusion method based on the gray level co-occurrence matrix of claim 1, characterized in that the infrared image target in step S1 is extracted as follows:

(1) perform preliminary target extraction: take the absolute value of the difference between each pixel value of the source infrared image and its gray mean as the preliminary target extraction image SalPre;

(2) compute the gray level co-occurrence matrix coMat of SalPre, a symmetric matrix; if a and b are both gray values of the SalPre image, then (a, b) is a gray value pair; each element of the co-occurrence matrix counts, for every pixel with value a, the number of pixels with value b within its neighborhood of size w, with w = 3 in the present invention;

(3) process the gray level co-occurrence matrix coMat to obtain the modified co-occurrence matrix, specifically: first normalize coMat; then apply a logarithmic function to obtain Lp(a, b); finally subtract the mean from every element greater than the mean and set every element less than or equal to the mean to zero, yielding the modified co-occurrence matrix Sal(a, b);

(4) map the modified co-occurrence matrix into the preliminary target extraction image according to the following formulas:

(formula images: the definitions of U(a, b) and SalMap(a, b))

where U(a, b) is the mean of the (a, b) pixel pairs within the w × w neighborhood and SalMap(a, b) is a saliency detection map of the same size as the source image, which is then normalized;

(5) combine SalMap with SalPre to obtain the final target extraction image:

SalFinal(a, b) = SalMap(a, b) .* SalPre(a, b)

where .* denotes element-wise multiplication.

3. The infrared and visible light image fusion method based on the gray level co-occurrence matrix of claim 1, characterized in that the NSCT decomposition rule in step S2 is as follows: the number of directions at each decomposition level is 8, and the number of decomposition scales adapts to the image size according to

l = [log2(min(Ny, Nx)) - 7]

where l is the number of decomposition scales, the image size is Ny × Nx, and [·] denotes rounding up.

4. The infrared and visible light image fusion method based on the gray level co-occurrence matrix of claim 1, characterized in that the fusion rule for the low-frequency subband coefficients in step S3 is set as follows: subtract from the coefficients of the visible light and infrared low-frequency subbands their respective means, obtaining coefficient matrices w1 and w2; the fusion weight matrix of the infrared image is then w = (w2 - w1) × 0.5 + 0.5 and that of the visible light image is 1 - w; finally, multiply the weights with the low-frequency subbands to obtain the fused low-frequency subband coefficients.

5. The infrared and visible light image fusion method based on the gray level co-occurrence matrix of claim 1, characterized in that the fusion rule for the high-frequency subband coefficients in step S3 is set as follows: for each level of high-frequency subband coefficients, subtract from the visible light and infrared high-frequency subband coefficients their respective means, obtaining preliminary fusion coefficients a1 and a2; then apply Gaussian filtering to the original high-frequency subband coefficients with a filter template of size 11 × 11 and standard deviation 5, obtaining filtered coefficients b1 and b2; the fusion weight of the visible light image is s1 = b1 - a1 and likewise that of the infrared image is s2 = b2 - a2; the fused high-frequency coefficients are selected according to the magnitudes of these weights.

6. The infrared and visible light image fusion method based on the gray level co-occurrence matrix of claim 1, characterized in that in step S4 the target saliency map is mapped onto the fused low-frequency subband image by addition.
3.如权利要求1所述的基于灰度共生矩阵的红外与可见光图像融合方法,其特征在于:所述步骤S2中NSCT分解规则如下:每级分解的方向数为8,分解尺度数量自适应于图像的尺寸,公式为:3. the infrared and visible light image fusion method based on gray level co-occurrence matrix as claimed in claim 1, it is characterized in that: in described step S2, NSCT decomposition rule is as follows: the number of directions decomposed at every level is 8, and the number of decomposition scales is adaptive Depending on the size of the image, the formula is: l=[log2(min(Ny,Nx))-7]l=[log 2 (min(N y , N x ))-7] 式中,l为分解尺度数量,图像的尺寸为Ny×Nx,[ ]为向上取整。In the formula, l is the number of decomposition scales, the size of the image is N y ×N x , and [ ] is rounded up. 4.如权利要求1所述的基于灰度共生矩阵的红外与可见光图像融合方法,其特征在于:所述步骤S3中低频子带系数的融合规则设定如下:将可见光图像低频子带与红外图像低频子带的系数分别与其平均值做差,得到系数矩阵w1、w2;那么红外图像的融合权重矩阵为w=(w2-w1)×0.5+0.5,可见光图像的融合权重矩阵为1-w;最后将权重与低频子带相乘得到融合后的低频子带系数。4. the infrared and visible light image fusion method based on gray level co-occurrence matrix as claimed in claim 1, it is characterized in that: the fusion rule of the low frequency subband coefficient in described step S3 is set as follows: the visible light image low frequency subband and infrared The coefficients of the low-frequency subbands of the image are respectively compared with their average values to obtain coefficient matrices w 1 and w 2 ; then the fusion weight matrix of the infrared image is w=(w 2 -w 1 )×0.5+0.5, and the fusion weight matrix of the visible light image is 1-w; finally, the weight is multiplied by the low-frequency sub-band to obtain the fused low-frequency sub-band coefficient. 5.如权利要求1所述的基于灰度共生矩阵的红外与可见光图像融合方法,其特征在于:所述步骤S3中高频子带系数融合规则设定如下:对每一层高频子带系数,将可见光图像高频子带与红外图像高频子带的系数分别与其平均值做差,得到初步融合系数a1、a2;再对原高频子带系数进行高斯滤波,其中,滤波模板大小为11×11,标准差为5;则滤波后的高频子带系数为b1、b2,可见光图像的融合权重为s1=b1-a1,同样地,红外图像的融合权重为s2=b2-a2;根据权重大小,选择融合后的高频系数。5. The infrared and visible light image fusion method based on gray level co-occurrence matrix as claimed in claim 1, it is characterized in that: in described step S3, the high frequency subband coefficient fusion rule is set as follows: for each layer of high frequency subband coefficients , the coefficients of the high-frequency sub-band of the visible light image and the high-frequency sub-band of the infrared image are respectively different from their average values to obtain the preliminary fusion coefficients a 1 , a 2 ; then Gaussian filtering is performed on the original high-frequency sub-band coefficients, wherein the filtering template The size is 11×11 and the standard deviation is 5; the filtered high-frequency subband coefficients are b 1 and b 2 , and the fusion weight of the visible light image is s 1 =b 1 -a 1 . Similarly, the fusion weight of the infrared image is is s 2 =b 2 -a 2 ; the fused high-frequency coefficients are selected according to the weight. 6.如权利要求1所述的基于灰度共生矩阵的红外与可见光图像融合方法,其特征在于:所述步骤S4中采用相加的方法将目标显著图映射到融合后的低频子带图像上。6. The infrared and visible light image fusion method based on gray-level co-occurrence matrix as claimed in claim 1, it is characterized in that: in described step S4, adopt the method of addition to map the target saliency map to the low-frequency subband image after fusion .
CN202010678896.4A 2020-07-04 2020-07-04 A method of infrared and visible light image fusion based on gray level co-occurrence matrix Active CN111815550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010678896.4A CN111815550B (en) 2020-07-04 2020-07-04 A method of infrared and visible light image fusion based on gray level co-occurrence matrix

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010678896.4A CN111815550B (en) 2020-07-04 2020-07-04 A method of infrared and visible light image fusion based on gray level co-occurrence matrix

Publications (2)

Publication Number Publication Date
CN111815550A 2020-10-23
CN111815550B CN111815550B (en) 2023-09-15

Family

ID=72864784

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010678896.4A Active CN111815550B (en) 2020-07-04 2020-07-04 A method of infrared and visible light image fusion based on gray level co-occurrence matrix

Country Status (1)

Country Link
CN (1) CN111815550B (en)



Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020931A (en) * 2012-11-27 2013-04-03 西安电子科技大学 Multisource image fusion method based on direction wavelet domain hidden Markov tree model
CN103914678A (en) * 2013-01-05 2014-07-09 中国科学院遥感与数字地球研究所 Abandoned land remote sensing recognition method based on texture and vegetation indexes
CN103455990A (en) * 2013-03-04 2013-12-18 深圳信息职业技术学院 Image fusion method with visual attention mechanism and PCNN combined
CN106327459A (en) * 2016-09-06 2017-01-11 四川大学 Visible light and infrared image fusion algorithm based on UDCT (Uniform Discrete Curvelet Transform) and PCNN (Pulse Coupled Neural Network)
CN106952245A (en) * 2017-03-07 2017-07-14 深圳职业技术学院 A method and system for processing aerial visible light images
CN109376750A (en) * 2018-06-15 2019-02-22 武汉大学 A Remote Sensing Image Classification Method Integrating Mid-Wave Infrared and Visible Light
CN108961154A (en) * 2018-07-13 2018-12-07 福州大学 Based on the solar cell hot spot detection method for improving non-down sampling contourlet transform
CN109242888A (en) * 2018-09-03 2019-01-18 中国科学院光电技术研究所 Infrared and visible light image fusion method combining image significance and non-subsampled contourlet transformation
CN109360182A (en) * 2018-10-31 2019-02-19 广州供电局有限公司 Image interfusion method, device, computer equipment and storage medium
CN111179208A (en) * 2019-12-09 2020-05-19 天津大学 Infrared-visible image fusion method based on saliency map and convolutional neural network
AU2020100178A4 (en) * 2020-02-04 2020-03-19 Huang, Shuying DR Multiple decision maps based infrared and visible image fusion

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
DONGMING WANG et al.: "A method based on an improved immune genetic algorithm for the feature fusion of the infrared and visible images", Journal of Computational Methods in Sciences and Engineering, vol. 18, pp. 591-603
ZHANG JIAN et al.: "Non-Subsampled Contourlets and Gray Level Co-occurrence Matrix based Images Segmentation", 2011 International Conference on Uncertainty Reasoning and Knowledge Engineering, pp. 168-170
ZHANG Renshang: "CT image feature extraction algorithm based on NSCT-GLCM", Computer Engineering and Applications, vol. 50, no. 11, pp. 159-162
YANG Yang et al.: "Feature fusion of infrared and visible light images based on principal component analysis", Journal of Shenyang Ligong University, vol. 31, no. 4, pp. 23-28
TU Yizhi et al.: "Infrared and visible light image fusion algorithm combining contrast enhancement and wavelet transform", Journal of Huaiyin Teachers College (Natural Science Edition), vol. 17, no. 3, pp. 230-234
RONG Chuanzhen et al.: "Research on decomposition and fusion methods for infrared and visible light images", Journal of Data Acquisition and Processing, vol. 34, no. 1, pp. 146-156
GAO Yinhan et al.: "Adaptive image fusion in the non-subsampled shearlet domain based on image quality assessment parameters", Journal of Jilin University (Engineering and Technology Edition), vol. 44, no. 1, pp. 225-234

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112487947A (en) * 2020-11-26 2021-03-12 西北工业大学 Low-illumination image target detection method based on image fusion and target detection network
CN112614082A (en) * 2020-12-17 2021-04-06 北京工业大学 Offshore medium-long wave infrared image fusion method
CN114037747A (en) * 2021-11-25 2022-02-11 佛山技研智联科技有限公司 Image feature extraction method and device, computer equipment and storage medium
CN116698855A (en) * 2023-08-07 2023-09-05 东莞市美格精密制造有限公司 Production quality detection method for liquid injection pneumatic valve
CN116698855B (en) * 2023-08-07 2023-12-05 东莞市美格精密制造有限公司 Production quality detection method for liquid injection pneumatic valve

Also Published As

Publication number Publication date
CN111815550B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN111815550B (en) A method of infrared and visible light image fusion based on gray level co-occurrence matrix
CN109636766B (en) Edge information enhancement-based polarization difference and light intensity image multi-scale fusion method
Li et al. Robust retinal image enhancement via dual-tree complex wavelet transform and morphology-based method
CN107862249B (en) A method and device for identifying bifurcated palm prints
Tang et al. MdedFusion: A multi-level detail enhancement decomposition method for infrared and visible image fusion
CN104268833B (en) Image interfusion method based on translation invariant shearing wave conversion
CN113837974B (en) A method for infrared image enhancement of power equipment in NSST domain based on improved BEEPS filtering algorithm
CN104504664B (en) The automatic strengthening system of NSCT domains underwater picture based on human-eye visual characteristic and its method
CN108399611A (en) Multi-focus image fusing method based on gradient regularisation
CN106327459A (en) Visible light and infrared image fusion algorithm based on UDCT (Uniform Discrete Curvelet Transform) and PCNN (Pulse Coupled Neural Network)
CN113298147B (en) Image fusion method and device based on regional energy and intuitionistic fuzzy set
CN110675340A (en) Single image defogging method and medium based on improved non-local prior
CN101853490A (en) A Bionic Image Restoration Method Based on Human Visual Characteristics
CN112184646B (en) An Image Fusion Method Based on Gradient Domain Oriented Filtering and Improved PCNN
CN114120176A (en) Behavior analysis method for fusion of far infrared and visible light video images
CN106530244A (en) Image enhancement method
Dharejo et al. A deep hybrid neural network for single image dehazing via wavelet transform
Chen et al. Blood vessel enhancement via multi-dictionary and sparse coding: Application to retinal vessel enhancing
CN102289670B (en) Image characteristic extraction method with illumination robustness
CN107403134A (en) The multiple dimensioned method for detecting infrared puniness target in figure domain based on the side of partial gradient three
CN106981059A (en) With reference to PCNN and the two-dimensional empirical mode decomposition image interfusion method of compressed sensing
CN103400360A (en) Multi-source image fusing method based on Wedgelet and NSCT (Non Subsampled Contourlet Transform)
CN114862710A (en) Infrared and visible light image fusion method and device
CN111507913A (en) An Image Fusion Algorithm Based on Texture Features
CN111768350A (en) Infrared image enhancement method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant