CN111815550A - Infrared and visible light image fusion method based on gray level co-occurrence matrix - Google Patents

Infrared and visible light image fusion method based on gray level co-occurrence matrix Download PDF

Info

Publication number
CN111815550A
CN111815550A (application CN202010678896.4A)
Authority
CN
China
Prior art keywords
image
infrared
gray level
band
visible light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010678896.4A
Other languages
Chinese (zh)
Other versions
CN111815550B (en)
Inventor
谭惜姿
郭立强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaiyin Normal University
Original Assignee
Huaiyin Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaiyin Normal University filed Critical Huaiyin Normal University
Priority to CN202010678896.4A priority Critical patent/CN111815550B/en
Publication of CN111815550A publication Critical patent/CN111815550A/en
Application granted granted Critical
Publication of CN111815550B publication Critical patent/CN111815550B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10052 Images from lightfield camera
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an infrared and visible light image fusion method based on a gray level co-occurrence matrix. First, gray level co-occurrence matrix analysis is performed on the infrared source image to obtain an infrared target saliency map. Second, the non-subsampled contourlet transform (NSCT) is applied to the visible light source image and the infrared source image; the decomposed low-frequency subband images are fused with a contrast-preserving rule, and the high-frequency subband images are fused with an improved Gaussian difference method. The target saliency map is then mapped onto the fused low-frequency subband image. Finally, the inverse NSCT is performed to obtain the final fused image. The method uses the texture-analysis capability of the gray level co-occurrence matrix to detect the saliency of the infrared target, can effectively extract the infrared target while retaining rich detail information, and improves the quality of the fused image. The objective evaluation indexes of the proposed method are superior to those of existing classical image fusion methods such as the wavelet transform and pyramid transform, and the method has strong robustness.

Description

Infrared and visible light image fusion method based on gray level co-occurrence matrix
Technical Field
The invention belongs to the technical field of multi-source image fusion, and particularly relates to an infrared and visible light image fusion method based on a gray level co-occurrence matrix.
Background
Infrared and visible light image fusion is an important part of image fusion research. An infrared image is a radiation image formed by a sensor from the infrared radiation emitted by objects and background, and can reveal hidden or camouflaged objects. A visible light image records the visible-light reflection characteristics of objects, contains a large amount of detail and texture information, and matches the characteristics of human vision.
The purpose of infrared and visible light image fusion is to obtain a complete image that both contains abundant detail information and accurately reflects the infrared target. The technology is therefore widely used in night imaging equipment to improve the night-time capability of people or machines; in addition, because fused infrared and visible light images are accurate, clear and complete, the technology is also applied in military reconnaissance, biometric recognition, medical imaging, remote sensing and other fields.
With the continuous development of computer and image processing technology, the most widely used fusion approach is still pixel-level fusion, which falls into two categories: spatial-domain fusion and transform-domain fusion. A typical algorithm of the former is principal component analysis; the latter includes pyramid transforms, wavelet transforms, and various multi-scale decomposition algorithms, such as methods based on the non-subsampled contourlet transform (NSCT). Other methods, such as compressed sensing (CS) and sparse representation (SR), also exist.
Among the above methods, NSCT is shift-invariant, effectively suppresses the pseudo-Gibbs phenomenon, and is a widely used image analysis tool. However, in practical applications of NSCT, either the texture information of the fused image is emphasized and the infrared target is neglected, or the infrared target is emphasized and texture details are lost; the two kinds of important information cannot be taken into account at the same time.
Disclosure of Invention
The invention aims to provide an infrared and visible light image fusion method based on gray level co-occurrence matrix analysis and the non-subsampled contourlet transform, so that the fused image not only retains detail information but also highlights the infrared target, improving the fusion quality.
To realize the fusion method, the invention fuses the registered visible light image I1 and the infrared image I2; both images are grayscale images of Ny×Nx pixels.
The specific fusion steps are as follows:
step S1: carrying out gray level co-occurrence matrix analysis on the infrared image I2 and extracting the infrared target to obtain a target saliency map;
step S2: performing NSCT decomposition on the visible light image I1 and the infrared image I2 to obtain, for each image, a low-frequency subband image and a series of high-frequency subband images;
step S3: fusing the low-frequency subband images with a contrast-preserving rule to obtain the fused low-frequency image, and fusing the high-frequency subbands with the improved Gaussian difference method to obtain the fused high-frequency subband images;
step S4: mapping the target saliency map to the fused low-frequency subband image;
step S5: and performing NSCT inverse transformation on the fused low-frequency sub-band and high-frequency sub-band to obtain a fused image.
Further, the method for extracting the infrared image target in step S1 is as follows:
(1) performing primary target extraction, and taking an absolute value of a difference between a pixel value of the source infrared image and a gray average value thereof as a primary target extraction image SalPre;
(2) a gray level co-occurrence matrix coMat of SalPre is calculated; this matrix is symmetric. If a and b are gray values of the SalPre image, (a, b) is a gray value pair. Each element of the gray level co-occurrence matrix is the count, for each pixel with value a, of the pixels with value b within a neighborhood of size w, where w = 3;
(3) processing the gray level co-occurrence matrix coMat to obtain a corrected gray level co-occurrence matrix; specifically: first, the gray level co-occurrence matrix coMat is normalized; then it is processed with a logarithmic function; finally, the average value is subtracted from the elements of the matrix that are larger than the average value, and the elements smaller than or equal to the average value are set to zero, yielding the corrected gray level co-occurrence matrix Sal(a, b);
(4) mapping the corrected gray level co-occurrence matrix to the primary target extraction image according to the following formula:
(equation images for U(a, b) and SalMap(a, b) in the original)
wherein U(a, b) is the average value of the (a, b) pixel pair in the w×w neighborhood, and SalMap(a, b) is a saliency detection map having the same size as the source image, which is then normalized;
(5) combining the SalMap and the SalPre to obtain a target extraction image with a more prominent infrared target and a more gentle background, wherein the specific formula is as follows:
SalFinal(a,b)=SalMap(a,b).*SalPre(a,b)
wherein .* denotes multiplication of the corresponding values.
Further, the NSCT decomposition rule in step S2 is as follows: the number of directions of each level of decomposition is 8, the number of decomposition scales is adaptive to the size of the image, and the formula is as follows:
l=[log2(min(Ny,Nx))-7]
wherein, l is the number of decomposition scales, [ ] is rounding up.
Further, the fusion rule for the low-frequency subband coefficients in step S3 is set as follows: the coefficients of the visible light image low-frequency subband and the infrared image low-frequency subband are each reduced by their own average value to obtain coefficient matrices w1 and w2; the fusion weight matrix of the infrared image is then w = (w2 - w1) × 0.5 + 0.5, and the fusion weight matrix of the visible light image is 1 - w; the fused low-frequency subband coefficients are obtained by multiplying the weights by the low-frequency subbands.
Further, the high-frequency subband coefficient fusion rule in step S3 is set as follows: for each layer of high-frequency subband coefficients, the coefficients of the visible light image high-frequency subband and the infrared image high-frequency subband are each reduced by their own average value to obtain preliminary fusion coefficients a1 and a2; Gaussian filtering is applied to the original high-frequency subband coefficients, with a filtering template of size 11 × 11 and standard deviation 5, giving filtered coefficients b1 and b2; the fusion weight of the visible light image is s1 = b1 - a1 and, similarly, the fusion weight of the infrared image is s2 = b2 - a2; the fused high-frequency coefficients are selected according to these weights.
Further, in step S4, an addition method is used to map the target saliency map onto the fused low-frequency subband image.
Compared with the prior art, the invention has the following beneficial effects:
First, before the two source images are fused, the method uses the texture-analysis capability of the gray level co-occurrence matrix to detect the saliency of the infrared target and maps the target into the fused image; this step makes the infrared target from the source image more salient and better preserved in the fused image.
Secondly, the invention adopts a low-frequency fusion rule for keeping the contrast information, so that the fused image can better accord with the visual characteristics of human eyes.
Thirdly, aiming at the high-frequency sub-band image containing a large amount of detail information, the invention adopts the improved Gaussian difference algorithm, so that the detail information in the source image is more completely reserved, and the halo around the infrared target can be effectively reduced.
Finally, experiments show that the objective evaluation index of the image fusion method provided by the invention is superior to that of the existing popular image fusion methods such as wavelet transformation, pyramid transformation and the like, and the fused image is more in line with the requirements of human vision.
Drawings
FIG. 1 is a block diagram of the fusion process of the present invention;
FIG. 2 is a block diagram of an infrared target extraction process;
FIG. 3 is a NSCT structural framework diagram;
FIG. 4 is a block diagram of NSP after 3-level decomposition;
FIG. 5 shows the partition of the image frequency domain by NSDFB at 3 levels;
FIG. 6 is a "Camp" image of example 1 of the present invention, FIG. 6(a) is a visible light image, and FIG. 6(b) is an infrared image;
fig. 7 is a "Tree" image of example 2 of the present invention, fig. 7(a) is a visible light image, and fig. 7(b) is an infrared image.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples for the purpose of facilitating understanding and practicing the invention by those of ordinary skill in the art, it being understood that the examples described herein are for the purpose of illustration and explanation and are not to be construed as limiting the invention.
The flow block diagram of the infrared and visible light image fusion method based on the gray level co-occurrence matrix is shown in FIG. 1. First, the infrared target is extracted from the source infrared image to obtain a target saliency map. Second, non-subsampled contourlet transform (NSCT) multi-scale decomposition is applied to the source infrared image and the visible light image, giving for each a low-frequency subband image and a series of high-frequency subband images. Then, a contrast-preserving fusion rule is used for the low-frequency subband images, and an improved Gaussian difference method is used to fuse the high-frequency subband coefficients, producing the fused low-frequency subband image and a series of fused high-frequency subband images. Next, the target saliency map is mapped onto the low-frequency subband image. Finally, the fused image is obtained by the inverse NSCT. The specific steps are as follows:
Step S1: infrared target extraction is performed on the source infrared image to obtain a target saliency map, as shown in FIG. 2, specifically including the following steps:
(1) Preliminary target extraction. Let I2 be the infrared image and mean(I2) its gray-level average; the preliminary target extraction image is calculated according to the following formula:
SalPre=|I2-mean(I2)|
where SalPre is the preliminary target extraction image, of size Ny×Nx.
Although the preliminary extraction enhances the contrast of the infrared target, the background still contains considerable interference that makes it hard to identify the infrared target, so the target needs to be extracted further.
(2) Let the gray levels of I2 be {0, 1, 2, ..., Q-1}, and calculate the gray level co-occurrence matrix of the SalPre image as follows:
coMat=F(a,b)
where coMat is a symmetric matrix of size Q×Q, a and b are gray values of the SalPre image, and (a, b) is a gray value pair. In the SalPre image, for each pixel with value a, the number of pixels with value b within a neighborhood of size w (w = 3 in the embodiment of the invention) is counted, and the result is stored in the coMat matrix.
(3) Processing the gray level co-occurrence matrix coMat to obtain a corrected gray level co-occurrence matrix; the method specifically comprises the following steps:
first, the coMat matrix is normalized, the formula is as follows:
P(a,b)=coMat(a,b)/Σa,b coMat(a,b)
where P(a, b) is the probability that the gray value pair (a, b) appears in the coMat matrix;
the gray level co-occurrence matrix (GLCM) refers to a joint probability distribution of two gray level pixels in an image that are separated by a distance d. For an image with the size of M multiplied by N, the gray level co-occurrence matrix calculation steps are as follows:
let the gray value of the pixel point located at (x, y) be g1And the gray value of the pixel point positioned at (x + i, y + j) is g2(ii) a Moving (x, y) over the entire image will result in a different (g)1,g2) Gray value pair, count each (g)1,g2) The number of occurrences may be all (g) if the gray scale level of the whole image is l1,g2) Is combined into an l x l matrix, finally with (g)1,g2) The total number of occurrences normalizes it to the probability of occurrence P (g)1,g2) Such a matrix is a gray level co-occurrence matrix.
For a scanning window of size 3 × 3, when i is 1 and j is 0, the pixel pair is horizontal, i.e. a 0 ° scan; when i is 1 and j is 1, the pixel pair is located on the upper right diagonal, i.e. 45 ° scanning, and so on, we can get the gray level co-occurrence matrix in a specific direction. In the method used in the present invention, 8 directions (0 °, 45 °, 90 °, 135 °, 180 °, 225 °, 270 °, and 315 °) are calculated. For an image with slow gray value change, such as a background image, the diagonal value of the gray level co-occurrence matrix is larger, and for an image with severe gray value change, such as detail and abrupt change, the diagonal value of the gray level co-occurrence matrix is smaller and the values on both sides are larger.
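As a concrete illustration, the following Python sketch accumulates a gray level co-occurrence matrix over these 8 scan directions; it is an assumption made for illustration only (the patent reports a Matlab implementation), and it expects integer gray levels in [0, levels).

```python
import numpy as np

# 8 scan directions: 0, 45, 90, 135, 180, 225, 270 and 315 degrees
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def glcm(img, offsets=OFFSETS, levels=256):
    """Accumulate a gray level co-occurrence matrix over the given (di, dj) offsets."""
    mat = np.zeros((levels, levels), dtype=np.float64)
    h, w = img.shape
    for di, dj in offsets:
        # region where both the pixel and its displaced neighbour lie inside the image
        y0, y1 = max(0, -di), h - max(0, di)
        x0, x1 = max(0, -dj), w - max(0, dj)
        src = img[y0:y1, x0:x1]
        dst = img[y0 + di:y1 + di, x0 + dj:x1 + dj]
        np.add.at(mat, (src.ravel(), dst.ravel()), 1)   # count each (g1, g2) pair
    return mat
```

Because the offsets come in opposite pairs, e.g. (0, 1) and (0, -1), the accumulated matrix is symmetric, consistent with the symmetric coMat described above.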
Second, a smaller value in P(a, b) indicates a sharper gray-value change and a larger value a slower change; because small values in P(a, b) are inconvenient for the analysis, a logarithmic transform is applied using the following formula:
Lp(a,b)=2×[-ln(P(a,b))]^2
In this way, the values in Lp(a, b) are proportional to the severity of the gray-value change: the sharper the change, the larger the value.
Finally, in order to highlight the salient region, i.e., to increase the difference between the salient region and the background image, the following is specified:
Sal(a,b)=Lp(a,b)-ENT, if Lp(a,b)>ENT; Sal(a,b)=0, otherwise
where ENT is the average of Lp(a, b). This yields the corrected gray level co-occurrence matrix Sal(a, b).
(4) Mapping the corrected gray level co-occurrence matrix into a preliminary detection image SalPre:
(equation images for U(a, b) and SalMap(a, b) in the original)
where U(a, b) is the average of the (a, b) pixel pair in the w×w neighborhood, and SalMap(a, b) is a saliency detection map of size Ny×Nx, which is then normalized.
(5) Combining the SalMap and the SalPre to obtain a target extraction image with a more prominent infrared target and a more gentle background, wherein the specific formula is as follows:
SalFinal(a,b)=SalMap(a,b).*SalPre(a,b)
wherein .* denotes multiplication of the corresponding values.
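Putting the correction and combination stages together, a minimal NumPy sketch is given below; step (4), the mapping of Sal(a, b) back to pixel positions through the neighborhood mean U(a, b), is omitted because its formulas appear only as equation images in the original, and the eps guard against log(0) is an added assumption.

```python
import numpy as np

def correct_glcm(comat, eps=1e-12):
    """Step (3): normalise coMat, apply Lp(a,b) = 2*[-ln P(a,b)]^2,
    and keep only the entries that exceed the mean ENT."""
    p = comat / max(comat.sum(), eps)           # P(a, b)
    lp = 2.0 * (-np.log(p + eps)) ** 2          # small probabilities -> large values
    ent = lp.mean()                             # ENT: mean of Lp(a, b)
    return np.where(lp > ent, lp - ent, 0.0)    # corrected matrix Sal(a, b)

def combine(sal_map, sal_pre):
    """Step (5): SalFinal = SalMap .* SalPre (element-wise product)."""
    return sal_map * sal_pre
```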
Step S2: NSCT decomposition is applied to the visible light image I1 and the infrared image I2 to obtain, for each source image, a low-frequency subband image and a series of high-frequency subband images:
{LV, HV(l,k)}=NSCT(I1)
{LI, HI(l,k)}=NSCT(I2)
where I1 and I2 are the source images, LV and LI are the low-frequency subband coefficients of the visible light image and the infrared image, and HV(l,k) and HI(l,k) are their high-frequency subband coefficients at scale l and direction k. In the embodiment of the present invention, k is set to 8 directions per scale; l is the number of decomposition scales and is calculated with the following formula so that it adapts to the image size:
l=[log2(min(Ny,Nx))-7]
wherein [ ] is rounded upward.
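For concreteness, the adaptive scale count can be sketched as follows (a Python sketch under the stated rounding-up convention; the image size in the comment is only an example):

```python
import math

def nsct_scales(ny, nx):
    """Number of NSCT decomposition scales: l = ceil(log2(min(Ny, Nx)) - 7)."""
    return math.ceil(math.log2(min(ny, nx)) - 7)

# Example: for a 270 x 360 image, ceil(log2(270) - 7) = ceil(1.08) = 2 scales.
```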
NSCT consists of two parts: a non-subsampled pyramid (NSP) and a non-subsampled directional filter bank (NSDFB); the decomposition diagram of NSCT is shown in FIG. 3.
The NSP decomposes the source image into a low-frequency subband and a high-frequency subband; after each level of decomposition is finished, NSP decomposition is applied again to the low-frequency image of that level. After N levels of decomposition, 1 low-frequency subband and (2^1+2^2+...+2^N) high-frequency subbands are finally obtained, as shown in FIG. 4.
The NSDFB is used to complete multi-directional decomposition of the high-frequency subband image, and synthesize the singular points in the same direction into coefficients of the NSCT, and the decomposition diagram is shown in fig. 5.
Compared with the traditional contourlet transform (CT), NSCT not only retains multi-directionality and anisotropy, but also avoids up-sampling and down-sampling, which reduces distortion in the sampling process and gives the algorithm translation invariance.
Step S3: a contrast-preserving fusion rule is adopted for the low-frequency subbands, with the following steps:
(1) Let mean(LV) and mean(LI) be the average values of the low-frequency subband coefficients of the visible light image and the infrared image; first, the original low-frequency subband coefficients are processed as follows:
w1=LV-mean(LV)
w2=LI-mean(LI)
(2) From the processed coefficients w1 and w2, the following fusion rule is designed:
w=(w2-w1)×0.5+0.5
where w is the final low-frequency fusion weight of the infrared image and 1-w is the weight of the visible light image; the fused low-frequency subband image is:
LF=w.*LI+(1-w).*LV
where LF is the fused low-frequency subband image and .* denotes element-wise multiplication.
Step S4: the high-frequency subbands are fused with the improved Gaussian difference method, as follows:
(1) Let mean(HV(l,k)) and mean(HI(l,k)) be the average values of the high-frequency subband coefficients of the visible light image and the infrared image in the kth direction of the lth scale; first, the coefficients are processed as follows:
a1=HV(l,k)-mean(HV(l,k))
a2=HI(l,k)-mean(HI(l,k))
where a1 and a2 are the preliminary weight coefficients;
(2) Gaussian filtering is applied to the original high-frequency subband coefficients:
b1=G*HV(l,k)
b2=G*HI(l,k)
where b1 and b2 are the Gaussian-filtered high-frequency subband images, G is a Gaussian filter used to smooth the image, * denotes convolution, the template size hsize is 11 × 11, and the standard deviation σ is 5;
(3) Finally, the following fusion rule is set:
s1=b1-a1
s2=b2-a2
where s1 and s2 are the final fusion weights; the fused coefficient is taken from the subband with the larger weight:
HF(l,k)=HV(l,k), if s1≥s2; HF(l,k)=HI(l,k), otherwise
where HF(l,k) is the fused high-frequency subband coefficient.
The target saliency map is mapped onto the fused low-frequency subband image by addition:
LF'=LF+SalFinal
where LF' is the low-frequency subband used in the inverse transform and SalFinal is the target extraction image obtained in step S1.
Step S5: NSCT inverse transformation is performed on the fused low-frequency and high-frequency subband coefficients to obtain the fused image:
fF(x,y)=NSCT^(-1)(LF', HF(l,k))
where fF(x, y) is the complete fused image.
The invention adopts the following 6 image fusion objective evaluation indexes to verify the effectiveness of the fusion method.
(1) Mutual Information (MI)
The mutual information is used for calculating how much information of the source image is transferred to the fusion image, and the larger the mutual information value is, the better the fusion effect is. The definition is as follows:
MI=MIAF+MIBF
MIAF=ΣΣ PFA(f,a)·log2[PFA(f,a)/(PF(f)·PA(a))]
MIBF=ΣΣ PFB(f,b)·log2[PFB(f,b)/(PF(f)·PB(b))]
where PA(a) and PB(b) are the marginal probability densities of the source images A and B, PF(f) is the probability density of the fused image F, PFA(f, a) and PFB(f, b) are the joint probability densities of the fused image F with the source images A and B, and MIAF and MIBF are the mutual information between each source image and the fused image.
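A histogram-based sketch of this index is given below; it is a common estimator used as a stand-in, since the patent reproduces its own formulas only as equation images.

```python
import numpy as np

def mutual_info(src, fused, bins=256):
    """Mutual information between one source image and the fused image,
    estimated from the joint gray-level histogram."""
    joint, _, _ = np.histogram2d(src.ravel(), fused.ravel(), bins=bins)
    pxy = joint / joint.sum()             # joint probability of the gray-level pairs
    px = pxy.sum(axis=1, keepdims=True)   # marginal of the source image
    py = pxy.sum(axis=0, keepdims=True)   # marginal of the fused image
    nz = pxy > 0                          # skip zero bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# MI = mutual_info(visible, fused) + mutual_info(infrared, fused)
```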
(2) Average Gradient (AG)
The average gradient reflects the expressive power of the fused image on the contrast and texture change of the tiny details and also reflects the definition of the image. The larger the average gradient value, the sharper the fused image. The definition is as follows:
AG=(1/((M-1)(N-1)))·ΣΣ sqrt((ΔIx^2+ΔIy^2)/2)
where ΔIx = f(x,y)-f(x-1,y), ΔIy = f(x,y)-f(x,y-1), and the sums run over the pixels of the M×N fused image.
(3) Standard Deviation (SD)
The gray-level standard deviation reflects how much the image gray values are dispersed around their mean; the larger the standard deviation, the more dispersed the gray-level distribution, the higher the image contrast, and the more information can be exploited. It is defined as follows:
SD=sqrt((1/(M×N))·ΣΣ (f(x,y)-μ)^2)
where μ is the gray-level average of the fused image.
(4) Information Entropy (IE)
The information entropy can measure the richness of the image information, and the larger the information entropy is, the richer the information contained in the fused image is, and the better the fusion effect is. The definition is as follows:
IE=-Σ(i=0..L-1) pi·log2(pi)
where L represents the number of gray levels and pi is the probability of each gray level.
(5) Spatial Frequency (SF)
The spatial frequency reflects the overall activity of the spatial domain of the fused image and can reflect the description capability of the fused image on the contrast of tiny details. The larger the SF, the sharper the fused image.
SF=sqrt(RF^2+CF^2)
where RF and CF are the row frequency and column frequency:
RF=sqrt((1/(M×N))·ΣΣ [f(x,y)-f(x,y-1)]^2)
CF=sqrt((1/(M×N))·ΣΣ [f(x,y)-f(x-1,y)]^2)
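For concreteness, the single-image indexes AG, SD, IE and SF might be computed as in the following sketch; these are standard definitions used as a stand-in for the patent's equation images, and the exact normalisation constants are assumptions.

```python
import numpy as np

def avg_gradient(f):
    """Average gradient (AG), using dIx = f(x,y)-f(x-1,y), dIy = f(x,y)-f(x,y-1)."""
    dx = np.diff(f, axis=0)[:, 1:]        # vertical first differences
    dy = np.diff(f, axis=1)[1:, :]        # horizontal first differences
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))

def std_dev(f):
    """Gray-level standard deviation (SD) about the image mean."""
    return float(np.sqrt(np.mean((f - f.mean()) ** 2)))

def entropy(f, levels=256):
    """Information entropy (IE) of the gray-level histogram."""
    hist, _ = np.histogram(f, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def spatial_freq(f):
    """Spatial frequency SF = sqrt(RF^2 + CF^2) from row/column differences."""
    rf2 = np.mean(np.diff(f, axis=1) ** 2)   # row frequency term
    cf2 = np.mean(np.diff(f, axis=0) ** 2)   # column frequency term
    return float(np.sqrt(rf2 + cf2))
```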
(6) Visual Information Fidelity for Fusion (VIFF)
VIFF measures how much information from the source images is present in the fused image and accurately reflects the degree of distortion and enhancement in the fused image; the larger the VIFF value, the better the image fusion effect.
To illustrate the superiority of the present invention, the proposed method was compared with 7 existing classical methods, including: the mean method, the discrete wavelet transform method (DWT), the principal component analysis method (PCA), the gradient pyramid transform method (Grad), a method based on anisotropic diffusion and the Karhunen-Loève transform (KL), a method based on adaptive sparse representation (ASR), and a multi-sensor image fusion method based on fourth-order partial differential equations (EDF).
The present invention provides two examples for illustration, and the fused images are evaluated with the above 6 objective evaluation indexes. Matlab software is used for the simulation.
Example 1: fig. 6 shows "Camp" images used in this experiment, where fig. 6(a) is a visible light image and fig. 6(b) is an infrared image. The results of the experiment are shown in table 1:
TABLE 1 evaluation index for "Camp" image fusion
(Table 1 is reproduced as an image in the original document.)
Example 2: the "Trees" image used in this experiment is shown in fig. 7, where fig. 7(a) is a visible light image and fig. 7(b) is an infrared image. The results of the experiment are shown in table 2:
TABLE 2 "Trees" image fusion evaluation index
(Table 2 is reproduced as an image in the original document.)
As can be seen from the results in tables 1 and 2, compared with the other 7 classical fusion algorithms, the fusion method of the present invention achieves the best objective evaluation indexes; in particular, the gray standard deviation (SD), spatial frequency (SF) and visual information fidelity (VIFF) are markedly better than those of the other methods. In other words, the method of the invention highlights the infrared target while also handling texture details well, is more favorable for human observation, and yields a higher-quality fused image. The 6 objective evaluation indexes selected in the two embodiments evaluate the fused image from different perspectives, such as information content, image sharpness and visual effect; the evaluation is therefore comprehensive and fully demonstrates the superiority of the method.
The embodiments of the present invention have been described in detail. However, the present invention is not limited to the above-described embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the present invention.

Claims (6)

1. An infrared and visible light image fusion method based on a gray level co-occurrence matrix is characterized by comprising the following steps:
step S1: carrying out gray level co-occurrence matrix analysis on the infrared image I2, and extracting an infrared target to obtain a target saliency map;
step S2: performing NSCT decomposition on the visible light image I1 and the infrared image I2 to respectively obtain a low-frequency sub-band image and a series of high-frequency sub-band images;
step S3: fusing all low-frequency sub-band images with contrast information kept to obtain fused low-frequency images, and fusing all high-frequency sub-bands by adopting an improved Gaussian difference method to obtain fused high-frequency sub-band images;
step S4: mapping the target saliency map to the fused low-frequency subband image;
step S5: and performing NSCT inverse transformation on the fused low-frequency sub-band and high-frequency sub-band to obtain a fused image.
2. The infrared and visible light image fusion method based on the gray level co-occurrence matrix according to claim 1, characterized in that: the method for extracting the infrared image target in step S1 is as follows:
(1) performing primary target extraction, and taking an absolute value of a difference between a pixel value of the source infrared image and a gray average value thereof as a primary target extraction image SalPre;
(2) calculating a gray level co-occurrence matrix coMat of SalPre, wherein the matrix is a symmetric matrix; if a and b are gray values of the SalPre image, (a, b) is a gray value pair; each element of the gray level co-occurrence matrix is the count, for each pixel with value a, of the pixels with value b within a neighborhood of size w, where w = 3;
(3) processing the gray level co-occurrence matrix coMat to obtain a corrected gray level co-occurrence matrix; the method specifically comprises the following steps: firstly, normalizing a gray level co-occurrence matrix (coMat); then processing by using a logarithmic function to obtain Lp(a, b); finally, subtracting the average value from the elements which are larger than the average value in the matrix, and taking the elements which are smaller than or equal to the average value as zero to obtain a corrected gray level co-occurrence matrix Sal (a, b);
(4) mapping the corrected gray level co-occurrence matrix to the primary target extraction image according to the following formula:
(equation images for U(a, b) and SalMap(a, b) in the original)
wherein U(a, b) is the average value of the (a, b) pixel pair in the w×w neighborhood, and SalMap(a, b) is a saliency detection map having the same size as the source image, and the image is normalized;
(5) combining the SalMap and the SalPre to obtain a final target extraction image, wherein the specific formula is as follows:
SalFinal(a,b)=SalMap(a,b).*SalPre(a,b)
wherein .* denotes multiplication of the corresponding values.
3. The infrared and visible light image fusion method based on the gray level co-occurrence matrix according to claim 1, characterized in that: the NSCT decomposition rule in step S2 is as follows: the number of directions of each level of decomposition is 8, the number of decomposition scales is adaptive to the size of the image, and the formula is as follows:
l=[log2(min(Ny,Nx))-7]
where l is the number of decomposition scales, the size of the image is Ny×Nx, and [ ] denotes rounding up.
4. The infrared and visible light image fusion method based on the gray level co-occurrence matrix according to claim 1, characterized in that: the fusion rule of the low-frequency subband coefficients in step S3 is set as follows: the coefficients of the visible light image low-frequency sub-band and the infrared image low-frequency sub-band are each reduced by their own average value to obtain coefficient matrices w1 and w2; the fusion weight matrix of the infrared image is w = (w2 - w1) × 0.5 + 0.5, and the fusion weight matrix of the visible light image is 1 - w; finally, the weights are multiplied by the low-frequency sub-bands to obtain the fused low-frequency sub-band coefficients.
5. The infrared and visible light image fusion method based on the gray level co-occurrence matrix according to claim 1, characterized in that: the high-frequency subband coefficient fusion rule in step S3 is set as follows: for each layer of high-frequency sub-band coefficients, the coefficients of the visible light image high-frequency sub-band and the infrared image high-frequency sub-band are each reduced by their own average value to obtain preliminary fusion coefficients a1 and a2; Gaussian filtering is applied to the original high-frequency sub-band coefficients, with a filtering template of size 11 × 11 and a standard deviation of 5; the filtered high-frequency sub-band coefficients are b1 and b2; the fusion weight of the visible light image is s1 = b1 - a1 and, similarly, the fusion weight of the infrared image is s2 = b2 - a2; the fused high-frequency coefficients are selected according to these weights.
6. The infrared and visible light image fusion method based on the gray level co-occurrence matrix according to claim 1, characterized in that: in step S4, the target saliency map is mapped onto the fused low-frequency subband image by an addition method.
CN202010678896.4A 2020-07-04 2020-07-04 Infrared and visible light image fusion method based on gray level co-occurrence matrix Active CN111815550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010678896.4A CN111815550B (en) 2020-07-04 2020-07-04 Infrared and visible light image fusion method based on gray level co-occurrence matrix

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010678896.4A CN111815550B (en) 2020-07-04 2020-07-04 Infrared and visible light image fusion method based on gray level co-occurrence matrix

Publications (2)

Publication Number Publication Date
CN111815550A true CN111815550A (en) 2020-10-23
CN111815550B CN111815550B (en) 2023-09-15

Family

ID=72864784

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010678896.4A Active CN111815550B (en) 2020-07-04 2020-07-04 Infrared and visible light image fusion method based on gray level co-occurrence matrix

Country Status (1)

Country Link
CN (1) CN111815550B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112487947A (en) * 2020-11-26 2021-03-12 西北工业大学 Low-illumination image target detection method based on image fusion and target detection network
CN112614082A (en) * 2020-12-17 2021-04-06 北京工业大学 Offshore medium-long wave infrared image fusion method
CN114037747A (en) * 2021-11-25 2022-02-11 佛山技研智联科技有限公司 Image feature extraction method and device, computer equipment and storage medium
CN116698855A (en) * 2023-08-07 2023-09-05 东莞市美格精密制造有限公司 Production quality detection method for liquid injection pneumatic valve

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020931A (en) * 2012-11-27 2013-04-03 西安电子科技大学 Multisource image fusion method based on direction wavelet domain hidden Markov tree model
CN103455990A (en) * 2013-03-04 2013-12-18 深圳信息职业技术学院 Image fusion method with visual attention mechanism and PCNN combined
CN103914678A (en) * 2013-01-05 2014-07-09 中国科学院遥感与数字地球研究所 Abandoned land remote sensing recognition method based on texture and vegetation indexes
CN106327459A (en) * 2016-09-06 2017-01-11 四川大学 Visible light and infrared image fusion algorithm based on UDCT (Uniform Discrete Curvelet Transform) and PCNN (Pulse Coupled Neural Network)
CN106952245A (en) * 2017-03-07 2017-07-14 深圳职业技术学院 A kind of processing method and system for visible images of taking photo by plane
CN108961154A (en) * 2018-07-13 2018-12-07 福州大学 Based on the solar cell hot spot detection method for improving non-down sampling contourlet transform
CN109242888A (en) * 2018-09-03 2019-01-18 中国科学院光电技术研究所 Infrared and visible light image fusion method combining image significance and non-subsampled contourlet transformation
CN109360182A (en) * 2018-10-31 2019-02-19 广州供电局有限公司 Image interfusion method, device, computer equipment and storage medium
CN109376750A (en) * 2018-06-15 2019-02-22 武汉大学 A kind of Remote Image Classification merging medium-wave infrared and visible light
AU2020100178A4 (en) * 2020-02-04 2020-03-19 Huang, Shuying DR Multiple decision maps based infrared and visible image fusion
CN111179208A (en) * 2019-12-09 2020-05-19 天津大学 Infrared-visible light image fusion method based on saliency map and convolutional neural network

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020931A (en) * 2012-11-27 2013-04-03 西安电子科技大学 Multisource image fusion method based on direction wavelet domain hidden Markov tree model
CN103914678A (en) * 2013-01-05 2014-07-09 中国科学院遥感与数字地球研究所 Abandoned land remote sensing recognition method based on texture and vegetation indexes
CN103455990A (en) * 2013-03-04 2013-12-18 深圳信息职业技术学院 Image fusion method with visual attention mechanism and PCNN combined
CN106327459A (en) * 2016-09-06 2017-01-11 四川大学 Visible light and infrared image fusion algorithm based on UDCT (Uniform Discrete Curvelet Transform) and PCNN (Pulse Coupled Neural Network)
CN106952245A (en) * 2017-03-07 2017-07-14 深圳职业技术学院 A kind of processing method and system for visible images of taking photo by plane
CN109376750A (en) * 2018-06-15 2019-02-22 武汉大学 A kind of Remote Image Classification merging medium-wave infrared and visible light
CN108961154A (en) * 2018-07-13 2018-12-07 福州大学 Based on the solar cell hot spot detection method for improving non-down sampling contourlet transform
CN109242888A (en) * 2018-09-03 2019-01-18 中国科学院光电技术研究所 Infrared and visible light image fusion method combining image significance and non-subsampled contourlet transformation
CN109360182A (en) * 2018-10-31 2019-02-19 广州供电局有限公司 Image interfusion method, device, computer equipment and storage medium
CN111179208A (en) * 2019-12-09 2020-05-19 天津大学 Infrared-visible light image fusion method based on saliency map and convolutional neural network
AU2020100178A4 (en) * 2020-02-04 2020-03-19 Huang, Shuying DR Multiple decision maps based infrared and visible image fusion

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
DONGMING WANG等: "A method based on an improved immune genetic algorithm for the feature fusion of the infrared and visible images", 《JOURNAL OF COMPUTATIONAL METHODS IN SCIENCES AND ENGINEERING》, vol. 18, pages 591 - 603 *
ZHANG JIAN等: "Non-Subsampled Contourlets and Gray Level Co-occurrence Matrix based Images Segmentation", 《2011 INTERNATIONAL CONFERENCE ON UNCERTAINTY REASONING AND KNOWLEDGE ENGINEERING》, pages 168 - 170 *
张人上: "CT Image Feature Extraction Algorithm Based on NSCT-GLCM", 《计算机工程与应用》 (Computer Engineering and Applications), vol. 50, no. 11, pages 159 - 162 *
杨阳等: "Feature Fusion of Infrared and Visible Images Based on Principal Component Analysis", 《沈阳理工大学学报》 (Journal of Shenyang Ligong University), vol. 31, no. 4, pages 23 - 28 *
涂一枝等: "Infrared and Visible Image Fusion Algorithm Combining Contrast Enhancement and Wavelet Transform", 《淮阴师范学院学报(自然科学版)》 (Journal of Huaiyin Normal University, Natural Science Edition), vol. 17, no. 3, pages 230 - 234 *
荣传振等: "Research on Decomposition and Fusion Methods for Infrared and Visible Images", 《数据采集与处理》 (Journal of Data Acquisition and Processing), vol. 34, no. 1, pages 146 - 156 *
高印寒等: "Adaptive Image Fusion in the Non-Subsampled Shearlet Domain Based on Image Quality Assessment Parameters", 《吉林大学学报(工学版)》 (Journal of Jilin University, Engineering and Technology Edition), vol. 44, no. 1, pages 225 - 234 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112487947A (en) * 2020-11-26 2021-03-12 西北工业大学 Low-illumination image target detection method based on image fusion and target detection network
CN112614082A (en) * 2020-12-17 2021-04-06 北京工业大学 Offshore medium-long wave infrared image fusion method
CN114037747A (en) * 2021-11-25 2022-02-11 佛山技研智联科技有限公司 Image feature extraction method and device, computer equipment and storage medium
CN116698855A (en) * 2023-08-07 2023-09-05 东莞市美格精密制造有限公司 Production quality detection method for liquid injection pneumatic valve
CN116698855B (en) * 2023-08-07 2023-12-05 东莞市美格精密制造有限公司 Production quality detection method for liquid injection pneumatic valve

Also Published As

Publication number Publication date
CN111815550B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN111815550B (en) Infrared and visible light image fusion method based on gray level co-occurrence matrix
CN104809734B (en) A method of the infrared image based on guiding filtering and visual image fusion
CN111583123A (en) Wavelet transform-based image enhancement algorithm for fusing high-frequency and low-frequency information
CN112233026A (en) SAR image denoising method based on multi-scale residual attention network
CN102800074B (en) Synthetic aperture radar (SAR) image change detection difference chart generation method based on contourlet transform
CN113837974B (en) NSST domain power equipment infrared image enhancement method based on improved BEEPS filtering algorithm
CN103295204B (en) A kind of image self-adapting enhancement method based on non-down sampling contourlet transform
CN108932699B (en) Three-dimensional matching harmonic filtering image denoising method based on transform domain
CN114120176B (en) Behavior analysis method for fusing far infrared and visible light video images
CN104036455B (en) Infrared image detail enhancement method based on second-generation wavelet
CN104504664B (en) The automatic strengthening system of NSCT domains underwater picture based on human-eye visual characteristic and its method
CN111768350B (en) Infrared image enhancement method and system
CN113298147A (en) Image fusion method and device based on regional energy and intuitionistic fuzzy set
CN114612359A (en) Visible light and infrared image fusion method based on feature extraction
CN103400360A (en) Multi-source image fusing method based on Wedgelet and NSCT (Non Subsampled Contourlet Transform)
CN112215787B (en) Infrared and visible light image fusion method based on significance analysis and adaptive filter
CN113592729A (en) Infrared image enhancement method for electrical equipment based on NSCT domain
CN115457249A (en) Method and system for fusing and matching infrared image and visible light image
CN117726537A (en) SAR image denoising network method and system for self-adaptive multi-scale feature fusion AMFFD-Net
CN118229548A (en) Infrared and visible light image fusion method based on progressive multi-branching and improved UNet3+ deep supervision
Zhong et al. A fusion approach to infrared and visible images with Gabor filter and sigmoid function
Jian et al. Towards reliable object representation via sparse directional patches and spatial center cues
CN111539894A (en) Novel image enhancement method
CN110298807A (en) Based on the domain the NSCT infrared image enhancing method for improving Retinex and quantum flora algorithm
Tun et al. Joint Training of Noisy Image Patch and Impulse Response of Low-Pass Filter in CNN for Image Denoising

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant