CN114066786A - Infrared and visible light image fusion method based on sparsity and filter

Infrared and visible light image fusion method based on sparsity and filter

Info

Publication number
CN114066786A
Authority
CN
China
Prior art keywords
image
frequency layer
infrared
visible light
images
Prior art date
Legal status
Pending
Application number
CN202010768408.9A
Other languages
Chinese (zh)
Inventor
李启磊
周杨
杨晓敏
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan University
Priority to CN202010768408.9A
Publication of CN114066786A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10048: Infrared image
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20024: Filtering details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Abstract

The invention discloses a method for fusing an infrared image and a visible light image, comprising the following steps: (1) randomly extracting image blocks of a certain size from 3 pairs of infrared and visible light images to obtain a training image sample group, and training on the sample group to obtain a learning dictionary; (2) inputting the test infrared and visible light images, computing and storing their sparse coefficients, and combining the sparse coefficients with the learning dictionary to obtain an initial weight map; (3) decomposing the infrared and visible light images with an average filter to obtain a high-frequency layer and a low-frequency layer for each image; (4) optimizing the initial weight map with a guided filtering operation; (5) fusing the high-frequency layers and the low-frequency layers of the infrared and visible light image pair according to the optimized weight maps; (6) computing the final fused image from the fused high-frequency layer and low-frequency layer. The invention fuses infrared and visible light images effectively, and the fused result contains rich detail and gradient information.

Description

Infrared and visible light image fusion method based on sparsity and filter
Technical Field
The invention belongs to the technical field of computer vision and relates to a method for fusing an infrared image and a visible light image, in particular to the fusion of an infrared image and a visible light image of the same scene.
Background
Infrared and visible light images play an important role in transportation systems. Infrared images are obtained from the thermal radiation emitted by objects and are therefore little affected by weather and lighting conditions; however, background information in infrared images is often missing. In contrast, a visible light image contains more texture information but is very susceptible to the imaging environment and lighting conditions. Infrared and visible image fusion techniques fuse an infrared and visible image pair into one image. The fused image contains texture information from the visible image and thermal radiation details from the infrared image, thereby facilitating both human observation and computer analysis.

According to the domain in which processing takes place, current mainstream infrared and visible light image fusion methods can be divided into two types: spatial domain methods and transform domain methods. Spatial domain methods fuse the infrared and visible image pair directly through a fusion rule. A typical fusion method is to average the infrared image and the visible image; unfortunately, this approach tends to produce unsatisfactory fused images. To address this problem, some methods decompose the infrared and visible light image pair into a base layer and a detail layer using a guided filter and then fuse the layers, again with guided filtering. However, this approach does not provide an accurate activity level measurement, since the activity level is measured only from the image gradient. Other methods fuse multi-scale decomposed images by combining a Gaussian filter with a bilateral filter, but the decomposition process consumes a lot of time. Recently, image fusion methods based on deep learning have been proposed, i.e., using convolutional neural networks for image fusion; to improve performance, the Laplacian pyramid is adopted for multi-scale decomposition and the self-similarity of the image is used to optimize the network model. In a different direction, an infrared and visible image fusion model based on a generative adversarial network has been proposed. Although these spatial domain methods can achieve good fusion, they also have many negative effects: they can produce overly smooth transitions at the edges of the fused image, reduce contrast, and introduce spectral distortion.

For transform domain methods, multi-scale decomposition is a powerful and widely used tool, including gradient pyramids, Laplacian pyramids, the discrete wavelet transform, the dual-tree complex wavelet transform, and low-pass pyramids. In addition, several geometric analysis tools are widely applied to image fusion: the curvelet transform has been used for multi-modal image fusion, and the non-subsampled contourlet transform (NSCT) has been used to decompose the source images. Sparse representation is commonly used for machine vision tasks such as image super-resolution, face recognition and target detection. For image fusion, the general process is to compute the sparse coefficients of the source images, fuse the coefficients through a specific rule, and finally transform the fused coefficients back into the result image. However, this type of process still has disadvantages: since the coefficients are fused in the transform domain, small changes in the coefficients may result in large changes in the spatial domain, so these methods can suffer from severe undesirable artifacts.
To address these problems, an infrared and visible light image fusion method based on sparsity and filters is proposed.
Disclosure of Invention
The present invention is directed to solving the above problems by providing an efficient infrared and visible image fusion method.
The invention realizes the purpose through the following technical scheme:
1. The infrared and visible light image fusion method comprises the following steps:
(1) randomly extracting image blocks of a certain size from the 3 pairs of infrared and visible light images to obtain a training image sample group, and training on the sample group to obtain a learning dictionary D;
(2) measuring the activity levels $A_i$, $i=(1,2)$, of the infrared and visible light images with the trained learning dictionary $D$, thereby obtaining score maps $S_i$, $i=(1,2)$; each pixel is then classified according to the score maps to obtain the initial weight maps $W_i$, $i=(1,2)$;
(3) decomposing the infrared and visible light images with an average filter to obtain the low-frequency layers $LFL_i = g(I_i)$, $i=1,2$, and the high-frequency layers $HFL_i = I_i - LFL_i$, $i=1,2$;
(4) optimizing the initial weight maps $W_i$, $i=1,2$, with a guided filtering operation to obtain the optimized weight maps $W_i^H$ and $W_i^L$;
(5) fusing the low-frequency layers and the high-frequency layers of the infrared and visible light image pair according to the optimized weight maps, yielding $LFL_F$ and $HFL_F$;
(6) combining the fused low-frequency layer $LFL_F$ and high-frequency layer $HFL_F$ to obtain the final fused image.
The basic principle of the method is as follows:
the method comprises the steps of firstly, calculating a related sparse coefficient of an input test image through a trained learning dictionary. These coefficients are used to measure the activity level of the input image pair, and then an initial weight map is derived from the activity level. The infrared and visible images are decomposed using an averaging filter to obtain a low frequency layer and a high frequency layer of the image. And optimizing the initial weight map by using a guide filter to generate an optimized weight map. And respectively fusing a low-frequency layer and a high-frequency layer of the infrared and visible light image pairs according to the optimized weight graph. And finally, combining the fused low-frequency layer and the fused high-frequency layer to obtain a fused image.
Specifically, in step (1), three infrared and visible light image pairs are used to compute the overcomplete dictionary. A number of image patches of size $n \times n$ are sampled from the training images. The overcomplete dictionary $D$ is obtained by solving:

$$\min_{D,\{\alpha_i\}} \sum_i \| y_i - D\alpha_i \|_2^2 \quad \text{s.t.} \quad \|\alpha_i\|_0 \le t$$

where $y_i \in R^n$, $D \in R^{n \times k}$ is a dictionary of $k$ atoms, $\alpha_i$ is the sparse coefficient, and $t$ is the number of non-zero elements. The dictionary $D$ is then used to compute the sparse coefficients of the input test infrared and visible light image pair. A sliding window of size $n \times n$ samples patches pixel by pixel from the test images. Using the dictionary $D$, the coefficients of these patches are learned by:

$$\alpha_i = \arg\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad \| Y_i - D\alpha \|_2 \le \sigma$$

where $Y_i$ $(i = 1,2)$ is an input image patch (from the infrared or the visible light image), $\alpha_i$ is the computed sparse coefficient, and $\sigma$ is a predefined constant.
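As a rough illustration of this step, the following Python sketch learns an overcomplete dictionary from random training patches and prepares an OMP-based sparse coder. The patent does not name a specific solver, so the use of scikit-learn's MiniBatchDictionaryLearning (in place of, e.g., K-SVD), the sparsity target of 4 non-zero coefficients, and the zero-mean patch preprocessing are assumptions; the 8 × 8 patch size and 512 atoms follow the embodiment below.

```python
# A minimal sketch of step (1): learn an overcomplete dictionary from random
# patches of the training image pairs, then keep the fitted model as an
# OMP-based sparse coder for the test stage.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

PATCH = 8        # n x n patch size from the embodiment
N_ATOMS = 512    # k atoms, so D is 64 x 512 (components_ holds D transposed)

def sample_patches(images, per_image=10000, seed=0):
    """Randomly extract vectorised n*n patches from the training images."""
    rng = np.random.RandomState(seed)
    batches = [extract_patches_2d(img, (PATCH, PATCH),
                                  max_patches=per_image, random_state=rng)
               for img in images]
    X = np.concatenate(batches).reshape(-1, PATCH * PATCH).astype(float)
    return X - X.mean(axis=1, keepdims=True)  # zero-mean patches (assumption)

def learn_dictionary(train_images, sparsity=4):
    """Approximate min ||y - D a||^2 s.t. ||a||_0 <= t over sampled patches."""
    X = sample_patches(train_images)
    learner = MiniBatchDictionaryLearning(
        n_components=N_ATOMS,
        transform_algorithm='omp',
        transform_n_nonzero_coefs=sparsity)
    learner.fit(X)
    return learner   # learner.transform(...) yields sparse coefficients
```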
In step (2), the activity levels $A_i$, $i=(1,2)$, of the infrared and visible light images are measured using the trained learning dictionary $D$: the $\ell_0$ norm of the sparse coefficients computed above serves as the activity measure:

$$A_i^j = \| \alpha_i^j \|_0, \quad j = 1, \dots, n$$

where $\alpha_i^j$ denotes the coefficients of the $j$th patch of the $i$th source image, $n$ is the total number of patches, and $A_i$ is the computed activity map. Defining the element reshaping function as $f(\cdot)$, the score maps are obtained as:

$$S_i = f(A_i), \quad i = (1,2)$$

and the initial weight maps $W_i$, $i=(1,2)$, follow from the score maps:

$$W_1(x,y) = \begin{cases} 1, & S_1(x,y) \ge S_2(x,y) \\ 0, & \text{otherwise} \end{cases}$$

$$W_2(x,y) = 1 - W_1(x,y)$$

where $S_1$ and $S_2$ denote the score maps of the two source test images.
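A hedged sketch of the activity-level measurement and initial weight maps, reusing the coder above. Dense (stride-1) patch sampling and row-major reshaping for $f(\cdot)$ are assumptions consistent with the sliding-window description; note the resulting score map is $(H-n+1) \times (W-n+1)$, slightly smaller than the image.

```python
# Sketch of step (2): per-patch L0 norms of the sparse codes give the
# activity map A_i; reshaping (the patent's f(.)) gives the score map S_i;
# a pixel-wise choose-max comparison gives the binary initial weights.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d

def activity_map(coder, image, patch=8):
    """Score map S_i = f(A_i): per-patch ||alpha||_0 on a 2-D grid."""
    patches = extract_patches_2d(image, (patch, patch))
    patches = patches.reshape(-1, patch * patch).astype(float)
    alpha = coder.transform(patches - patches.mean(axis=1, keepdims=True))
    activity = np.count_nonzero(alpha, axis=1)   # L0 norm of each alpha_i^j
    h = image.shape[0] - patch + 1               # dense sliding window, so
    w = image.shape[1] - patch + 1               # the map is (H-n+1)x(W-n+1)
    return activity.reshape(h, w).astype(float)

def initial_weights(s1, s2):
    """Choose-max rule: W1 = 1 where S1 >= S2, else 0; W2 = 1 - W1."""
    w1 = (s1 >= s2).astype(float)
    return w1, 1.0 - w1
```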
In step (3), the infrared and visible light images are decomposed with an average filter. Defining the average filtering operator as $g(\cdot)$, the low-frequency layer and the high-frequency layer of each input image are obtained by:

$$LFL_i = g(I_i), \quad i = 1,2$$

$$HFL_i = I_i - LFL_i, \quad i = 1,2$$

where $g(\cdot)$ denotes the average filtering operation and $I_i$ denotes a source image, i.e. the infrared or the visible light image.
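The decomposition itself is a single box filter; a minimal sketch follows. The 31 × 31 window size is an assumption, since the patent does not fix the averaging window.

```python
# Sketch of step (3): g(.) as a box (average) filter, giving
# LFL_i = g(I_i) and HFL_i = I_i - LFL_i.
from scipy.ndimage import uniform_filter

def decompose(image, size=31):
    """Split an image into low- and high-frequency layers."""
    low = uniform_filter(image.astype(float), size=size)  # low-frequency layer
    return low, image.astype(float) - low                 # high-frequency layer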
In step (4), the initial weight maps $W_i$, $i = 1,2$, are optimized with a guided filtering operation. Defining the guided filtering operator as $GF(\cdot)$ with the source image $I_i$ as the guidance, the optimized weight maps can be expressed as:

$$W_i^H = GF_{r_1,\epsilon_1}(W_i, I_i), \quad i = 1,2$$

$$W_i^L = GF_{r_2,\epsilon_2}(W_i, I_i), \quad i = 1,2$$

where $r_1$, $\epsilon_1$ and $r_2$, $\epsilon_2$ are the parameters of the guided filter, and $W_i^H$ and $W_i^L$ are the optimized weight maps for the high-frequency layer and the low-frequency layer, respectively.
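Since $GF(\cdot)$ is the standard guided filter of He et al., it can be sketched directly from box filters, as below. Using the source image as the guidance and the per-layer parameter pairs $(r_1, \epsilon_1)$ and $(r_2, \epsilon_2)$ from the embodiment are assumptions consistent with the formulas above.

```python
# A self-contained sketch of GF(.) used in step (4) to optimise the weight
# maps: the standard local linear model q = a*I + b solved with box filters.
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius, eps):
    """Edge-preserving smoothing of src, guided by the image `guide`."""
    size = 2 * radius + 1
    box = lambda x: uniform_filter(x, size=size)
    mean_g, mean_s = box(guide), box(src)
    cov_gs = box(guide * src) - mean_g * mean_s
    var_g = box(guide * guide) - mean_g * mean_g
    a = cov_gs / (var_g + eps)          # local linear coefficients
    b = mean_s - a * mean_g
    return box(a) * guide + box(b)      # q = mean(a) * I + mean(b)

def refine_weights(w, source, r_high=15, eps_high=0.15, r_low=7, eps_low=1e-6):
    """Optimised weight maps W_i^H and W_i^L (embodiment parameters)."""
    g = source.astype(float)
    return guided_filter(g, w, r_high, eps_high), guided_filter(g, w, r_low, eps_low)
```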
In step (5), the low-frequency layers and high-frequency layers obtained by decomposing the input images are fused according to the optimized weight maps, yielding the fused low-frequency layer $LFL_F$ and high-frequency layer $HFL_F$ of the infrared and visible light image pair. This process can be expressed as:

$$LFL_F = \sum_{i=1}^{2} W_i^L \odot LFL_i$$

$$HFL_F = \sum_{i=1}^{2} W_i^H \odot HFL_i$$

where $\odot$ denotes element-wise multiplication, and $LFL_F$ and $HFL_F$ denote the fused low-frequency layer and high-frequency layer, respectively.
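A short sketch of the element-wise weighted fusion. The pixel-wise renormalisation is an assumption: the binary weights satisfy $W_1 + W_2 = 1$ exactly, but guided filtering can break that, so renormalising keeps the fused layers on the same intensity scale.

```python
# Sketch of step (5): sum_i W_i (element-wise *) layer_i, with a pixel-wise
# normalisation of the filtered weights (an assumed safeguard).
import numpy as np

def fuse_layers(layers, weights):
    """Element-wise weighted combination of two decomposed layers."""
    num = sum(w * l for w, l in zip(weights, layers))
    den = sum(weights)
    return num / np.maximum(den, 1e-12)
```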
In step (6), the fused low-frequency layer $LFL_F$ and high-frequency layer $HFL_F$ are combined according to the following formula to obtain the fused image $F$:

$$F = LFL_F + HFL_F$$
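Step (6) in code is a single addition; clipping back to the 8-bit display range is an assumed convenience, not part of the patent.

```python
# Step (6): the final image is the sum of the two fused layers, F = LFL_F + HFL_F.
import numpy as np

def reconstruct(lfl_fused, hfl_fused):
    """Combine the fused low- and high-frequency layers into the result F."""
    return np.clip(lfl_fused + hfl_fused, 0.0, 255.0)
```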
the invention has the beneficial effects that:
the traditional infrared and visible light image fusion method based on multi-scale decomposition needs to decompose a source image into more than two layers. This may result in significant computational costs. The present invention only requires the decomposition of the infrared and visible light image pair into a low frequency layer and a high frequency layer. Thus, the present invention has a lower computational complexity, since it does not require re-dimensioning. Second, the present invention utilizes sparse coefficients to measure activity levels. This takes full advantage of the image characteristics and avoids large negative effects on the measured activity level caused by inaccuracies, such as ringing, occlusion and edge blurring. In addition, the guide filters are respectively used for optimizing the weight mapping of the low-frequency layer and the high-frequency layer in the invention. By doing so, the representative information in the infrared and visible light images can be well preserved. Therefore, the effect of visually fusing images can be enhanced.
Drawings
FIG. 1-1 is the overall framework diagram in an embodiment of the invention;
FIG. 1-2 is the first infrared image of the training set samples in an embodiment of the invention;
FIG. 1-3 is the second infrared image of the training set samples in an embodiment of the invention;
FIG. 1-4 is the third infrared image of the training set samples in an embodiment of the invention;
FIG. 1-5 is the first visible light image of the training set samples in an embodiment of the invention;
FIG. 1-6 is the second visible light image of the training set samples in an embodiment of the invention;
FIG. 1-7 is the third visible light image of the training set samples in an embodiment of the invention;
FIG. 2-1 shows image blocks extracted from the training set samples in an embodiment of the invention;
FIG. 2-2 is the learning dictionary D obtained by training the image sample set in an embodiment of the invention;
FIG. 3-1 is the first infrared image of the test samples in an embodiment of the invention;
FIG. 3-2 is the second infrared image of the test samples in an embodiment of the invention;
FIG. 3-3 is the third infrared image of the test samples in an embodiment of the invention;
FIG. 3-4 is the fourth infrared image of the test samples in an embodiment of the invention;
FIG. 3-5 is the first visible light image of the test samples in an embodiment of the invention;
FIG. 3-6 is the second visible light image of the test samples in an embodiment of the invention;
FIG. 3-7 is the third visible light image of the test samples in an embodiment of the invention;
FIG. 3-8 is the fourth visible light image of the test samples in an embodiment of the invention;
FIG. 4-1 is the first fused image of the test samples in an embodiment of the invention;
FIG. 4-2 is the second fused image of the test samples in an embodiment of the invention;
FIG. 4-3 is the third fused image of the test samples in an embodiment of the invention;
FIG. 4-4 is the fourth fused image of the test samples in an embodiment of the invention.
Detailed Description
The invention will be further illustrated with reference to the following specific examples and the accompanying drawings:
Example:
To make the fusion method of the invention easier to understand and closer to real application, the whole process is described from dictionary learning on the original training samples through to the completion of image fusion. The process includes the core fusion method of the invention; the overall framework diagram is shown in FIG. 1-1:
(1) FIGS. 1-2, 1-3, and 1-4 are the infrared images of the original training samples, and FIGS. 1-5, 1-6, and 1-7 are the corresponding visible light images. Image blocks of a certain size are randomly extracted from the training sample images to obtain a sample set; in this example the image block size is set to 8 × 8, as shown in FIG. 2-1. The image sample set is then trained to obtain a learning dictionary D, with the dictionary size set to 64 × 512, see FIG. 2-2. The overcomplete dictionary $D$ is obtained by solving:

$$\min_{D,\{\alpha_i\}} \sum_i \| y_i - D\alpha_i \|_2^2 \quad \text{s.t.} \quad \|\alpha_i\|_0 \le t$$

where $y_i \in R^n$, $D \in R^{n \times k}$ is a dictionary of $k$ atoms, $\alpha_i$ is the sparse coefficient, and $t$ is the number of non-zero elements;
(2) The sparse coefficients of the input test images are computed with the dictionary trained in step (1). In the test-sample sparse coding stage, a sliding window of the same size as the image blocks used in the training stage is applied to 4 groups of test infrared and visible light image pairs (FIGS. 3-1 and 3-5, FIGS. 3-2 and 3-6, FIGS. 3-3 and 3-7, and FIGS. 3-4 and 3-8), sampling the input images pixel by pixel. The sparse coefficients are learned by:

$$\alpha_i = \arg\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad \| Y_i - D\alpha \|_2 \le \sigma$$

where $Y_i$ $(i = 1,2)$ is an input image patch (from the infrared or the visible light image), $\alpha_i$ is the computed sparse coefficient, and $\sigma$ is a predefined constant, set to 5 in the present invention. The activity levels $A_i$, $i=(1,2)$, of the infrared and visible light images are then measured with the trained learning dictionary $D$ via the $\ell_0$ norm of the sparse coefficients computed above:

$$A_i^j = \| \alpha_i^j \|_0, \quad j = 1, \dots, n$$

where $\alpha_i^j$ denotes the coefficients of the $j$th patch of the $i$th source image, $n$ is the total number of patches, and $A_i$ is the computed activity map. Defining the element reshaping function as $f(\cdot)$, the score maps are obtained as:

$$S_i = f(A_i), \quad i = (1,2)$$

and the initial weight maps $W_i$, $i=(1,2)$, as:

$$W_1(x,y) = \begin{cases} 1, & S_1(x,y) \ge S_2(x,y) \\ 0, & \text{otherwise} \end{cases}$$

$$W_2(x,y) = 1 - W_1(x,y)$$

where $S_1$ and $S_2$ denote the score maps of the two source test sample images.
(3) The 4 groups of test infrared and visible light image pairs are decomposed with an average filter. Defining the average filtering operator as $g(\cdot)$, the low-frequency layer and the high-frequency layer of each input image are obtained by:

$$LFL_i = g(I_i), \quad i = 1,2$$

$$HFL_i = I_i - LFL_i, \quad i = 1,2$$

where $g(\cdot)$ denotes the average filtering operation and $I_i$ denotes a source image, i.e. the infrared or the visible light image.
(4) The initial weight maps $W_i$, $i = 1,2$, obtained in step (2) are optimized with a guided filtering operation. Defining the guided filtering operator as $GF(\cdot)$, the optimized weight maps can be expressed as:

$$W_i^H = GF_{r_1,\epsilon_1}(W_i, I_i), \quad i = 1,2$$

$$W_i^L = GF_{r_2,\epsilon_2}(W_i, I_i), \quad i = 1,2$$

where $r_1$, $\epsilon_1$ and $r_2$, $\epsilon_2$ are the parameters of the guided filter. In the present invention, for optimizing $W_i^H$ the parameters $r_1$ and $\epsilon_1$ are set to 15 and 0.15, respectively, and for optimizing $W_i^L$ the parameters $r_2$ and $\epsilon_2$ are set to 7 and 1e-6, respectively. $W_i^H$ and $W_i^L$ are the optimized weight maps for the high-frequency layer and the low-frequency layer, respectively.
(5) According to the optimized weight maps obtained in step (4) and the low-frequency and high-frequency layers of the input test images obtained in step (3), the fused low-frequency layer $LFL_F$ and high-frequency layer $HFL_F$ of each infrared and visible light image pair are computed. This process can be expressed as:

$$LFL_F = \sum_{i=1}^{2} W_i^L \odot LFL_i$$

$$HFL_F = \sum_{i=1}^{2} W_i^H \odot HFL_i$$

where $\odot$ denotes element-wise multiplication, and $LFL_F$ and $HFL_F$ denote the fused low-frequency layer and high-frequency layer, respectively.
(6) Combining $LFL_F$ and $HFL_F$ obtained from step (5) according to the following formula gives the fused image $F$:

$$F = LFL_F + HFL_F$$
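Tying the sketches above together, a hypothetical end-to-end driver for one test pair might look as follows, running steps (2) to (6) with the embodiment's parameters (8 × 8 patches, 64 × 512 dictionary, guided-filter settings r1 = 15, eps1 = 0.15, r2 = 7, eps2 = 1e-6). The border handling in full_size() is an assumption, since the patent does not specify how the reshaped score map is extended to full image size.

```python
# End-to-end sketch reusing learn_dictionary, activity_map, initial_weights,
# decompose, refine_weights, fuse_layers and reconstruct from the sketches
# above. full_size() edge-pads the (H-n+1)x(W-n+1) score map (an assumption).
import numpy as np

def full_size(score, shape):
    """Edge-pad a score map back to the source image size."""
    pad_h, pad_w = shape[0] - score.shape[0], shape[1] - score.shape[1]
    return np.pad(score, ((0, pad_h), (0, pad_w)), mode='edge')

def fuse_pair(ir, vis, coder):
    """Run steps (2)-(6) on one infrared/visible test pair."""
    ir, vis = ir.astype(float), vis.astype(float)
    s1 = full_size(activity_map(coder, ir), ir.shape)    # step (2)
    s2 = full_size(activity_map(coder, vis), vis.shape)
    w1, w2 = initial_weights(s1, s2)
    low1, high1 = decompose(ir)                          # step (3)
    low2, high2 = decompose(vis)
    w1_h, w1_l = refine_weights(w1, ir)                  # step (4)
    w2_h, w2_l = refine_weights(w2, vis)
    low_f = fuse_layers([low1, low2], [w1_l, w2_l])      # step (5)
    high_f = fuse_layers([high1, high2], [w1_h, w2_h])
    return reconstruct(low_f, high_f)                    # step (6)

# Usage: coder = learn_dictionary([ir_a, ir_b, ir_c, vis_a, vis_b, vis_c])
#        fused = fuse_pair(test_ir, test_vis, coder)
```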
Comparing the fused images obtained in step (6), shown in FIG. 4-1, FIG. 4-2, FIG. 4-3 and FIG. 4-4, with the corresponding infrared and visible light images of the 4 test groups, the fused images contain more detail than either the infrared or the visible light images. Steps (2) to (5) constitute the main steps of the fusion method of the invention.
In this example, a set of metrics common in subjective visual and objective image-processing evaluation (mutual information MI, edge preservation $Q^G$, feature mutual information FMI, etc.) is used to compare the source images with the fused image, thereby verifying the reliability of the invention.
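Of the metrics named, mutual information is simple enough to sketch from joint histograms, as below; $Q^G$ and FMI are more involved and omitted here. The 256-bin histogram is an assumption.

```python
# A hedged sketch of the MI metric: mutual information between two images
# estimated from their joint grey-level histogram.
import numpy as np

def mutual_information(a, b, bins=256):
    """MI(A, B) in bits, from the joint histogram of two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)       # marginal of A
    py = p.sum(axis=0, keepdims=True)       # marginal of B
    nz = p > 0                              # avoid log(0)
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

# Fusion MI is commonly reported as MI(IR, F) + MI(VIS, F).
```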
The above embodiments are only preferred embodiments of the present invention, and are not intended to limit the technical solutions of the present invention, so long as the technical solutions can be realized on the basis of the above embodiments without creative efforts, which should be considered to fall within the protection scope of the patent of the present invention.

Claims (2)

1. A method for fusing an infrared image and a visible light image, characterized by comprising the following steps:
(1) measuring the activity levels $A_i$, $i=(1,2)$, of the infrared and visible light images with a trained learning dictionary $D$, thereby obtaining score maps $S_i$, $i=(1,2)$; each pixel is then classified according to the score maps to obtain the initial weight maps $W_i$, $i=(1,2)$;
(2) decomposing the infrared and visible light images with an average filter to obtain the low-frequency layers $LFL_i = g(I_i)$, $i=1,2$, and the high-frequency layers $HFL_i = I_i - LFL_i$, $i=1,2$;
(3) optimizing the initial weight maps $W_i$, $i=1,2$, with a guided filtering operation to obtain the optimized weight maps $W_i^H$ and $W_i^L$;
(4) fusing the low-frequency layers and the high-frequency layers of the infrared and visible light image pair according to the optimized weight maps, yielding $LFL_F$ and $HFL_F$.
2. The infrared image and visible light image fusion method according to claim 1, characterized in that:
in step (1), the activity levels $A_i$, $i=(1,2)$, of the infrared and visible light images are measured using the trained learning dictionary $D$; the $\ell_0$ norm of the sparse coefficients serves as the activity measure:

$$A_i^j = \| \alpha_i^j \|_0, \quad j = 1, \dots, n$$

where $\alpha_i^j$ denotes the coefficients of the $j$th patch of the $i$th source image, $n$ is the total number of patches, and $A_i$ is the computed activity map; defining the element reshaping function as $f(\cdot)$, the score maps are obtained as:

$$S_i = f(A_i), \quad i = (1,2)$$

and the initial weight maps $W_i$, $i=(1,2)$, as:

$$W_1(x,y) = \begin{cases} 1, & S_1(x,y) \ge S_2(x,y) \\ 0, & \text{otherwise} \end{cases}$$

$$W_2(x,y) = 1 - W_1(x,y)$$

where $S_1$ and $S_2$ denote the score maps of the two source test sample images;
in step (2), the infrared and visible light images are decomposed with an average filter; defining the average filtering operator as $g(\cdot)$, the low-frequency layer and the high-frequency layer of each input image are obtained by:

$$LFL_i = g(I_i), \quad i = 1,2$$

$$HFL_i = I_i - LFL_i, \quad i = 1,2$$

where $g(\cdot)$ denotes the average filtering operation and $I_i$ denotes a source image, i.e. the infrared or the visible light image;
in step (3), the initial weight maps $W_i$, $i = 1,2$, are optimized with a guided filtering operation; defining the guided filtering operator as $GF(\cdot)$, the optimized weight maps can be expressed as:

$$W_i^H = GF_{r_1,\epsilon_1}(W_i, I_i), \quad i = 1,2$$

$$W_i^L = GF_{r_2,\epsilon_2}(W_i, I_i), \quad i = 1,2$$

where $r_1$, $\epsilon_1$ and $r_2$, $\epsilon_2$ are the parameters of the guided filter, and $W_i^H$ and $W_i^L$ are the optimized weight maps for the high-frequency layer and the low-frequency layer, respectively;
in step (4), the low-frequency layers and high-frequency layers of the infrared and visible light image pair are fused according to the optimized weight maps:

$$LFL_F = \sum_{i=1}^{2} W_i^L \odot LFL_i$$

$$HFL_F = \sum_{i=1}^{2} W_i^H \odot HFL_i$$

where $\odot$ denotes element-wise multiplication, and $LFL_F$ and $HFL_F$ denote the fused low-frequency layer and high-frequency layer, respectively.
CN202010768408.9A 2020-08-03 2020-08-03 Infrared and visible light image fusion method based on sparsity and filter Pending CN114066786A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010768408.9A CN114066786A (en) 2020-08-03 2020-08-03 Infrared and visible light image fusion method based on sparsity and filter

Publications (1)

Publication Number Publication Date
CN114066786A 2022-02-18

Family

ID=80231647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010768408.9A Pending CN114066786A (en) 2020-08-03 2020-08-03 Infrared and visible light image fusion method based on sparsity and filter

Country Status (1)

Country Link
CN (1) CN114066786A (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104021537A (en) * 2014-06-23 2014-09-03 西北工业大学 Infrared and visible image fusion method based on sparse representation
CN104809734A (en) * 2015-05-11 2015-07-29 中国人民解放军总装备部军械技术研究所 Infrared image and visible image fusion method based on guide filtering
CN106204509A (en) * 2016-07-07 2016-12-07 西安电子科技大学 Based on region characteristic infrared and visible light image fusion method
CN106981058A (en) * 2017-03-29 2017-07-25 武汉大学 A kind of optics based on sparse dictionary and infrared image fusion method and system
CN107341786A (en) * 2017-06-20 2017-11-10 西北工业大学 The infrared and visible light image fusion method that wavelet transformation represents with joint sparse
CN108052988A (en) * 2018-01-04 2018-05-18 常州工学院 Guiding conspicuousness image interfusion method based on wavelet transformation
CN108389158A (en) * 2018-02-12 2018-08-10 河北大学 A kind of infrared and visible light image interfusion method
CN108549874A (en) * 2018-04-19 2018-09-18 广州广电运通金融电子股份有限公司 A kind of object detection method, equipment and computer readable storage medium
CN109064437A (en) * 2018-07-11 2018-12-21 中国人民解放军国防科技大学 Image fusion method based on guided filtering and online dictionary learning
CN109035189A (en) * 2018-07-17 2018-12-18 桂林电子科技大学 Infrared and weakly visible light image fusion method based on Cauchy's ambiguity function
CN109447909A (en) * 2018-09-30 2019-03-08 安徽四创电子股份有限公司 The infrared and visible light image fusion method and system of view-based access control model conspicuousness
CN109919884A (en) * 2019-01-30 2019-06-21 西北工业大学 Infrared and visible light image fusion method based on gaussian filtering weighting
CN110111290A (en) * 2019-05-07 2019-08-09 电子科技大学 A kind of infrared and visible light image fusion method based on NSCT and structure tensor
CN110175970A (en) * 2019-05-20 2019-08-27 桂林电子科技大学 Based on the infrared and visible light image fusion method for improving FPDE and PCA
CN111462028A (en) * 2020-03-16 2020-07-28 中国地质大学(武汉) Infrared and visible light image fusion method based on phase consistency and target enhancement

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
JINGYUE CHEN et al.: "A Novel Infrared Image Enhancement Based on Correlation Measurement of Visible Image for Urban Traffic Surveillance Systems", Journal of Intelligent Transportation Systems *
QILEI LI et al.: "Multi-Focus Image Fusion Method for Vision Sensor Systems via Dictionary Learning with Guided Filter", Sensors *
YAN HUANG et al.: "Infrared and Visible Image Fusion Based on Different Constraints in the Non-Subsampled Shearlet Transform Domain", Sensors *
李毅: "Research on Image Fusion Algorithms Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology *
王小焙: "Research on Image Fusion Algorithms Based on Guided Filtering and Total Variation", China Master's Theses Full-text Database, Information Science and Technology *
王建: "Research on Image Fusion Based on Improved Sparse Representation and Neural Networks", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114708178A (en) * 2022-03-30 2022-07-05 北京理工大学 Remote sensing image fusion method based on guided filtering and sparse representation
CN117218048A (en) * 2023-11-07 2023-12-12 天津市测绘院有限公司 Infrared and visible light image fusion method based on three-layer sparse smooth model
CN117218048B (en) * 2023-11-07 2024-03-08 天津市测绘院有限公司 Infrared and visible light image fusion method based on three-layer sparse smooth model

Similar Documents

Publication Publication Date Title
Li et al. Image dehazing using residual-based deep CNN
Zhang et al. A survey of restoration and enhancement for underwater images
Xu et al. Review of video and image defogging algorithms and related studies on image restoration and enhancement
Sahu et al. Different image fusion techniques–a critical review
CN104091314B (en) Turbulence-degraded image blind restoration method based on edge prediction and sparse ratio regular constraints
CN113837974B (en) NSST domain power equipment infrared image enhancement method based on improved BEEPS filtering algorithm
Li et al. Multifocus Image Fusion Using Wavelet‐Domain‐Based Deep CNN
CN113673590A (en) Rain removing method, system and medium based on multi-scale hourglass dense connection network
Gao et al. Improving the performance of infrared and visible image fusion based on latent low-rank representation nested with rolling guided image filtering
Ding et al. U 2 D 2 Net: Unsupervised unified image dehazing and denoising network for single hazy image enhancement
Luo et al. Infrared and visible image fusion based on visibility enhancement and hybrid multiscale decomposition
Sahu et al. Trends and prospects of techniques for haze removal from degraded images: A survey
CN114066786A (en) Infrared and visible light image fusion method based on sparsity and filter
Chen et al. The enhancement of catenary image with low visibility based on multi-feature fusion network in railway industry
CN114170103A (en) Electrical equipment infrared image enhancement method
Chang Single underwater image restoration based on adaptive transmission fusion
Fu et al. An anisotropic Gaussian filtering model for image de-hazing
Lei et al. Low-light image enhancement using the cell vibration model
Kaur et al. Automated Multimodal image fusion for brain tumor detection
CN111461999A (en) SAR image speckle suppression method based on super-pixel similarity measurement
CN110880192A (en) Image DCT coefficient distribution fitting method based on probability density function dictionary
CN113781375B (en) Vehicle-mounted vision enhancement method based on multi-exposure fusion
Guan et al. DiffWater: Underwater Image Enhancement Based on Conditional Denoising Diffusion Probabilistic Model
Huang et al. Attention-based for multiscale fusion underwater image enhancement
CN116309221A (en) Method for constructing multispectral image fusion model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20220218