CN111080566A - Visible light and infrared image fusion method based on structural group double-sparse learning - Google Patents

Visible light and infrared image fusion method based on structural group double-sparse learning

Info

Publication number
CN111080566A
CN111080566A (Application CN201911270444.6A)
Authority
CN
China
Prior art keywords
image
sparse
dictionary
visible light
learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911270444.6A
Other languages
Chinese (zh)
Inventor
王志社
姜晓林
王君尧
武圆圆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Science and Technology
Original Assignee
Taiyuan University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Science and Technology filed Critical Taiyuan University of Science and Technology
Priority to CN201911270444.6A priority Critical patent/CN111080566A/en
Publication of CN111080566A publication Critical patent/CN111080566A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10048: Infrared image
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a visible light and infrared image fusion method based on structural group double-sparse learning. The method comprises the following steps: (1) performing sliding-window processing on the input visible light and infrared images, searching for the blocks similar to each original image block, performing group vectorization, and building the image similar structure group matrices; (2) taking the image similar structure group matrices as training samples, forming a base dictionary from the Kronecker product of shear wavelets, obtaining a sparse dictionary through online learning, and linearly reconstructing the base dictionary and the sparse dictionary to obtain the final double sparse dictionary; (3) combining the double sparse dictionary, performing group sparse solution on the image similar structure groups with SOMP (simultaneous orthogonal matching pursuit) to obtain group sparse coefficients, and obtaining the final fused image through the maximum-absolute-value fusion rule and image reconstruction. The method addresses the low fusion quality caused by existing sparse fusion algorithms ignoring the correlation among image blocks and by poor dictionary adaptability, and can be applied to fields such as remote sensing detection, medical diagnosis, intelligent driving, and security monitoring.

Description

Visible light and infrared image fusion method based on structural group double-sparse learning
Technical Field
The invention relates to an image fusion method in the field of image processing, in particular to a visible light and infrared image fusion method based on structural group double sparse learning.
Background
Visible light and infrared imaging technologies have important applications in remote sensing, medical diagnosis, intelligent driving, security monitoring, and similar areas. A visible light sensor describes scene information through reflected-light imaging and has high spatial resolution, but it is easily affected by illumination conditions and weather changes; an infrared sensor reflects the radiation characteristics of the target and background through thermal imaging, but it lacks the structural characteristics and texture information of the target. The two imaging modalities exploit different physical characteristics of the target and are strongly complementary; only by fusing the two types of images can their respective imaging advantages be combined, information loss be reduced, and target recognition and human observation be supported so as to meet practical demands. Image fusion is therefore an important prerequisite for improving the detection and recognition performance of visible light and infrared imaging.
The key to fusing visible light and infrared images is to integrate the salient features of the two images into a single image so as to exploit their combined advantages. Multi-scale transform fusion methods approximate the salient feature information of an image with a fixed mathematical model, but because the types of salient image features are complicated and variable, a multi-scale transform cannot extract all of them. To improve on multi-scale transform fusion, sparse representation methods construct a redundant dictionary through online learning, represent the image signals sparsely over that dictionary, and describe the salient features of the images with the representation coefficients and the corresponding dictionary atoms. The traditional sparse representation fusion model currently has two problems: first, during dictionary learning and sparse coding each image block is treated independently and the correlation among blocks is ignored, so the sparse coding coefficients are inaccurate; second, the respective advantages of analytic dictionaries and learning dictionaries are not combined, so the adaptability of the dictionary is weak.
Research shows that non-local similarity is an important characteristic of images: many similar structures (such as detail and texture information) occur at different positions in an image, and exploiting the information contained in these similar structures improves image processing results; it has been used in image denoising, compressed sensing, super-resolution, and other fields. In fact, visible light images have rich repetitive structures and a large amount of redundant information; image blocks reflect the local geometric structure and appear repeatedly at different positions of the image, exhibiting non-local structural similarity. Infrared images consist mostly of background regions with slowly varying gray levels, and small background image blocks are strongly correlated, again showing obvious non-local structural similarity. Therefore, by exploiting the non-local similarity of the images, similar structure groups of the visible light and infrared images can be established, the correlation among image blocks can be modeled, and the accuracy of sparse coding can be improved.
Sparse representation dictionaries currently fall into two categories: analytic dictionaries and learning dictionaries. An analytic dictionary imposes a formulaic mathematical model on the data, so it is highly structured and allows fast numerical computation, but its adaptability is poor; a learning dictionary is learned from training samples and is more adaptive, but its learning model is complex. Research shows that organically combining an analytic dictionary with a learned dictionary, so that the advantages of both are retained, improves the adaptability of the dictionary while reducing the complexity of the model, and is an urgent need for the development of sparse representation fusion.
In summary, there is an urgent need for an image fusion method that can effectively establish the correlation of image blocks, improve the accuracy of sparse coding, enhance the applicability of redundant dictionaries, reduce the complexity of dictionary learning models, and further effectively improve the fusion effect of visible light and infrared images.
Disclosure of Invention
The invention provides a visible light and infrared image fusion method based on structural group double-sparse learning, aiming to solve the problem that existing sparse representation fusion algorithms ignore the correlation among blocks and have poor dictionary learning adaptability, which results in poor fusion quality of visible light and infrared images.
The invention is realized by adopting the following technical scheme: a visible light and infrared image fusion method based on structural group double sparse learning comprises the following steps:
s1: performing sliding window processing on the input visible light and infrared images, searching for similar image blocks of the original image blocks, performing group vectorization on the original image blocks and the similar image blocks, and establishing an image similar structure group matrix;
s2: constructing a double-sparse learning model, forming a base dictionary by using a Kronecker product of a shear wavelet, obtaining a sparse dictionary through online learning, and performing linear reconstruction on the base dictionary and the sparse dictionary to obtain a final double-sparse dictionary;
s3: combining the double sparse dictionary, performing group sparse solution on the image similar structure groups with the SOMP algorithm to obtain group sparse coefficients, and obtaining the final fused image through the maximum-absolute-value fusion rule and image reconstruction.
The similar structure group matrices of the visible light and infrared images are constructed as follows: the input image is divided into blocks with a sliding window; using the Euclidean distance as the criterion, the blocks similar to each original image block are found; each original image block and its similar blocks form a similar structure group; the image blocks in each similar structure group are arranged as column vectors and connected end to end, giving the similar structure group matrices of the visible light and infrared images respectively.
The double sparse dictionary oriented to the similar structure group matrices is constructed as follows: the similar structure group matrices of the visible light and infrared images are taken as training samples; the base dictionary is obtained from the Kronecker product of shear wavelets; the learning dictionary is obtained with a sparse learning model using a sequential-update iterative learning method; finally, the base dictionary and the learning dictionary are linearly reconstructed to obtain the final double sparse dictionary.
Compared with the existing sparse representation fusion technology, the method has the following advantages:
1. The method uses the non-local similarity of the images to construct the image similar structure group matrices and establish the relationship among image blocks, which effectively strengthens the ability of dictionary atoms to capture the salient features of the images and improves the accuracy of dictionary learning and sparse coding.
2. The method forms a base dictionary from the Kronecker product of shear wavelets, obtains a sparse dictionary through online learning, and linearly reconstructs the base dictionary and the sparse dictionary into a double sparse dictionary. The double sparse dictionary combines the respective advantages of the analytic dictionary and the learning dictionary, reduces the complexity of the dictionary learning model, and enhances the applicability of the redundant dictionary.
3. The established image fusion method based on double sparse learning of image structure group matrices achieves a clear fusion effect; it can also be applied to the fusion of multi-modal, multi-focus, and medical images, and has high application value in the field of image fusion.
Drawings
FIG. 1 is a schematic diagram of the structure of the method of the present invention.
Fig. 2 shows a first set of visible light and infrared image fusion experiments, which sequentially include a visible light image, an infrared image and a fusion image from left to right.
Fig. 3 shows a second set of visible light and infrared image fusion experiments, which sequentially include a visible light image, an infrared image and a fusion image from left to right.
Fig. 4 shows the third set of visible light and infrared image fusion experiments, which sequentially includes a visible light image, an infrared image and a fusion image from left to right.
Fig. 5 shows the fourth set of visible light and infrared image fusion experiments, which sequentially includes a visible light image, an infrared image and a fusion image from left to right.
Detailed Description
A visible light and infrared image fusion method based on structural group double sparse learning comprises the following steps:
s1: performing sliding window processing on the input visible light and infrared images, searching for similar image blocks of the original image blocks, performing group vectorization on the original image blocks and the similar image blocks, and establishing an image similar structure group matrix;
s11: A sliding window of size n multiplied by n with a sliding step of 1 pixel is applied, dividing the visible light image V and the infrared image I, both of size M multiplied by N, into (M-n+1)·(N-n+1) image blocks.
S12: for each original image block piIn L × L neighborhood, the Euclidean distance is used as the measurement criterion to calculate the sum piS most similar image blocks, and the original image block piForming a similarity group g with s similar image blocksiThere are s +1 image blocks in each similarity group. For visible and infrared images, (M-N +1) · (N-N +1) groups of similar structures, respectively, are obtained, respectively
Figure BDA0002314009020000031
And
Figure BDA0002314009020000032
s13: similar structure group for visible light and infrared image
Figure BDA0002314009020000033
And
Figure BDA0002314009020000034
firstly, arranging image blocks according to the sequence of column vectors to obtain image block vectors
Figure BDA0002314009020000035
Then the s +1 image block vectors in the similar structure group are connected end to obtain a similar structure group vector with higher dimensionality
Figure BDA0002314009020000041
S14: combining each similar structure group vector as a matrix column to respectively obtain a similar structure group matrix of the visible light and the infrared image
Figure BDA0002314009020000042
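As an illustration of steps S11 to S14, the following Python/NumPy sketch builds the similar structure group matrix of one source image. It is a minimal example under stated assumptions: the function name, the default values of n, s, L, the sliding step of the reference blocks, and the brute-force neighbourhood search are choices of this sketch, not requirements of the patent (which uses a sliding step of 1 pixel).

```python
import numpy as np

def build_structure_group_matrix(img, n=8, s=5, L=21, step=4):
    """Build the similar structure group matrix of one image (S11-S14 sketch).

    img  : 2-D array (grayscale image)
    n    : block size (n x n sliding window)
    s    : number of similar blocks collected for each original block
    L    : side length of the square search neighbourhood
    step : sliding step of the reference blocks (the patent uses 1 pixel;
           a larger step is used here only to keep the example fast)
    """
    M, N = img.shape
    half = L // 2
    group_vectors = []
    for i in range(0, M - n + 1, step):
        for j in range(0, N - n + 1, step):
            ref = img[i:i + n, j:j + n]
            # candidate block positions inside the L x L neighbourhood
            cands = []
            for di in range(max(0, i - half), min(M - n, i + half) + 1):
                for dj in range(max(0, j - half), min(N - n, j + half) + 1):
                    if (di, dj) == (i, j):
                        continue
                    blk = img[di:di + n, dj:dj + n]
                    # Euclidean distance to the reference block
                    cands.append((np.linalg.norm(ref - blk), blk))
            cands.sort(key=lambda t: t[0])
            # similar structure group: reference block + s most similar blocks,
            # each block vectorised column-wise, then concatenated end to end
            blocks = [ref] + [blk for _, blk in cands[:s]]
            group_vectors.append(np.concatenate(
                [b.reshape(-1, order='F') for b in blocks]))
    # each group vector becomes one column of the structure group matrix
    return np.stack(group_vectors, axis=1)

# usage sketch: G_V = build_structure_group_matrix(visible_img)
#               G_I = build_structure_group_matrix(infrared_img)
```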
S2: constructing a double-sparse learning model, forming a base dictionary by using a Kronecker product of a shear wavelet, obtaining a sparse dictionary through online learning, and performing linear reconstruction on the base dictionary and the sparse dictionary to obtain a final double-sparse dictionary;
s21: For two-dimensional separable shear wavelets, the Kronecker product of the shear wavelets is used to form the base dictionary Φ. Let A ∈ R^{w×m} be the sparse learning dictionary and X ∈ R^{m×(M-n+1)·(N-n+1)} the sparse coefficient matrix, and take the similar structure group matrices G^V and G^I of the visible light and infrared images as the training samples Y. The double sparse online learning model can then be expressed as
min_{A,X} ||Y - ΦAX||_F^2  subject to  ||x_i||_0 ≤ p and ||a_j||_0 ≤ k,
where x_i is any column of the sparse coefficient matrix X, a_j is any column of the sparse dictionary A, ||·||_0 is the L0 norm, which counts the number of nonzero elements of a vector (i.e. its sparsity), and p and k control the sparsity of the sparse coefficients X and of the sparse dictionary A, respectively.
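To make step S21 concrete, the sketch below forms a base dictionary as a Kronecker product of small one-dimensional bases and restates the double sparse model as comments. An orthonormal DCT matrix stands in for the shear wavelet (shearlet) basis that the patent specifies, and the way the group-vector length (s+1)·n·n is split across the Kronecker factors is an assumption of this example.

```python
import numpy as np

def dct_matrix(p):
    """Orthonormal p x p DCT-II matrix, used here only as a stand-in for the
    1-D shear wavelet (shearlet) basis specified in the patent."""
    k = np.arange(p)[:, None]
    i = np.arange(p)[None, :]
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * p))
    C[0, :] *= np.sqrt(1.0 / p)
    C[1:, :] *= np.sqrt(2.0 / p)
    return C

# Base dictionary Phi as a Kronecker product (S21). How the group-vector
# length (s + 1) * n * n is factored between the Kronecker terms is an
# assumption of this sketch, not something the patent text fixes.
n, s = 8, 5
B_block = np.kron(dct_matrix(n), dct_matrix(n))  # basis for one vectorised n x n block
B_group = dct_matrix(s + 1)                      # basis across the s + 1 blocks of a group
Phi = np.kron(B_group, B_block)                  # acts on group vectors of length (s+1)*n*n

# Double sparse model, with Y the concatenation of G_V and G_I as training samples:
#   minimise   || Y - Phi @ A @ X ||_F^2
#   subject to ||x_i||_0 <= p   for every column x_i of X
#              ||a_j||_0 <= k   for every column a_j of A
```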
S22: by using a pair dictionary atom ajThe learning method of sequential update obtains the sparse learning dictionary a through iteration, and the learning process can be expressed as:
Figure BDA0002314009020000046
wherein E isjIs composed of
Figure BDA0002314009020000047
The error of (2).
S23: and multiplying the base dictionary phi and the sparse learning dictionary A, and performing linear reconstruction to obtain a final double-sparse dictionary D.
S3: combining double sparse dictionaries, carrying out sparse solution on the image similar structure group matrix by using SOMP to obtain a group sparse coefficient, and obtaining a final fusion image by using a large fusion rule and image reconstruction;
s31: Using the SOMP algorithm in combination with the double sparse dictionary D, the similar structure group matrices G^V and G^I of the visible light and infrared images are sparsely decomposed, yielding for each similar structure group the group sparse coefficient vectors α^V and α^I of the visible light and infrared images, respectively.
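The following is a small SOMP (simultaneous orthogonal matching pursuit) sketch for step S31. Pairing the i-th visible and infrared group vectors as columns of one matrix so that they are coded over a shared support is an assumption of this example; the patent only states that SOMP is applied to the similar structure group matrices together with the double sparse dictionary D.

```python
import numpy as np

def somp(D, Y, k):
    """Simultaneous OMP: find a common support of at most k atoms of D that
    jointly represents every column of Y, and the corresponding coefficients.

    D : (d, m) dictionary with (approximately) unit-norm columns
    Y : (d, q) signals coded over a shared support
    k : maximum number of selected atoms
    Returns the (m, q) coefficient matrix.
    """
    R = Y.astype(float).copy()
    support = []
    X = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(k):
        # atom most correlated with the residuals of all signals simultaneously
        j = int(np.argmax(np.sum(np.abs(D.T @ R), axis=1)))
        if j in support:
            break
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        R = Y - D[:, support] @ coeffs
    if support:
        X[support, :] = coeffs
    return X

# usage sketch for S31 (column pairing of the i-th group vectors is an assumption):
# alpha = somp(D, np.column_stack([G_V[:, i], G_I[:, i]]), k=10)
# alpha_V, alpha_I = alpha[:, 0], alpha[:, 1]
```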
s32: The group sparse coefficient vectors are combined with the maximum-absolute-value fusion rule, which can be expressed as
α^F(t) = α^V(t) if |α^V(t)| ≥ |α^I(t)|, and α^F(t) = α^I(t) otherwise, for t = 1, 2, …, m,
where α^F(t) denotes the t-th element of the fused coefficient vector α^F. The final fused similar structure group vector is then obtained as v^F = D α^F.
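Step S32 reduces to an element-wise choice between the two coefficient vectors; a minimal sketch:

```python
import numpy as np

def fuse_max_abs(alpha_V, alpha_I):
    """Element-wise maximum-absolute-value fusion rule of S32."""
    mask = np.abs(alpha_V) >= np.abs(alpha_I)
    return np.where(mask, alpha_V, alpha_I)

# fused similar structure group vector (S32): v_F = D @ fuse_max_abs(alpha_V, alpha_I)
```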
S33: for each obtained fusion similarity structure group vector
Figure BDA0002314009020000051
It is equally divided into s +1 subvectors. And reconstructing each sub vector into an image block with the size of n multiplied by n, placing the image block at a position corresponding to the reconstructed fusion image, and averaging the value at each position of the reconstructed fusion image according to the pixel superposition times to obtain the final fusion image.

Claims (3)

1. A visible light and infrared image fusion method based on structural group double sparse learning is characterized by comprising the following steps:
s1: performing sliding window processing on the input visible light and infrared images, searching for similar image blocks of the original image blocks, performing group vectorization on the original image blocks and the similar image blocks, and establishing an image similar structure group matrix;
s2: constructing a double-sparse learning model, forming a base dictionary by using a Kronecker product of a shear wavelet, obtaining a sparse dictionary through online learning, and performing linear reconstruction on the base dictionary and the sparse dictionary to obtain a final double-sparse dictionary;
s3: combining the double sparse dictionary, performing group sparse solution on the image similar structure groups with the SOMP algorithm to obtain group sparse coefficients, and obtaining the final fused image through the maximum-absolute-value fusion rule and image reconstruction.
2. The visible light and infrared image fusion method based on structural group double sparse learning according to claim 1, characterized in that the similar structure group matrices of the visible light and infrared images are constructed as follows: the input image is divided into blocks with a sliding window; using the Euclidean distance as the criterion, the blocks similar to each original image block are found; each original image block and its similar blocks form a similar structure group; the image blocks in each similar structure group are arranged as column vectors and connected end to end, giving the similar structure group matrices of the visible light and infrared images.
3. The visible light and infrared image fusion method based on structural group double sparse learning according to claim 1 or 2, characterized in that the double sparse dictionary oriented to the similar structure group matrices is constructed as follows: the similar structure group matrices of the visible light and infrared images are taken as training samples; the base dictionary is obtained from the Kronecker product of shear wavelets; the learning dictionary is obtained with a sparse learning model using a sequential-update iterative learning method; finally, the base dictionary and the learning dictionary are linearly reconstructed to obtain the final double sparse dictionary.
CN201911270444.6A 2019-12-12 2019-12-12 Visible light and infrared image fusion method based on structural group double-sparse learning Pending CN111080566A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911270444.6A CN111080566A (en) 2019-12-12 2019-12-12 Visible light and infrared image fusion method based on structural group double-sparse learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911270444.6A CN111080566A (en) 2019-12-12 2019-12-12 Visible light and infrared image fusion method based on structural group double-sparse learning

Publications (1)

Publication Number Publication Date
CN111080566A true CN111080566A (en) 2020-04-28

Family

ID=70313890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911270444.6A Pending CN111080566A (en) 2019-12-12 2019-12-12 Visible light and infrared image fusion method based on structural group double-sparse learning

Country Status (1)

Country Link
CN (1) CN111080566A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652832A (en) * 2020-07-09 2020-09-11 南昌航空大学 Infrared and visible light image fusion method based on sliding window technology
CN111815732A (en) * 2020-07-24 2020-10-23 西北工业大学 Method for coloring intermediate infrared image
CN117218048A (en) * 2023-11-07 2023-12-12 天津市测绘院有限公司 Infrared and visible light image fusion method based on three-layer sparse smooth model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200451A (en) * 2014-08-28 2014-12-10 西北工业大学 Image fusion method based on non-local sparse K-SVD algorithm
CN105761234A (en) * 2016-01-28 2016-07-13 华南农业大学 Structure sparse representation-based remote sensing image fusion method
CN106251320A (en) * 2016-08-15 2016-12-21 西北大学 Remote sensing image fusion method based on joint sparse Yu structure dictionary
CN110097501A (en) * 2019-04-12 2019-08-06 武汉大学 A kind of NDVI image interfusion method based on the sparse regularization of non-local mean gradient

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200451A (en) * 2014-08-28 2014-12-10 西北工业大学 Image fusion method based on non-local sparse K-SVD algorithm
CN105761234A (en) * 2016-01-28 2016-07-13 华南农业大学 Structure sparse representation-based remote sensing image fusion method
CN106251320A (en) * 2016-08-15 2016-12-21 西北大学 Remote sensing image fusion method based on joint sparse Yu structure dictionary
CN110097501A (en) * 2019-04-12 2019-08-06 武汉大学 A kind of NDVI image interfusion method based on the sparse regularization of non-local mean gradient

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
RUBINSTEIN R et al.: "Double Sparsity: Learning Sparse Dictionaries for Sparse Signal Approximation", IEEE TRANSACTIONS ON SIGNAL PROCESSING *
ZHANG Xiao et al.: "Remote sensing image fusion based on structure group sparse representation", Journal of Image and Graphics (《中国图象图形学报》) *
GAO Shan et al.: "Medical image fusion algorithm based on double sparse dictionaries and its application in the diagnosis of cerebrovascular diseases", Beijing Biomedical Engineering (《北京生物医学工程》) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652832A (en) * 2020-07-09 2020-09-11 南昌航空大学 Infrared and visible light image fusion method based on sliding window technology
CN111652832B (en) * 2020-07-09 2023-05-12 南昌航空大学 Infrared and visible light image fusion method based on sliding window technology
CN111815732A (en) * 2020-07-24 2020-10-23 西北工业大学 Method for coloring intermediate infrared image
CN111815732B (en) * 2020-07-24 2022-04-01 西北工业大学 Method for coloring intermediate infrared image
CN117218048A (en) * 2023-11-07 2023-12-12 天津市测绘院有限公司 Infrared and visible light image fusion method based on three-layer sparse smooth model
CN117218048B (en) * 2023-11-07 2024-03-08 天津市测绘院有限公司 Infrared and visible light image fusion method based on three-layer sparse smooth model

Similar Documents

Publication Publication Date Title
CN109376804B (en) Hyperspectral remote sensing image classification method based on attention mechanism and convolutional neural network
Zhang et al. Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: A review
CN109741256B (en) Image super-resolution reconstruction method based on sparse representation and deep learning
CN107657217B (en) Infrared and visible light video fusion method based on moving target detection
Shekhar et al. Analysis sparse coding models for image-based classification
CN111080566A (en) Visible light and infrared image fusion method based on structural group double-sparse learning
Ji et al. Nonlocal tensor completion for multitemporal remotely sensed images’ inpainting
Yang et al. Multitask dictionary learning and sparse representation based single-image super-resolution reconstruction
Xu et al. Hyperspectral computational imaging via collaborative Tucker3 tensor decomposition
Marivani et al. Multimodal deep unfolding for guided image super-resolution
CN107301630B (en) CS-MRI image reconstruction method based on ordering structure group non-convex constraint
WO2017110836A1 (en) Method and system for fusing sensed measurements
Zou et al. Robust compressive sensing of multichannel EEG signals in the presence of impulsive noise
Aghamaleki et al. Image fusion using dual tree discrete wavelet transform and weights optimization
CN108257093A (en) The single-frame images ultra-resolution method returned based on controllable core and Gaussian process
CN104820967B (en) In-orbit calculating imaging method
CN114693577B (en) Infrared polarized image fusion method based on Transformer
Heiser et al. Compressive hyperspectral image reconstruction with deep neural networks
Wu et al. A distributed fusion framework of multispectral and panchromatic images based on residual network
CN108596866B (en) Medical image fusion method based on combination of sparse low-rank decomposition and visual saliency
Liu et al. GJTD-LR: A trainable grouped joint tensor dictionary with low-rank prior for single hyperspectral image super-resolution
Ye et al. Cross-scene hyperspectral image classification based on DWT and manifold-constrained subspace learning
CN116071226B (en) Electronic microscope image registration system and method based on attention network
Niresi et al. Robust hyperspectral inpainting via low-rank regularized untrained convolutional neural network
CN110689510B (en) Sparse representation-based image fusion method introducing dictionary information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200428