CN104463801A - Multi-sensing-information fusion method based on self-adaptation dictionary learning - Google Patents

Multi-sensing-information fusion method based on self-adaptation dictionary learning

Info

Publication number
CN104463801A
CN104463801A CN201410742256.XA
Authority
CN
China
Prior art keywords
self
fusion method
data fusion
sparse
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410742256.XA
Other languages
Chinese (zh)
Inventor
何同弟
赵文忠
张丁喜
王小军
李春阳
苟程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hexi University
Original Assignee
Hexi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hexi University filed Critical Hexi University
Priority to CN201410742256.XA priority Critical patent/CN104463801A/en
Publication of CN104463801A publication Critical patent/CN104463801A/en
Pending legal-status Critical Current

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a multi-sensor information fusion method based on adaptive dictionary learning. The method comprises the following steps: estimating the maximum noise variance of the input images, partitioning the images into blocks, dictionary learning, adaptive sparse coding, data fusion, image block reconstruction, and image reconstruction. The method combines the advantages of multi-scale, multi-directional analysis with the effective feature extraction of sparse representation. It introduces little spectral distortion and produces a high-quality fused image with a clear scene and rich information content, which is well suited to human observation. Superior fusion results are achieved on both subjective visual assessment and objective evaluation indices, so the method is effective and feasible.

Description

A multi-sensor information fusion method based on adaptive dictionary learning
Technical field
The present invention relates to an information fusion method, and in particular to a multi-sensor information fusion method.
Background technology
The aim of multi-sensor information fusion is to exploit the complementarity and redundancy among data from different sensors and to merge their information so that the fused result preserves the original spectral characteristics while adding as much spatial detail as possible, thereby obtaining the most complete description of the target scene. This not only improves the quality of remote sensing images but also benefits subsequent processing such as classification and target recognition.
Summary of the invention
To overcome the defects of the prior art and solve the technical problems described above, the invention provides a multi-sensor information fusion method based on adaptive dictionary learning.
A multi-sensor information fusion method based on adaptive dictionary learning comprises the following steps:
Estimating the maximum noise variance of the input images;
Image blocking;
Dictionary learning;
Adaptive sparse coding;
Data fusion;
Image block reconstruction;
Image reconstruction.
The image blocking comprises:
dividing each image to be fused into pixel blocks according to the atom size;
arranging the blocks as column vectors into sample matrices;
forming a sample set from the sample matrices.
The dictionary learning comprises:
randomly selecting a number of samples from the sample set to form a new training sample;
removing the mean from the new training sample and then obtaining the adaptive dictionary through iterative computation.
The sparse coding comprises:
sparsely decomposing the sample matrices over the adaptive dictionary using the ASP algorithm to obtain sparse coefficient matrices.
The data fusion comprises:
applying a fusion rule to the sparse coefficient matrices to select the coefficients with salient features as the fusion coefficients.
The image block reconstruction comprises:
convolving the fused sparse coefficients with the overcomplete dictionary to reconstruct the blocks and obtain the reconstructed image block vectors.
The image reconstruction comprises:
restoring the reconstructed image block vectors to image block data, adding back the mean, and rearranging the blocks in the order used during partitioning;
averaging overlapping blocks to reconstruct the image and obtain the final fused image.
Beneficial effects of the invention:
The invention combines the advantages of multi-scale, multi-directional analysis with the effective feature extraction of sparse representation. It introduces little spectral distortion and produces a high-quality fused image with a clear scene and rich information content, which is well suited to human observation. Superior fusion results are achieved on both subjective visual assessment and objective evaluation indices, so the method is effective and feasible.
Brief description of the drawings
Fig. 1 is a schematic diagram of the image fusion process based on adaptive sparse representation.
Detailed description of the embodiments
Embodiments of the invention are described in detail below with reference to the accompanying drawings. It should be noted that the technical features, or combinations of technical features, described in the following embodiments should not be regarded as isolated; they may be combined with one another to achieve better technical effects. In the drawings of the following embodiments, identical reference numerals denote identical features or components and may apply to different embodiments.
As shown in Fig. 1, the iterative computation of an overcomplete sparse representation yields the overcomplete dictionary and the sparse coefficients of the training samples simultaneously. For an image fusion task, the choice of sparse coefficients determines the final fusion result. Taking two well-registered remote sensing images as an example, the fusion process is as follows:
Step 1: estimate the maximum noise variance σ² of the input images and set T = λσ², where T is the sparsity threshold and λ is a coefficient.
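The patent does not fix the noise estimator or the value of λ. The sketch below is a minimal illustration in Python, assuming a wavelet-domain median-absolute-deviation estimate stands in for the unspecified estimator; `img_a` and `img_b` are assumed to be the two registered input images as NumPy arrays, and `lam` is a purely illustrative value.

```python
import numpy as np
import pywt  # PyWavelets

def estimate_noise_sigma(img):
    """Robust noise estimate: median absolute deviation of the finest
    diagonal wavelet subband (Donoho's estimator)."""
    _, (_, _, hh) = pywt.dwt2(img.astype(float), 'db1')
    return np.median(np.abs(hh)) / 0.6745

# Sparsity threshold T = lambda * sigma^2; lambda is illustrative only.
lam = 1.15
sigma = max(estimate_noise_sigma(img_a), estimate_noise_sigma(img_b))
T = lam * sigma ** 2
```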
Step 2: image blocking. Let the atom size be M = n × n. The images to be fused, a and b, are divided pixel by pixel into P1 and P2 blocks of size n × n according to the atom size; if the two images have the same size, then P1 = P2 = P. The blocks are arranged as column vectors into the sample matrices Y1 and Y2, and the sample set F is formed from Y1 and Y2: F = [Y1, Y2].
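As a minimal sketch of this blocking step, the snippet below slides an n × n window over each image and stacks the patches as columns, using scikit-learn's `extract_patches_2d`; the patch side n = 8 is an assumed value, since the patent leaves the atom size M = n × n open.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d

n = 8                                    # patch side; atom size M = n * n (assumed)

def to_sample_matrix(img, n=8):
    """Slide an n x n window over the image and stack every patch as a
    column vector, giving an (n*n) x P sample matrix."""
    patches = extract_patches_2d(img.astype(float), (n, n))
    return patches.reshape(patches.shape[0], -1).T

Y1 = to_sample_matrix(img_a, n)          # M x P1
Y2 = to_sample_matrix(img_b, n)          # M x P2 (P1 = P2 = P for same-size inputs)
F = np.hstack([Y1, Y2])                  # joint sample set F = [Y1, Y2]
```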
Step 3: dictionary learning. P samples are drawn at random from F to form a new training sample Y of size M × N. The mean is first removed from Y, and the adaptive dictionary D is then obtained through iterative computation following the steps described above.
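The patent only states that the adaptive dictionary is obtained by iterative computation on mean-removed training samples and does not name the learning algorithm. In the sketch below, scikit-learn's `DictionaryLearning` (an alternating sparse-coding / dictionary-update scheme) stands in for it; the number of training patches and the dictionary size K are assumed values.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_train = 5000                            # number of training patches (assumed)
K = 256                                   # dictionary size, K > M for overcompleteness (assumed)

cols = rng.choice(F.shape[1], size=min(n_train, F.shape[1]), replace=False)
Y = F[:, cols]                            # M x N training sample
Y_mean = Y.mean(axis=0, keepdims=True)    # per-patch mean (DC component)
Y0 = Y - Y_mean                           # mean-removed training data

# The patent only says "iterative computation"; DictionaryLearning's
# alternating sparse-coding / dictionary-update loop stands in for it.
dl = DictionaryLearning(n_components=K, transform_algorithm='omp', max_iter=20)
dl.fit(Y0.T)                              # scikit-learn expects samples as rows
D = dl.components_.T                      # M x K adaptive dictionary
```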
Step 4: sparse coding. Y1 and Y2 are sparsely decomposed over the dictionary D using the ASP algorithm to obtain the sparse coefficient matrices α1 and α2, each column of which corresponds to one image block.
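The "ASP" algorithm is not specified further in the text. In the sketch below, orthogonal matching pursuit (OMP) via scikit-learn's `sparse_encode` stands in for it, and the sparsity level `n_nonzero` is an assumed parameter; the threshold T from step 1 could equally serve as a stopping tolerance in an error-constrained variant.

```python
from sklearn.decomposition import sparse_encode

def sparse_code(Y, D, n_nonzero=8):
    """Sparse-code each mean-removed patch (column of Y) over dictionary D.
    OMP stands in for the 'ASP' algorithm named in the patent."""
    mean = Y.mean(axis=0, keepdims=True)
    codes = sparse_encode((Y - mean).T, D.T, algorithm='omp',
                          n_nonzero_coefs=n_nonzero)
    return codes.T, mean                   # K x P coefficient matrix, 1 x P patch means

alpha1, mean1 = sparse_code(Y1, D)
alpha2, mean2 = sparse_code(Y2, D)
```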
Step 5: data fusion. A fusion rule is applied to α1 and α2 to select the coefficients with salient features as the fusion coefficients α. The choice of fusion rule is crucial; here the rule is an adaptive weighting of coefficients based on region energy. First the local energies E_JA(m, n) and E_JB(m, n), centered at point (m, n), are computed; the sparse representation coefficients of the fused image are then calculated according to the following formula:
α = [E_JA(m, n) / (E_JA(m, n) + E_JB(m, n))] · α1 + [E_JB(m, n) / (E_JA(m, n) + E_JB(m, n))] · α2
Because a larger local region energy indicates that the central pixel carries a salient feature of the source image, a region of image A with larger energy, as computed by the above formula, receives a larger weighting coefficient, and a region with smaller energy receives a smaller one. The weighting coefficients thus adapt to the features of the source images themselves, so this adaptive fusion rule is effective and feasible.
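A minimal sketch of this fusion rule follows. Since the patent does not spell out how the local energy is computed, the squared l2 norm of each patch's coefficient vector is used here as a stand-in for E_JA(m, n) and E_JB(m, n), and the fused patch means are taken as a simple average (an assumption; the patent does not address the DC component).

```python
import numpy as np

def fuse_coefficients(alpha1, alpha2, eps=1e-12):
    """Adaptive weighted fusion of the sparse coefficient matrices.
    The squared l2 norm of each patch's coefficient vector stands in for
    the local region energies E_JA(m, n) and E_JB(m, n)."""
    E_a = np.sum(alpha1 ** 2, axis=0)      # energy per patch, shape (P,)
    E_b = np.sum(alpha2 ** 2, axis=0)
    w_a = E_a / (E_a + E_b + eps)          # larger energy -> larger weight
    w_b = E_b / (E_a + E_b + eps)
    return alpha1 * w_a + alpha2 * w_b     # fused coefficients alpha, K x P

alpha_f = fuse_coefficients(alpha1, alpha2)
mean_f = (mean1 + mean2) / 2.0             # fused patch means: simple average (assumed)
```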
Step 6: image block reconstruction. The fusion coefficients are convolved with the overcomplete dictionary to reconstruct the blocks, yielding the fused image block vectors y_i.
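In matrix form this reconstruction reduces to the product of the dictionary with the fused coefficient matrix; a one-line sketch, continuing the variables introduced above:

```python
# Fused patch vectors y_i = D @ alpha_i, with the patch mean added back.
Y_fused = D @ alpha_f + mean_f             # M x P matrix whose columns are fused patches
```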
Step 7: image reconstruction. The vectors y_i are restored to image block data, the mean is added back, and the blocks are rearranged in the order used during partitioning; overlapping blocks are averaged to reconstruct the image and obtain the final fused image.
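Finally, a sketch of reassembling the fused image from the overlapping patches, using scikit-learn's `reconstruct_from_patches_2d`, which averages overlapping regions as described above; the output is assumed to have the same size and 8-bit range as the inputs.

```python
import numpy as np
from sklearn.feature_extraction.image import reconstruct_from_patches_2d

H, W = img_a.shape                         # fused image takes the input size (assumed)
patches = Y_fused.T.reshape(-1, n, n)      # back to a P x n x n patch stack
fused = reconstruct_from_patches_2d(patches, (H, W))   # overlapping blocks are averaged
fused = np.clip(fused, 0, 255).astype(np.uint8)        # assuming 8-bit input images
```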
The invention combines the advantages of multi-scale, multi-directional analysis with the effective feature extraction of sparse representation. It introduces little spectral distortion and produces a high-quality fused image with a clear scene and rich information content, which is well suited to human observation. Superior fusion results are achieved on both subjective visual assessment and objective evaluation indices, so the method is effective and feasible.
Although several embodiments of the invention have been presented, those skilled in the art will appreciate that the embodiments herein may be modified without departing from the spirit of the invention. The above embodiments are exemplary and should not be taken as limiting the scope of protection of the invention.

Claims (7)

1. A multi-sensor information fusion method based on adaptive dictionary learning, characterized in that the steps are:
estimating the maximum noise variance of the input images;
image blocking;
dictionary learning;
adaptive sparse coding;
data fusion;
image block reconstruction;
image reconstruction.
2. The multi-sensor information fusion method based on adaptive dictionary learning according to claim 1, characterized in that the image blocking comprises:
dividing each image to be fused into pixel blocks according to the atom size;
arranging the blocks as column vectors into sample matrices;
forming a sample set from the sample matrices.
3. The multi-sensor information fusion method based on adaptive dictionary learning according to claim 1, characterized in that the dictionary learning comprises:
randomly selecting a number of samples from the sample set to form a new training sample;
removing the mean from the new training sample and then obtaining the adaptive dictionary through iterative computation.
4. The multi-sensor information fusion method based on adaptive dictionary learning according to claim 1, characterized in that the sparse coding comprises:
sparsely decomposing the sample matrices over the adaptive dictionary using the ASP algorithm to obtain sparse coefficient matrices.
5. The multi-sensor information fusion method based on adaptive dictionary learning according to claim 1, characterized in that the data fusion comprises:
applying a fusion rule to the sparse coefficient matrices to select the coefficients with salient features as the fusion coefficients.
6. The multi-sensor information fusion method based on adaptive dictionary learning according to claim 1, characterized in that the image block reconstruction comprises:
convolving the fused sparse coefficients with the overcomplete dictionary to reconstruct the blocks and obtain the reconstructed image block vectors.
7. The multi-sensor information fusion method based on adaptive dictionary learning according to claim 1, characterized in that the image reconstruction comprises:
restoring the reconstructed image block vectors to image block data, adding back the mean, and rearranging the blocks in the order used during partitioning;
averaging overlapping blocks to reconstruct the image and obtain the final fused image.
CN201410742256.XA 2014-12-04 2014-12-04 Multi-sensing-information fusion method based on self-adaptation dictionary learning Pending CN104463801A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410742256.XA CN104463801A (en) 2014-12-04 2014-12-04 Multi-sensing-information fusion method based on self-adaptation dictionary learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410742256.XA CN104463801A (en) 2014-12-04 2014-12-04 Multi-sensing-information fusion method based on self-adaptation dictionary learning

Publications (1)

Publication Number Publication Date
CN104463801A true CN104463801A (en) 2015-03-25

Family

ID=52909789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410742256.XA Pending CN104463801A (en) 2014-12-04 2014-12-04 Multi-sensing-information fusion method based on self-adaptation dictionary learning

Country Status (1)

Country Link
CN (1) CN104463801A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107169925A (en) * 2017-04-21 2017-09-15 西安电子科技大学 The method for reconstructing of stepless zooming super-resolution image
CN112634468A (en) * 2021-03-05 2021-04-09 南京魔鱼互动智能科技有限公司 Virtual scene and real scene video fusion algorithm based on SpPccs

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208102A (en) * 2013-03-29 2013-07-17 上海交通大学 Remote sensing image fusion method based on sparse representation
US20140072209A1 (en) * 2012-09-13 2014-03-13 Los Alamos National Security, Llc Image fusion using sparse overcomplete feature dictionaries

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140072209A1 (en) * 2012-09-13 2014-03-13 Los Alamos National Security, Llc Image fusion using sparse overcomplete feature dictionaries
CN103208102A (en) * 2013-03-29 2013-07-17 上海交通大学 Remote sensing image fusion method based on sparse representation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
严春满 et al.: "Multi-focus image fusion with adaptive dictionary learning", Journal of Image and Graphics *
王珺 et al.: "Image fusion method based on multi-scale dictionary learning", Journal of Northwestern Polytechnical University *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107169925A (en) * 2017-04-21 2017-09-15 西安电子科技大学 The method for reconstructing of stepless zooming super-resolution image
CN107169925B (en) * 2017-04-21 2019-10-22 西安电子科技大学 The method for reconstructing of stepless zooming super-resolution image
CN112634468A (en) * 2021-03-05 2021-04-09 南京魔鱼互动智能科技有限公司 Virtual scene and real scene video fusion algorithm based on SpPccs
CN112634468B (en) * 2021-03-05 2021-05-18 南京魔鱼互动智能科技有限公司 Virtual scene and real scene video fusion algorithm based on SpPccs

Similar Documents

Publication Publication Date Title
CN110969577B (en) Video super-resolution reconstruction method based on deep double attention network
CN110175986B (en) Stereo image visual saliency detection method based on convolutional neural network
CN103475876B (en) A kind of low bit rate compression image super-resolution rebuilding method based on study
Huang et al. Deep hyperspectral image fusion network with iterative spatio-spectral regularization
CN105551010A (en) Multi-focus image fusion method based on NSCT (Non-Subsampled Contourlet Transform) and depth information incentive PCNN (Pulse Coupled Neural Network)
CN107730482B (en) Sparse fusion method based on regional energy and variance
CN105761223A (en) Iterative noise reduction method based on image low-rank performance
CN105761234A (en) Structure sparse representation-based remote sensing image fusion method
CN110533623B (en) Full convolution neural network multi-focus image fusion method based on supervised learning
CN110189286B (en) Infrared and visible light image fusion method based on ResNet
CN103455991A (en) Multi-focus image fusion method
CN103049760B (en) Based on the rarefaction representation target identification method of image block and position weighting
CN105005798B (en) One kind is based on the similar matched target identification method of structures statistics in part
CN104657951A (en) Multiplicative noise removal method for image
CN104036481B (en) Multi-focus image fusion method based on depth information extraction
Beaulieu et al. Deep image-to-image transfer applied to resolution enhancement of sentinel-2 images
CN103945217A (en) Complex wavelet domain semi-blind image quality evaluation method and system based on entropies
WO2012079587A3 (en) Method and device for parallel processing of images
CN104881845A (en) Method And Apparatus For Processing Image
CN104021523A (en) Novel method for image super-resolution amplification based on edge classification
Zhang et al. WGGAN: A wavelet-guided generative adversarial network for thermal image translation
CN116152061A (en) Super-resolution reconstruction method based on fuzzy core estimation
CN104408697A (en) Image super-resolution reconstruction method based on genetic algorithm and regular prior model
Salem et al. Semantic image inpainting using self-learning encoder-decoder and adversarial loss
CN103985104A (en) Multi-focusing image fusion method based on higher-order singular value decomposition and fuzzy inference

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150325

RJ01 Rejection of invention patent application after publication