CN104616261A - Method for fusing Shearlet domain multi-spectral and full-color images based on spectral characteristics - Google Patents

Info

Publication number
CN104616261A
Authority
CN
China
Prior art keywords: fusion, pan, image, csf, frequency
Prior art date
Legal status
Granted
Application number
CN201510066167.2A
Other languages
Chinese (zh)
Other versions
CN104616261B (en)
Inventor
张宝华
吕晓琪
裴海全
刘鹤
周文涛
Current Assignee
Inner Mongolia University of Science and Technology
Original Assignee
Inner Mongolia University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Inner Mongolia University of Science and Technology
Priority to CN201510066167.2A
Publication of CN104616261A
Application granted
Publication of CN104616261B
Legal status: Expired - Fee Related (anticipated expiration)

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a method for fusing Shearlet-domain multi-spectral and full-color (panchromatic) images based on spectral characteristics. The method comprises the steps of (1) performing an IHS transform on the multi-spectral image and decomposing its intensity component by two-dimensional empirical mode decomposition to obtain BIMFs and a residue; (2) performing Shearlet decomposition on the selected BIMFs and on the full-color image to obtain a low-frequency sub-band and a plurality of high-frequency sub-bands; (3) building fusion rules and selecting fusion coefficients, namely decomposing the low-frequency coefficients with a Laplacian pyramid and selecting the low-frequency fusion coefficients by a compare-and-take-maximum rule, while fusing the corresponding high-frequency coefficients according to a fusion rule based on a dual-channel PCNN (Pulse Coupled Neural Network); and (4) performing the inverse IHS transform to obtain the final fused image. The fusion algorithm protects the spectral information of the images while improving the expression of spatial detail, and thus improves the fusion result.

Description

A Shearlet-domain multispectral and panchromatic image fusion method based on spectral characteristics
Technical field
The present invention relates to a method in the technical field of image processing, and specifically to a Shearlet-domain multispectral and panchromatic image fusion method based on spectral characteristics.
Background art
A remote sensing image cannot possess high spatial resolution and high spectral resolution at the same time, so fusion is needed to improve spatial resolution while exploiting the available spectral resolution. Remote sensing image fusion processes multi-source data that are redundant in space or time according to certain rules, obtains information that is more accurate and richer than any single data source, and generates a composite image with new spatial, spectral and temporal characteristics.
A full-colour (panchromatic) image has high spatial resolution and expresses the fine details of ground objects, but its spectral resolution is insufficient; a multispectral image is rich in spectral information and well suited to identifying the outlines of ground targets, but its spatial resolution is low. Fusing multispectral and panchromatic images can exploit the high resolution of the panchromatic image to improve the resolution of the multispectral image, while making full use of the fine description of targets that is specific to the multispectral image, thereby obtaining richer target information.
Improved component substitution and multiresolution analysis are two approaches to the spectral distortion problem. By improving the conventional IHS transform formula and replacing the matrix multiplication with a matrix addition, the fusion can be extended to more than three bands and the degree of spectral distortion reduced; some studies add an adjustment parameter to control how much spatial detail is injected, so that the fusion result lies between the standard IHS transform and the Brovey transform.
To obtain better fusion results, a growing body of work combines component substitution with multiresolution analysis. A representative example is the fusion method combining the IHS transform with the wavelet transform: the intensity component of the MS image and the Pan image are first pre-processed, wavelet decomposition is then applied, the high-frequency coefficients of the intensity component are replaced with those of the panchromatic image while the low-frequency component of the MS image is retained, and the fused result forms a new intensity component; because the spectral information carried by the intensity component is preserved, spectral distortion is reduced to some extent. A similar method combines the IHS transform with the contourlet transform: the intensity component of the MS image is first histogram-matched to the PAN image, and multiresolution analysis injects the detail of the PAN image into the intensity component to obtain a high-resolution principal component; since fusion takes place only between the intensity component and the PAN image, the spectral distortion of the fused image is partly reduced. Although these methods reduce spectral distortion, they all involve down-sampling and up-sampling, which produce pseudo-Gibbs artifacts and lose part of the spatial information of the Pan image; the spatial resolution of the fused image is often inferior to that of traditional IHS fusion, and the result is sensitive to the choice of wavelet basis function and decomposition level, making parameter selection relatively complex.
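To make the wavelet-combined scheme described above concrete, the following is a minimal sketch in Python. The triangle-model intensity, mean/std histogram matching, the db1 wavelet and two decomposition levels are illustrative assumptions, not details taken from the cited prior art.

```python
import numpy as np
import pywt

def ihs_wavelet_fusion(ms_rgb, pan, levels=2, wavelet="db1"):
    """Prior-art style fusion: keep the MS low-frequency approximation and
    replace the high-frequency wavelet coefficients of the MS intensity
    component with those of the histogram-matched Pan image."""
    r, g, b = ms_rgb[..., 0], ms_rgb[..., 1], ms_rgb[..., 2]
    intensity = (r + g + b) / 3.0

    # Simple mean/std histogram matching of Pan to the intensity component.
    pan_m = (pan - pan.mean()) / (pan.std() + 1e-12)
    pan_m = pan_m * intensity.std() + intensity.mean()

    ci = pywt.wavedec2(intensity, wavelet, level=levels)
    cp = pywt.wavedec2(pan_m, wavelet, level=levels)

    fused = [ci[0]] + list(cp[1:])          # MS approximation + Pan details
    new_intensity = pywt.waverec2(fused, wavelet)
    return new_intensity[:intensity.shape[0], :intensity.shape[1]]
```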
To overcome these shortcomings, BEMD can be used in place of the above multiresolution analysis methods. BEMD offers a more intuitive feature representation and more flexible frequency characteristics than wavelet analysis: the decomposition requires no basis function to be selected, yet produces a multi-resolution representation similar to the wavelet transform without introducing redundant information. The BIMFs obtained by the decomposition are correlated across sub-bands, similar to the correlation between different wavelet scales, and because the process involves no down-sampling it avoids the pseudo-Gibbs artifacts that arise in the wavelet transform. BEMD detects image edges and recognizes texture well, which benefits detail preservation and texture extraction. Each BIMF contains the major edge contours of the original image, i.e. its spatial structure information, while the residue contains the trend of the image, corresponding to the low-frequency component of the remote sensing image and carrying its spectral information. Using BEMD to realize the multiresolution analysis therefore protects the spectral information better.
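The sifting idea behind BEMD can be sketched as follows; this is a heavily simplified, single-BIMF illustration. The 3×3 extrema window, cubic scattered-data interpolation for the envelopes and the fixed number of sifting iterations are assumptions — practical BEMD codes typically use radial basis functions and standard-deviation-based stopping criteria.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from scipy.interpolate import griddata

def _envelope(img, extremum_filter, size=3):
    """Interpolate a smooth surface through the local extrema of img."""
    mask = extremum_filter(img, size=size) == img
    ys, xs = np.nonzero(mask)
    grid_y, grid_x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return griddata((ys, xs), img[ys, xs], (grid_y, grid_x),
                    method="cubic", fill_value=float(img.mean()))

def bemd_first_imf(img, n_sift=5):
    """Return (BIMF_1, residue): one sifting loop of the BEMD decomposition."""
    h = img.astype(float)
    for _ in range(n_sift):
        upper = _envelope(h, maximum_filter)
        lower = _envelope(h, minimum_filter)
        h = h - (upper + lower) / 2.0      # subtract the local mean surface
    return h, img.astype(float) - h
```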
Summary of the invention
The technical problem to be solved by the present invention is to provide a Shearlet-domain multispectral and panchromatic image fusion method based on spectral characteristics which, while improving spatial resolution, takes the visual characteristics of the human eye under a colour background into account, retains as much as possible of the spectral information in the original intensity component of the multispectral image, and reduces spectral distortion.
The technical scheme of the present invention is a Shearlet-domain multispectral and panchromatic image fusion method based on spectral characteristics, comprising the following steps:
(1) Apply the IHS transform to the multispectral image to obtain the intensity component P_I, the hue component P_H and the saturation component P_S; decompose the intensity component P_I by two-dimensional empirical mode decomposition (BEMD) into n−1 bidimensional intrinsic mode functions BIMF_t, t = 1, 2, …, n−1 (n > 3), arranged from high to low frequency, together with a residue res (the IHS transform pair and the BIMF energy of formula (1) below are sketched in code after step (4));
(2) Compute the energy of the n−1 bidimensional intrinsic mode functions BIMF_t and sort them; sum the three components with the highest energy to obtain B_MS. Decompose B_MS and the panchromatic image I_PAN by the Shearlet transform with three decomposition levels, obtaining for each a low-frequency sub-band and three levels of high-frequency sub-bands B_MS^{h,m} and I_PAN^{h,m}, m = 1, 2, 3. The energy of a BIMF_t is computed by formula (1):

E(BIMF_t) = Σ_{x=1..i} Σ_{y=1..j} [D_t(x, y)]²    (1)

where D_t(x, y) is the coefficient of BIMF_t at position (x, y), and i and j are the number of rows and columns of BIMF_t, respectively;
(3) Establish the fusion rules and select the fusion coefficients: decompose the low-frequency coefficients with a Laplacian pyramid to obtain the corresponding decomposition coefficients L_{MS,ij} and L_{PAN,ij}, and select the low-frequency fusion coefficient L_{F,ij} by the compare-and-take-maximum rule of formula (2):

L_{F,ij} = { L_{MS,ij},  if L_{MS,ij} ≥ L_{PAN,ij}
           { L_{PAN,ij}, if L_{MS,ij} < L_{PAN,ij}    (2)
The corresponding high-frequency coefficients are fused according to a rule based on a dual-channel pulse coupled neural network (PCNN), as follows:
1. Feed the corresponding high-frequency sub-band coefficients B_MS^{h,m} and I_PAN^{h,m} into the two input channels of the dual-channel PCNN, and use the contrast sensitivity factor CSF of the coefficients, computed with formula (3), as the external stimulus of the network:

CSF(f) = 2.6 × (0.0192 + 0.114 f) · exp[−(0.114 f)^1.1]    (3)

where f = sqrt(f_x² + f_y²) is the spatial frequency, and f_x, f_y are the spatial frequencies in the horizontal and vertical directions:

f_x = sqrt( (1/(MN)) Σ_{i=0..M−1} Σ_{j=1..N−1} [f(i, j) − f(i, j−1)]² )    (4)

f_y = sqrt( (1/(MN)) Σ_{i=1..M−1} Σ_{j=0..N−1} [f(i, j) − f(i−1, j)]² )    (5)

where M and N are the number of rows and columns of the source image;
2. Compare the CSF values and obtain the fusion coefficients of each directional sub-band according to formula (6):

H_{F,ij}^{m,k} = { B_{MS,ij}^{h,m},  if CSF(B_{MS,ij}^{h,m}) > CSF(I_{PAN,ij}^{h,m})
                { I_{PAN,ij}^{h,m}, if CSF(B_{MS,ij}^{h,m}) < CSF(I_{PAN,ij}^{h,m})    (6)

where m and k denote the m-th decomposition level and the k-th directional sub-band, and B_{MS,ij}^{h,m}, I_{PAN,ij}^{h,m} are the high-frequency sub-band coefficients at position (i, j);
3. Reconstruct L_{F,ij} and H_{F,ij}^{m,k} with the inverse Shearlet transform to obtain the fusion result F_I, and compute the new intensity component P_I' with formula (7):

P_I' = F_I + res    (7)

where res is the residue obtained in step (1);
(4) Apply the inverse IHS transform to the new intensity component P_I', the hue component P_H and the saturation component P_S to obtain the final fused image.
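The colour-space and energy computations of steps (1), (2) and (4) can be sketched as follows. The triangle-model IHS matrix is one common linear variant; the patent does not state which IHS formulation it uses, so the matrix and the helper names are assumptions.

```python
import numpy as np

# Linear (triangle-model) IHS transform: rows give I, v1, v2 from R, G, B.
_IHS = np.array([[1/3, 1/3, 1/3],
                 [-np.sqrt(2)/6, -np.sqrt(2)/6, np.sqrt(2)/3],
                 [1/np.sqrt(2), -1/np.sqrt(2), 0.0]])

def ihs_forward(rgb):
    """RGB -> (I, v1, v2); channel 0 is the intensity component P_I of step (1)."""
    return (rgb.reshape(-1, 3) @ _IHS.T).reshape(rgb.shape)

def ihs_inverse(ivv):
    """(I, v1, v2) -> RGB, used in step (4) after P_I has been replaced by P_I'."""
    return (ivv.reshape(-1, 3) @ np.linalg.inv(_IHS).T).reshape(ivv.shape)

def bimf_energy(bimf):
    """Formula (1): sum of the squared coefficients of one BIMF."""
    return float(np.sum(bimf.astype(float) ** 2))

def select_bms(bimfs):
    """Step (2): sum of the three BIMFs with the highest energy, i.e. B_MS."""
    ranked = sorted(bimfs, key=bimf_energy, reverse=True)
    return ranked[0] + ranked[1] + ranked[2]
```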
Compared with the prior art, the present invention has the following advantages:
The present invention applies BEMD and the contrast-sensitivity characteristic of the human eye to remote sensing image fusion. The source image is first converted to a new colour space by the IHS transform; the I component is decomposed by two-dimensional empirical mode decomposition into intrinsic mode functions containing the high-frequency content and a residue containing the low-frequency content. The Pan image and the IMFs are then decomposed by the Shearlet transform into coefficients at multiple scales and directions. The fusion rules take the characteristics of the human eye into account: a dual-channel PCNN driven by colour contrast sensitivity and a Laplacian pyramid are used to select the fusion coefficients, and the fused image is reconstructed by the inverse IHS transform. The new algorithm exploits the ability of the Shearlet transform to provide an optimal representation of the image.
To address the spectral distortion caused by traditional fusion methods, the present invention exploits the colour contrast sensitivity characteristic. The I component is first obtained by the IHS transform, and BEMD then separates the spectral information of the multispectral image (the residue of the I component) from its spatial information (the BIMFs of the I component); the spatial information of the Pan image is injected into the fused image by fusing it with the IMFs. To match the characteristics of the human eye, the Shearlet transform decomposes the Pan image and the IMFs into coefficients at multiple scales and directions, and a dual-channel PCNN driven by colour contrast sensitivity selects the fused high-frequency coefficients, realizing feature extraction under a colour background. The fusion algorithm protects the spectral information of the image as far as possible while strengthening the expression of spatial detail, improving the fusion result.
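The contrast-sensitivity factor of formulas (3)-(5) that drives the high-frequency fusion can be sketched as follows. Evaluating it over a whole coefficient block (rather than a local window) is an assumption, since the patent does not specify the neighbourhood.

```python
import numpy as np

def csf_factor(block):
    """CSF(f) of formula (3), with f derived from formulas (4) and (5)."""
    b = block.astype(float)
    m, n = b.shape
    fx = np.sqrt(np.sum((b[:, 1:] - b[:, :-1]) ** 2) / (m * n))  # formula (4)
    fy = np.sqrt(np.sum((b[1:, :] - b[:-1, :]) ** 2) / (m * n))  # formula (5)
    f = np.hypot(fx, fy)                                         # spatial frequency
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)
```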
Brief description of the drawings
Fig. 1 is a schematic flowchart of the fusion process of the present invention.
Fig. 2 compares the results of several fusion algorithms.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings.
As shown in Fig. 1, a Shearlet-domain multispectral and panchromatic image fusion method based on spectral characteristics comprises the following steps:
(1) Apply the IHS transform to the multispectral image to obtain the intensity component P_I, the hue component P_H and the saturation component P_S; decompose the intensity component P_I by two-dimensional empirical mode decomposition (BEMD) into n−1 bidimensional intrinsic mode functions BIMF_t, t = 1, 2, …, n−1 (n > 3), arranged from high to low frequency, together with a residue res;
(2) Compute the energy of the n−1 bidimensional intrinsic mode functions BIMF_t and sort them; sum the three components with the highest energy to obtain B_MS. Decompose B_MS and the panchromatic image I_PAN by the Shearlet transform with three decomposition levels, where levels 1 to 3 contain 10, 10 and 18 directional sub-bands respectively, obtaining for each a low-frequency sub-band and three levels of high-frequency sub-bands B_MS^{h,m} and I_PAN^{h,m}, m = 1, 2, 3. The energy of a BIMF_t is computed by formula (1):

E(BIMF_t) = Σ_{x=1..i} Σ_{y=1..j} [D_t(x, y)]²    (1)

where D_t(x, y) is the coefficient of BIMF_t at position (x, y), and i and j are the number of rows and columns of BIMF_t, respectively;
(3) Establish the fusion rules and select the fusion coefficients: decompose the low-frequency coefficients with a Laplacian pyramid to obtain the corresponding decomposition coefficients L_{MS,ij} and L_{PAN,ij}, and select the low-frequency fusion coefficient L_{F,ij} by the compare-and-take-maximum rule of formula (2) (both fusion rules of this step are sketched in code after step (4) of this embodiment):

L_{F,ij} = { L_{MS,ij},  if L_{MS,ij} ≥ L_{PAN,ij}
           { L_{PAN,ij}, if L_{MS,ij} < L_{PAN,ij}    (2)
The corresponding high-frequency coefficients are fused according to a rule based on a dual-channel pulse coupled neural network (PCNN), as follows:
1. Feed the corresponding high-frequency sub-band coefficients B_MS^{h,m} and I_PAN^{h,m} into the two input channels of the dual-channel PCNN, and use the contrast sensitivity factor CSF of the coefficients, computed with formula (3), as the external stimulus of the network:

CSF(f) = 2.6 × (0.0192 + 0.114 f) · exp[−(0.114 f)^1.1]    (3)

where f = sqrt(f_x² + f_y²) is the spatial frequency, and f_x, f_y are the spatial frequencies in the horizontal and vertical directions:

f_x = sqrt( (1/(MN)) Σ_{i=0..M−1} Σ_{j=1..N−1} [f(i, j) − f(i, j−1)]² )    (4)

f_y = sqrt( (1/(MN)) Σ_{i=1..M−1} Σ_{j=0..N−1} [f(i, j) − f(i−1, j)]² )    (5)

where M and N are the number of rows and columns of the source image;
2. Compare the CSF values and obtain the fusion coefficients of each directional sub-band according to formula (6):

H_{F,ij}^{m,k} = { B_{MS,ij}^{h,m},  if CSF(B_{MS,ij}^{h,m}) > CSF(I_{PAN,ij}^{h,m})
                { I_{PAN,ij}^{h,m}, if CSF(B_{MS,ij}^{h,m}) < CSF(I_{PAN,ij}^{h,m})    (6)

where m and k denote the m-th decomposition level and the k-th directional sub-band, and B_{MS,ij}^{h,m}, I_{PAN,ij}^{h,m} are the coefficients at position (i, j) of the high-frequency sub-bands obtained in step (2).
3. Reconstruct L_{F,ij} and H_{F,ij}^{m,k} with the inverse Shearlet transform to obtain the fusion result F_I, and compute the new intensity component P_I' with formula (7):

P_I' = F_I + res    (7)

where res is the residue obtained in step (1);
(4) Apply the inverse IHS transform to the new intensity component P_I', the hue component P_H and the saturation component P_S to obtain the final fused image.
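The two selection rules of step (3) can be sketched as follows, using cv2.pyrDown/pyrUp for the Laplacian pyramid. The number of pyramid levels and the sub-band-wise (rather than window-wise) CSF comparison are assumptions, and csf_fn stands for the csf_factor helper sketched earlier.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=3):
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)                 # band-pass (Laplacian) layer
        cur = down
    pyr.append(cur)                          # coarsest approximation
    return pyr

def fuse_low_frequency(low_ms, low_pan, levels=3):
    """Formula (2): point-wise compare-and-take-maximum on the pyramid layers."""
    pm = laplacian_pyramid(low_ms, levels)
    pp = laplacian_pyramid(low_pan, levels)
    fused = [np.where(a >= b, a, b) for a, b in zip(pm, pp)]
    out = fused[-1]
    for layer in reversed(fused[:-1]):       # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=(layer.shape[1], layer.shape[0])) + layer
    return out

def fuse_high_frequency(h_ms, h_pan, csf_fn):
    """Formula (6): keep the sub-band coefficients with the larger CSF factor."""
    return h_ms if csf_fn(h_ms) > csf_fn(h_pan) else h_pan
```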
As shown in Fig. 2, the method of the present invention is compared with several other fusion methods. Panel (a) is the Pan image and panel (b) the MS image; panels (c)-(g) show the fusion results of GIHS, the discrete wavelet transform (DWT), principal component analysis (PCA), Brovey and the method of the present invention, respectively. The comparison shows that image fusion with the method of the present invention not only effectively enriches the background information of the image but also preserves the details in the image to the greatest extent, matching the visual characteristics of the human eye.
Table 1 compares objective evaluation indices. The quality of the fused images obtained by the different fusion methods is measured by the standard deviation (Std), information entropy (En), spatial frequency (SF), average gradient (Gradient), correlation coefficient (CC) and spectral distortion (WD). Larger values of Std, En, SF, Gradient and CC indicate that the fused image contains richer feature information and is closer to the source images, i.e. a better fusion result; WD measures the degree of spectral distortion, and a larger value indicates more severe distortion. The data in Table 1 show that the first five indices of the proposed method are all clearly improved over the other methods: the fused image generated by the proposed method has larger local gradients, a more dispersed grey-level distribution, richer texture and more prominent detail, while its WD is the lowest of all methods, indicating that the proposed method loses less spectral information and achieves a better fusion result.
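For reference, the objective indices of Table 1 can be computed as sketched below. WD (spectral distortion) has several definitions in the literature, so the mean-absolute-difference variant shown here is only an assumption.

```python
import numpy as np

def std_dev(img):                            # Std: grey-level dispersion
    return float(np.std(img))

def entropy(img, bins=256):                  # En: Shannon entropy of the histogram
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def spatial_frequency(img):                  # SF: combined row/column frequency
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return float(np.hypot(rf, cf))

def average_gradient(img):                   # Gradient: mean local gradient magnitude
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def correlation_coefficient(a, b):           # CC between fused image and source
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def spectral_distortion(fused, ms_band):     # WD: one common (assumed) variant
    return float(np.mean(np.abs(fused.astype(float) - ms_band.astype(float))))
```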
Table 1

Claims (1)

1. A Shearlet-domain multispectral and panchromatic image fusion method based on spectral characteristics, characterized in that it comprises the following steps:
(1) Apply the IHS transform to the multispectral image to obtain the intensity component P_I, the hue component P_H and the saturation component P_S; decompose the intensity component P_I by two-dimensional empirical mode decomposition (BEMD) into n−1 bidimensional intrinsic mode functions BIMF_t, t = 1, 2, …, n−1 (n > 3), arranged from high to low frequency, together with a residue res;
(2) Compute the energy of the n−1 bidimensional intrinsic mode functions BIMF_t and sort them; sum the three components with the highest energy to obtain B_MS. Decompose B_MS and the panchromatic image I_PAN by the Shearlet transform with three decomposition levels, obtaining for each a low-frequency sub-band and three levels of high-frequency sub-bands B_MS^{h,m} and I_PAN^{h,m}, m = 1, 2, 3. The energy of a BIMF_t is computed by formula (1):

E(BIMF_t) = Σ_{x=1..i} Σ_{y=1..j} [D_t(x, y)]²    (1)

where D_t(x, y) is the coefficient of BIMF_t at position (x, y), and i and j are the number of rows and columns of BIMF_t, respectively;
(3) Establish the fusion rules and select the fusion coefficients: decompose the low-frequency coefficients with a Laplacian pyramid to obtain the corresponding decomposition coefficients L_{MS,ij} and L_{PAN,ij}, and select the low-frequency fusion coefficient L_{F,ij} by the compare-and-take-maximum rule of formula (2):

L_{F,ij} = { L_{MS,ij},  if L_{MS,ij} ≥ L_{PAN,ij}
           { L_{PAN,ij}, if L_{MS,ij} < L_{PAN,ij}    (2)
The corresponding high-frequency coefficients are fused according to a rule based on a dual-channel pulse coupled neural network (PCNN), as follows:
1. Feed the corresponding high-frequency sub-band coefficients B_MS^{h,m} and I_PAN^{h,m} into the two input channels of the dual-channel PCNN, and use the contrast sensitivity factor CSF of the coefficients, computed with formula (3), as the external stimulus of the network:

CSF(f) = 2.6 × (0.0192 + 0.114 f) · exp[−(0.114 f)^1.1]    (3)

where f = sqrt(f_x² + f_y²) is the spatial frequency, and f_x, f_y are the spatial frequencies in the horizontal and vertical directions:

f_x = sqrt( (1/(MN)) Σ_{i=0..M−1} Σ_{j=1..N−1} [f(i, j) − f(i, j−1)]² )    (4)

f_y = sqrt( (1/(MN)) Σ_{i=1..M−1} Σ_{j=0..N−1} [f(i, j) − f(i−1, j)]² )    (5)

where M and N are the number of rows and columns of the source image;
2. Compare the CSF values and obtain the fusion coefficients of each directional sub-band according to formula (6):

H_{F,ij}^{m,k} = { B_{MS,ij}^{h,m},  if CSF(B_{MS,ij}^{h,m}) > CSF(I_{PAN,ij}^{h,m})
                { I_{PAN,ij}^{h,m}, if CSF(B_{MS,ij}^{h,m}) < CSF(I_{PAN,ij}^{h,m})    (6)

where m and k denote the m-th decomposition level and the k-th directional sub-band, and B_{MS,ij}^{h,m}, I_{PAN,ij}^{h,m} are the high-frequency sub-band coefficients at position (i, j);
3. Reconstruct L_{F,ij} and H_{F,ij}^{m,k} with the inverse Shearlet transform to obtain the fusion result F_I, and compute the new intensity component P_I' with formula (7):

P_I' = F_I + res    (7)

where res is the residue obtained in step (1);
(4) Apply the inverse IHS transform to the new intensity component P_I', the hue component P_H and the saturation component P_S to obtain the final fused image.
CN201510066167.2A 2015-02-09 2015-02-09 Shearlet-domain multispectral and panchromatic image fusion method based on spectral characteristics Expired - Fee Related CN104616261B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510066167.2A CN104616261B (en) 2015-02-09 2015-02-09 Shearlet-domain multispectral and panchromatic image fusion method based on spectral characteristics

Publications (2)

Publication Number Publication Date
CN104616261A (en) 2015-05-13
CN104616261B CN104616261B (en) 2017-10-27

Family

ID=53150694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510066167.2A Expired - Fee Related CN104616261B (en) 2015-02-09 2015-02-09 Shearlet-domain multispectral and panchromatic image fusion method based on spectral characteristics

Country Status (1)

Country Link
CN (1) CN104616261B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120269430A1 (en) * 2011-04-22 2012-10-25 Michael Paul Deskevich System and method for combining color information with spatial information in multispectral images
FR2994007A1 (en) * 2012-07-30 2014-01-31 Centre Nat Etd Spatiales Method for synthesizing high space resolution multispectral image from panchromatic image, involves merging panchromatic image and color density image converted on geometrical scale to produce high spatial resolution multispectral image
CN103037168A (en) * 2012-12-10 2013-04-10 内蒙古科技大学 Stable Surfacelet domain multi-focus image fusion method based on compound type pulse coupled neural network (PCNN)
CN103116881A (en) * 2013-01-27 2013-05-22 西安电子科技大学 Remote sensing image fusion method based on PCA (principal component analysis) and Shearlet conversion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张宝华 et al., "A multi-focus image fusion method in the NSCT domain based on a composite PCNN", Journal of Chinese Computer Systems (《小型微型计算机系统》) *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105354802A (en) * 2015-10-23 2016-02-24 哈尔滨工业大学 Hyperspectral image continuous spectrum section recovery method based on bidirectional gradient prediction
CN105404855A (en) * 2015-10-29 2016-03-16 深圳怡化电脑股份有限公司 Image processing methods and devices
CN106327459B (en) * 2016-09-06 2019-03-12 四川大学 Visible light and infrared image fusion method based on UDCT and PCNN
CN106327459A (en) * 2016-09-06 2017-01-11 四川大学 Visible light and infrared image fusion algorithm based on UDCT (Uniform Discrete Curvelet Transform) and PCNN (Pulse Coupled Neural Network)
CN106981059A (en) * 2017-03-30 2017-07-25 中国矿业大学 With reference to PCNN and the two-dimensional empirical mode decomposition image interfusion method of compressed sensing
CN107169944A (en) * 2017-04-21 2017-09-15 北京理工大学 A kind of infrared and visible light image fusion method based on multiscale contrast
CN107545554A (en) * 2017-04-25 2018-01-05 西北工业大学 A kind of two-stage remote sensing image fusion method based on Steerable filter
CN107622479A (en) * 2017-09-04 2018-01-23 南京理工大学 A kind of profile marble band adaptive detailing method for implanting of the panchromatic sharpening of multispectral image
CN107622479B (en) * 2017-09-04 2020-11-27 南京理工大学 Contour wave sub-band self-adaptive detail injection method for multi-spectral-band image panchromatic sharpening
CN108765359A (en) * 2018-05-31 2018-11-06 安徽大学 A kind of fusion method of target in hyperspectral remotely sensed image and full-colour image based on JSKF models and NSCT technologies
CN108765359B (en) * 2018-05-31 2021-08-24 安徽大学 Fusion method of hyperspectral remote sensing image and full-color image based on JSK model and NSCT technology
CN109272010A (en) * 2018-07-27 2019-01-25 吉林大学 Multi-scale Remote Sensing Image fusion method based on convolutional neural networks
CN109272010B (en) * 2018-07-27 2021-06-29 吉林大学 Multi-scale remote sensing image fusion method based on convolutional neural network
CN109215008A (en) * 2018-08-02 2019-01-15 上海海洋大学 A kind of multispectral and panchromatic image fusion method of entirety two generations Bandelet transformation
CN109242812A (en) * 2018-09-11 2019-01-18 中国科学院长春光学精密机械与物理研究所 Image interfusion method and device based on conspicuousness detection and singular value decomposition
CN109598701A (en) * 2018-10-29 2019-04-09 同济大学 A kind of non-supervisory change detecting method of multi-spectrum remote sensing image based on Information expansion
CN109598701B (en) * 2018-10-29 2021-03-26 同济大学 Multispectral remote sensing image unsupervised change detection method based on information expansion
CN109410164A (en) * 2018-11-14 2019-03-01 西北工业大学 The satellite PAN and multi-spectral image interfusion method of multiple dimensioned convolutional neural networks
CN109658367A (en) * 2018-11-14 2019-04-19 国网新疆电力有限公司信息通信公司 Image interfusion method based on Color transfer
CN109658367B (en) * 2018-11-14 2022-11-25 国网新疆电力有限公司信息通信公司 Image fusion method based on color transfer
CN109767412A (en) * 2018-12-28 2019-05-17 珠海大横琴科技发展有限公司 A kind of remote sensing image fusing method and system based on depth residual error neural network

Also Published As

Publication number Publication date
CN104616261B (en) 2017-10-27

Similar Documents

Publication Publication Date Title
CN104616261A (en) Method for fusing Shearlet domain multi-spectral and full-color images based on spectral characteristics
CN110119780B (en) Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network
CN101504766B (en) Image amalgamation method based on mixed multi-resolution decomposition
CN109272010B (en) Multi-scale remote sensing image fusion method based on convolutional neural network
Gharbia et al. Multi-spectral and panchromatic image fusion approach using stationary wavelet transform and swarm flower pollination optimization for remote sensing applications.
CN105096280B (en) Handle the method and device of picture noise
Saeedi et al. A new pan-sharpening method using multiobjective particle swarm optimization and the shiftable contourlet transform
CN109767412A (en) A kind of remote sensing image fusing method and system based on depth residual error neural network
CN102800070B (en) Multi-modality image fusion method based on region and human eye contrast sensitivity characteristic
WO2013172471A1 (en) Image processing device, image processing method, program, print medium, and recording medium
CN106327459A (en) Visible light and infrared image fusion algorithm based on UDCT (Uniform Discrete Curvelet Transform) and PCNN (Pulse Coupled Neural Network)
CN105139371A (en) Multi-focus image fusion method based on transformation between PCNN and LP
Kanmani et al. An optimal weighted averaging fusion strategy for remotely sensed images
Ablin et al. An investigation in satellite images based on image enhancement techniques
Yang et al. Pansharpening based on low-rank fuzzy fusion and detail supplement
CN106101490A (en) Video based on time and space significance is across dimension self-adaption Enhancement Method and device
Wu et al. Pansharpening using unsupervised generative adversarial networks with recursive mixed-scale feature fusion
Yang et al. An overview on pixel-level image fusion in remote sensing
Lei et al. Convolution neural network with edge structure loss for spatiotemporal remote sensing image fusion
Pande-Chhetri et al. Filtering high-resolution hyperspectral imagery in a maximum noise fraction transform domain using wavelet-based de-striping
CN116310452B (en) Multi-view clustering method and system
Salem et al. Image fusion models and techniques at pixel level
Huang Wavelet for image fusion
Tanaka et al. Adaptive directional wavelet transform based on directional prefiltering
CN107622479B (en) Contour wave sub-band self-adaptive detail injection method for multi-spectral-band image panchromatic sharpening

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee (granted publication date: 20171027)