CN111507454B - Improved cross cortical neural network model for remote sensing image fusion - Google Patents

Improved cross cortical neural network model for remote sensing image fusion

Info

Publication number
CN111507454B
CN111507454B CN201910090285.5A
Authority
CN
China
Prior art keywords
image
neural network
network model
remote sensing
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910090285.5A
Other languages
Chinese (zh)
Other versions
CN111507454A (en)
Inventor
李小军
禄小敏
杨树文
闫浩文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lanzhou Jiaotong University
Original Assignee
Lanzhou Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lanzhou Jiaotong University filed Critical Lanzhou Jiaotong University
Priority to CN201910090285.5A priority Critical patent/CN111507454B/en
Publication of CN111507454A publication Critical patent/CN111507454A/en
Application granted granted Critical
Publication of CN111507454B publication Critical patent/CN111507454B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

Optical sensor constraints mean that the high spectral resolution of a multispectral or hyperspectral image is inevitably obtained at the cost of spatial resolution. The invention provides an improved cross cortical neural network model that fuses high-spatial-resolution detail information into multispectral and hyperspectral remote sensing images, yielding a fused image with both high spatial resolution and high spectral resolution. Comparative experiments show that the method outperforms classic remote sensing image fusion methods, with smaller spectral distortion and detail distortion.

Description

Improved cross cortical neural network model for remote sensing image fusion
Technical Field
The invention relates to the technical field of remote sensing image processing, and in particular to a method for fusing multispectral and hyperspectral remote sensing images.
Background
Multispectral and hyperspectral remote sensing images are important data sources for remote sensing image classification and interpretation. However, because of limits on sensor signal-to-noise ratio and the communication downlink, optical remote sensing sensors are designed with a trade-off that makes the interpretation and monitoring of complex targets with rich spectral information difficult, greatly limiting the practical application of hyperspectral images. Remote sensing image fusion technology is therefore needed to fuse high-spatial-resolution images with hyperspectral images, so that the fusion result has both high spatial resolution and high spectral resolution.
The invention provides an improved cross cortical neural network model, which is applied to the fusion of multi-hyperspectral remote sensing images.
Disclosure of Invention
To remedy the defects of the prior art, the invention aims to provide an improved cross cortical neural network model that solves the fusion problem for multispectral and hyperspectral remote sensing images, so that the fused image has both high spatial resolution and high spectral resolution, spatial detail features are better preserved, and spectral distortion introduced during fusion is greatly reduced.
To achieve the above object, the present invention provides an improved cross cortical neural network model whose neurons are expressed mathematically as:
F_ij[n] = S_ij · (1 + α Σ_kl W_ijkl · Y_kl[n-1]) + D_ij

E_ij[n] = g · E_ij[n-1] + h · Y_ij[n-1]

Y_ij[n] = 1 if F_ij[n] > E_ij[n], otherwise 0
where ij denotes the current neuron, kl the neighborhood neurons, n the current iteration number, W and α are the neighborhood connection strength matrix and the connection coefficient respectively, S is the multispectral/hyperspectral image, D is the high-spatial-resolution detail image, g and h are the attenuation coefficient and the normalization constant respectively, E is the activity threshold, Y is the output pulse, and F is the output fusion result. Once F_ij exceeds the activity threshold E_ij, neuron ij is excited at the nth iteration and generates an output pulse Y_ij; the final fusion result F is obtained when all neurons in the network have been excited.
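For concreteness, one iteration of the model can be sketched in NumPy as follows. Note that the feeding equation used here, F = S·(1 + α·ΣWY) + D, is an assumed reading reconstructed from the variable definitions above, since the original publication renders the first and third equations only as images; the threshold and pulse equations follow the text directly, and the neighborhood sum is computed by 2-D convolution of the pulse map with W.

```python
import numpy as np
from scipy.signal import convolve2d

def icm_step(E, Y, S, D, W, alpha, g, h):
    """One iteration of the improved intersecting (cross) cortical model.

    E, Y : activity threshold and pulse map from the previous iteration
    S, D : normalized spectral image and high-resolution detail image
    W    : neighborhood connection strength matrix (center weight 0)
    alpha, g, h : connection coefficient, decay coefficient, constant

    The feeding equation below is an assumed form (the patent shows it
    only as an image); the threshold and pulse rules follow the text.
    """
    E_new = g * E + h * Y                                  # E[n] = g*E[n-1] + h*Y[n-1]
    link = convolve2d(Y, W, mode="same", boundary="fill")  # sum_kl W_ijkl * Y_kl[n-1]
    F_new = S * (1.0 + alpha * link) + D                   # assumed feeding equation
    Y_new = (F_new > E_new).astype(float)                  # fire when F exceeds threshold
    return F_new, E_new, Y_new

# Parameters given in the embodiment: W, g = 0.65, h = 20
W = np.array([[0.5, 1, 0.5], [1, 0, 1], [0.5, 1, 0.5]])
```

With the initial values Y[0] = F[0] = 0 and E[0] = 1, the step is repeated (incrementing n) until every neuron has fired.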
To adapt the model to remote sensing image fusion, each pixel of the remote sensing image corresponds one-to-one to a neuron in the network. Before the model processes the multispectral/hyperspectral and high-spatial-resolution images, the input images are normalized so that their pixel values lie in [0, 1], and a histogram matching operation is applied to the normalized images, giving the normalized hyperspectral image S and high-spatial-resolution image H. Gaussian smoothing filtering is then applied to H to obtain the smoothed image HL, where the distribution parameter σ of the Gaussian filter is:
σ = M / ( 2R · sqrt(−2 ln G) )
where M is the filter length, R is the spatial scale ratio between the hyperspectral image and the high-spatial-resolution image, and G is the modulation transfer function of the hyperspectral image sensor. The detail image is then D = H − HL. Once the normalized hyperspectral image S and the detail image D have been obtained, they are used as the inputs of the improved cross cortical neural network model for iterative computation.
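The detail-extraction step can be sketched as follows. The expression for σ is a reconstruction (the original gives the equation only as an image) using the form common in MTF-matched pansharpening toolboxes, so it should be treated as an assumption rather than the patent's exact formula; the default M, R, and G values are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detail_image(H, M=41, R=4.0, G=0.3):
    """Extract the high-spatial-resolution detail image D = H - HL.

    M : Gaussian filter length, R : scale ratio between the images,
    G : sensor MTF value.  sigma = M / (2*R*sqrt(-2*ln G)) is an
    assumed reconstruction of the patent's image-only equation.
    """
    sigma = M / (2.0 * R * np.sqrt(-2.0 * np.log(G)))
    radius = (M - 1) // 2
    # Smooth H with a Gaussian truncated to the filter length M
    HL = gaussian_filter(H, sigma=sigma, truncate=radius / sigma)
    return H - HL, HL
```

A constant image has no high-frequency content, so its detail image is zero everywhere, which gives a quick sanity check of the filter normalization.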
The variables of the neural network are initialized as Y[0] = F[0] = 0, E[0] = 1, n = 1; α is calculated as:
α = Con(S, D) / ( Std(S) · Std(D) )
where Std and Con represent standard deviation and covariance calculations, respectively.
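Given that Std and Con denote standard deviation and covariance, a natural reading of the (image-only) equation for α is the correlation coefficient between S and D; the sketch below implements that assumed form, which may differ from the patent's exact expression.

```python
import numpy as np

def connection_coefficient(S, D):
    """Assumed form of the adaptive connection coefficient:
    alpha = Con(S, D) / (Std(S) * Std(D)), i.e. the correlation
    coefficient between the spectral band S and the detail image D."""
    cov = np.mean((S - S.mean()) * (D - D.mean()))  # Con(S, D)
    return cov / (S.std() * D.std())                # / (Std(S) * Std(D))
```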
At each iteration the network increments the current iteration count n by one, until all neurons have been excited, yielding the output F. Inverse normalization is then applied to F, i.e. the value range of the pixels in F is expanded back, giving the fusion result for one hyperspectral channel. With K denoting the total number of hyperspectral channels, this fusion procedure is performed on each of the K channels to obtain the final fusion result of the K-channel hyperspectral image and the high-spatial-resolution image.
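The [0, 1] normalization and the per-channel inverse normalization described above amount to simple range bookkeeping, which can be sketched as:

```python
import numpy as np

def normalize(band):
    """Scale a band's pixel values to [0, 1], returning the scaled
    band and its original (lo, hi) range for later inversion."""
    lo, hi = float(band.min()), float(band.max())
    return (band - lo) / (hi - lo), (lo, hi)

def denormalize(F, rng):
    """Inverse normalization: expand the fused channel F from [0, 1]
    back to the original band's value range."""
    lo, hi = rng
    return lo + F * (hi - lo)
```

Each of the K hyperspectral channels is normalized, fused, and then mapped back with its own stored range.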
The beneficial effects of the invention are: 1. whereas the traditional cross cortical neural network model allows only one external excitation input, the improved model has two external excitation inputs, S and D, which makes it easier to apply the cross cortical neural network principle to image fusion; 2. because detail injection is built in, the model can be applied to remote sensing image fusion at different scales; 3. the model better preserves the detail features of the high-spatial-resolution image and greatly reduces the spectral distortion of the fusion result.
Drawings
Fig. 1 is a flow chart of a remote sensing image fusion method of the present invention.
FIG. 2 is a diagram of a model architecture of the improved cross cortical neural network of the present invention.
FIG. 3 shows an input image and a fusion result according to an embodiment of the present invention.
Detailed Description
To make the technical means, objects, and effects of the present invention easy to understand, the invention is further described below.
The flow of the remote sensing image fusion method of the invention is shown in Fig. 1. First, the input high-spatial-resolution image and multispectral/hyperspectral image are normalized to the interval [0, 1]. Second, details are extracted from the normalized high-spatial-resolution image, and the detail image together with the normalized hyperspectral image is fed into the model of the invention as input. The structure of the improved cross cortical neural network model is shown in Fig. 2; the network parameters are set to the neighborhood connection strength matrix W = [0.5, 1, 0.5; 1, 0, 1; 0.5, 1, 0.5], attenuation coefficient g = 0.65, and normalization constant h = 20.
Neurons ij correspond one-to-one to image pixels, and the final fusion result F is obtained when all neurons in the network have been excited. These operations are performed separately on the K hyperspectral channels to obtain the final fusion result of K independent channels.
The input high-spatial-resolution grayscale image, the multispectral image, and the fusion result are shown in Fig. 3: Fig. 3(a) is the input high-spatial-resolution panchromatic grayscale image and Fig. 3(b) the input multispectral image, both acquired by the QuickBird high-resolution sensor at spatial resolutions of 0.7 m and 2.8 m respectively, and Fig. 3(c) is the fusion result. As Fig. 3 shows, the remote sensing fusion method achieves high spatial and high spectral resolution simultaneously, with details and spectral characteristics well preserved.
Table 1 shows the evaluation of the method of the present invention against other classic remote sensing image fusion methods, such as the Gram-Schmidt fusion method, the Brovey transform fusion method, the principal component analysis (PCA) fusion method, and the IHS fusion method. The comparison uses the spectral angle mapper (SAM), the relative global error (ERGAS), and the Q index as evaluation metrics, whose mathematical expressions are:
SAM = arccos( ⟨v1, v2⟩ / (‖v1‖ · ‖v2‖) )

ERGAS = (100 / R) · sqrt( (1/K) Σ_{k=1..K} ( RMSE(k) / μ(k) )² )

Q = 4 σ_xy μ_x μ_y / ( (σ_x² + σ_y²)(μ_x² + μ_y²) )
where ⟨·⟩ denotes the inner product, RMSE the root mean square error, and σ and μ the covariance and mean of the images, respectively. Among these metrics, the spectral angle mapper SAM measures the spectral loss of the remote sensing image: the smaller its value, the better the fusion. The relative global error ERGAS represents the detail distortion between the fusion result and the high-spatial-resolution image: the smaller its value, the better the fusion. The Q index is a comprehensive evaluation of the spectral distortion and spatial detail preservation of the fused image: the larger its value, the better the fusion quality.
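The three metrics can be sketched with their textbook definitions, which match the symbols described above but may differ in small details from the authors' implementations:

```python
import numpy as np

def sam(x, y):
    """Mean spectral angle (radians) between per-pixel spectra of two
    (H, W, K) image cubes."""
    num = np.sum(x * y, axis=-1)
    den = np.linalg.norm(x, axis=-1) * np.linalg.norm(y, axis=-1)
    return float(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))

def ergas(ref, fused, R):
    """Relative dimensionless global error; R is the spatial scale
    ratio between the fused and reference images."""
    rmse = np.sqrt(np.mean((ref - fused) ** 2, axis=(0, 1)))  # per-band RMSE
    mean = np.mean(ref, axis=(0, 1))                          # per-band mean
    return float(100.0 / R * np.sqrt(np.mean((rmse / mean) ** 2)))

def q_index(x, y):
    """Wang-Bovik universal image quality index Q for two bands."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = np.mean((x - mx) * (y - my))
    return float(4 * cxy * mx * my / ((vx + vy) * (mx**2 + my**2)))
```

For identical inputs SAM and ERGAS are zero and Q is one, which is a convenient sanity check of the sign and normalization conventions.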
The evaluation results in Table 1 show that the Q index of the method of the present invention is higher than that of the other classic methods (Gram-Schmidt, Brovey transform, PCA, and IHS fusion), while its spectral distortion index SAM and detail distortion index ERGAS are both smaller than those of the classic algorithms, demonstrating that the method of the present invention is substantially superior to the classic methods in limiting spectral distortion and preserving spatial detail.
Table 1 (quantitative evaluation of each fusion method under the SAM, ERGAS, and Q indexes) is reproduced as an image in the original publication.

Claims (2)

1. A method for constructing an improved cross cortical neural network model for remote sensing image fusion, characterized in that the improved cross cortical neural network model is specifically:
F_ij[n] = S_ij · (1 + α Σ_kl W_ijkl · Y_kl[n-1]) + D_ij

E_ij[n] = g · E_ij[n-1] + h · Y_ij[n-1]

Y_ij[n] = 1 if F_ij[n] > E_ij[n], otherwise 0
wherein ij denotes the current neuron, kl the neighborhood neurons, and n the current iteration number; W and α are the neighborhood connection strength matrix and the connection coefficient, respectively; S is the multispectral/hyperspectral image; D is the high-spatial-resolution detail image; g and h are the attenuation coefficient and the normalization constant, respectively; E is the activity threshold; Y is the output pulse; and F is the output fusion result.
2. An application method of an improved cross cortical neural network model for remote sensing image fusion is characterized by comprising the following steps:
step 1: normalizing input hyperspectral and high spatial resolution images to [0, 1%]And performing a histogram matching operation to obtain S k Wherein K is 1, …, and K is a spectrum channel number;
step 2: performing Gaussian low-pass filtering meeting a modulation transfer function on the high-spatial-resolution image to obtain a detail image D;
and step 3: performing the improved cross-cortical neural network model of claim 1 for each k-channel until all neurons are fired, obtaining an output F k
And 4, step 4: to output F k The pixel values are subjected to inverse normalization to obtain the final fusion result.
CN201910090285.5A 2019-01-30 2019-01-30 Improved cross cortical neural network model for remote sensing image fusion Active CN111507454B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910090285.5A CN111507454B (en) 2019-01-30 2019-01-30 Improved cross cortical neural network model for remote sensing image fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910090285.5A CN111507454B (en) 2019-01-30 2019-01-30 Improved cross cortical neural network model for remote sensing image fusion

Publications (2)

Publication Number Publication Date
CN111507454A CN111507454A (en) 2020-08-07
CN111507454B true CN111507454B (en) 2022-09-06

Family

ID=71863783

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910090285.5A Active CN111507454B (en) 2019-01-30 2019-01-30 Improved cross cortical neural network model for remote sensing image fusion

Country Status (1)

Country Link
CN (1) CN111507454B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1489111A (en) * 2003-08-21 2004-04-14 上海交通大学 Remote-sensing image mixing method based on local statistical property and colour space transformation
CN101577003A (en) * 2009-06-05 2009-11-11 北京航空航天大学 Image segmenting method based on improvement of intersecting visual cortical model
JP2011090309A (en) * 2009-10-23 2011-05-06 Ana-Aeroportos De Portugal Sa Method to generate airport obstruction chart based on data fusion between interferometric data using synthetic aperture radar positioned in spaceborne platform and other types of data acquired by remote sensor
CN102651132A (en) * 2012-04-06 2012-08-29 华中科技大学 Medical image registration method based on intersecting cortical model
CN103049898A (en) * 2013-01-27 2013-04-17 西安电子科技大学 Method for fusing multispectral and full-color images with light cloud
CN103177431A (en) * 2012-12-26 2013-06-26 中国科学院遥感与数字地球研究所 Method of spatial-temporal fusion for multi-source remote sensing data
CN103295201A (en) * 2013-05-31 2013-09-11 中国人民武装警察部队工程大学 Multi-sensor image fusion method on basis of IICM (improved intersecting cortical model) in NSST (nonsubsampled shearlet transform) domain
CN103700075A (en) * 2013-12-25 2014-04-02 浙江师范大学 Tetrolet transform-based multichannel satellite cloud picture fusing method
WO2014183259A1 (en) * 2013-05-14 2014-11-20 中国科学院自动化研究所 Full-color and multi-spectral remote sensing image fusion method
CN105160647A (en) * 2015-10-28 2015-12-16 中国地质大学(武汉) Panchromatic multi-spectral image fusion method
CN105913075A (en) * 2016-04-05 2016-08-31 浙江工业大学 Endoscopic image focus identification method based on pulse coupling nerve network
CN107341501A (en) * 2017-05-31 2017-11-10 三峡大学 A kind of image interfusion method and device based on PCNN and classification focusing technology

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1489111A (en) * 2003-08-21 2004-04-14 上海交通大学 Remote-sensing image mixing method based on local statistical property and colour space transformation
CN101577003A (en) * 2009-06-05 2009-11-11 北京航空航天大学 Image segmenting method based on improvement of intersecting visual cortical model
JP2011090309A (en) * 2009-10-23 2011-05-06 Ana-Aeroportos De Portugal Sa Method to generate airport obstruction chart based on data fusion between interferometric data using synthetic aperture radar positioned in spaceborne platform and other types of data acquired by remote sensor
CN102651132A (en) * 2012-04-06 2012-08-29 华中科技大学 Medical image registration method based on intersecting cortical model
CN103177431A (en) * 2012-12-26 2013-06-26 中国科学院遥感与数字地球研究所 Method of spatial-temporal fusion for multi-source remote sensing data
CN103049898A (en) * 2013-01-27 2013-04-17 西安电子科技大学 Method for fusing multispectral and full-color images with light cloud
WO2014183259A1 (en) * 2013-05-14 2014-11-20 中国科学院自动化研究所 Full-color and multi-spectral remote sensing image fusion method
CN103295201A (en) * 2013-05-31 2013-09-11 中国人民武装警察部队工程大学 Multi-sensor image fusion method on basis of IICM (improved intersecting cortical model) in NSST (nonsubsampled shearlet transform) domain
CN103700075A (en) * 2013-12-25 2014-04-02 浙江师范大学 Tetrolet transform-based multichannel satellite cloud picture fusing method
CN105160647A (en) * 2015-10-28 2015-12-16 中国地质大学(武汉) Panchromatic multi-spectral image fusion method
CN105913075A (en) * 2016-04-05 2016-08-31 浙江工业大学 Endoscopic image focus identification method based on pulse coupling nerve network
CN107341501A (en) * 2017-05-31 2017-11-10 三峡大学 A kind of image interfusion method and device based on PCNN and classification focusing technology

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Hong Li et al. Fusion of Multispectral and Panchromatic Images via Local Geometrical Similarity. Technical Gazette, 2018, 25(2). *
Ulf Ekblad et al. Theoretical foundation of the intersecting cortical model and its use for change detection of aircraft, cars, and nuclear explosion tests. Signal Processing, 2004. *
Xin Jin et al. Remote sensing image fusion method in CIELab color space using nonsubsampled shearlet transform and pulse coupled neural networks. Journal of Applied Remote Sensing, 2016. *
Dai Wenzhan et al. Medical image fusion method based on an improved intersecting visual cortex model. Application Research of Computers, 2015, 33(9). *
Wang Mi et al. Panchromatic and multispectral image fusion method combining adaptive Gaussian filtering and the SFIM model. Acta Geodaetica et Cartographica Sinica, 2018, 47(1). *

Also Published As

Publication number Publication date
CN111507454A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
Wang et al. Dnu: Deep non-local unrolling for computational spectral imaging
Wang et al. Hyperspectral image reconstruction using a deep spatial-spectral prior
CN109741256B (en) Image super-resolution reconstruction method based on sparse representation and deep learning
CN110415199B (en) Multispectral remote sensing image fusion method and device based on residual learning
CN112287978A (en) Hyperspectral remote sensing image classification method based on self-attention context network
CN114119444B (en) Multi-source remote sensing image fusion method based on deep neural network
CN104851077B (en) A kind of panchromatic sharpening method of adaptive remote sensing images
CN111080567A (en) Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network
CN108288256B (en) Multispectral mosaic image restoration method
CN107491793B (en) Polarized SAR image classification method based on sparse scattering complete convolution
Tao et al. Hyperspectral image recovery based on fusion of coded aperture snapshot spectral imaging and RGB images by guided filtering
CN111696043A (en) Hyperspectral image super-resolution reconstruction algorithm of three-dimensional FSRCNN
CN110060225B (en) Medical image fusion method based on rapid finite shear wave transformation and sparse representation
Lin et al. Integrating model-and data-driven methods for synchronous adaptive multi-band image fusion
Kwasniewska et al. Super-resolved thermal imagery for high-accuracy facial areas detection and analysis
CN107680081B (en) Hyperspectral image unmixing method based on convolutional neural network
CN115760814A (en) Remote sensing image fusion method and system based on double-coupling deep neural network
CN114998167A (en) Hyperspectral and multispectral image fusion method based on space-spectrum combined low rank
CN111160392A (en) Hyperspectral classification method based on wavelet width learning system
CN114972885A (en) Multi-modal remote sensing image classification method based on model compression
CN115147321A (en) Multi-spectral image fusion method based on interpretable neural network
CN109271874B (en) Hyperspectral image feature extraction method fusing spatial and spectral information
Huang et al. Deep gaussian scale mixture prior for image reconstruction
Xiong et al. Gradient boosting for single image super-resolution
CN111507454B (en) Improved cross cortical neural network model for remote sensing image fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant