CN111507454A - Improved cross cortical neural network model for remote sensing image fusion - Google Patents

Improved cross cortical neural network model for remote sensing image fusion

Info

Publication number
CN111507454A
Authority
CN
China
Prior art keywords
neural network
remote sensing
network model
image
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910090285.5A
Other languages
Chinese (zh)
Other versions
CN111507454B (en)
Inventor
李小军
禄小敏
杨树文
闫浩文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lanzhou Jiaotong University
Original Assignee
Lanzhou Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lanzhou Jiaotong University filed Critical Lanzhou Jiaotong University
Priority to CN201910090285.5A priority Critical patent/CN111507454B/en
Publication of CN111507454A publication Critical patent/CN111507454A/en
Application granted granted Critical
Publication of CN111507454B publication Critical patent/CN111507454B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques

Abstract

Owing to optical sensor limitations, multispectral and hyperspectral images inevitably sacrifice spatial resolution in exchange for high spectral resolution. The invention provides an improved cross cortical neural network model that fuses high-spatial-resolution detail information into multispectral and hyperspectral remote sensing images, yielding a fused image with both high spatial resolution and high spectral resolution. Comparative experiments show that the method outperforms classical remote sensing image fusion methods, with lower spectral distortion and lower detail distortion.

Description

Improved cross cortical neural network model for remote sensing image fusion
Technical Field
The invention relates to the technical field of remote sensing image processing, and in particular to a fusion method for multispectral and hyperspectral remote sensing images.
Background
Multispectral and hyperspectral remote sensing images are important data sources for remote sensing image classification and interpretation. However, because of limits imposed by the sensor signal-to-noise ratio and the communication downlink, the ability to interpret and monitor complex targets with rich spectral information is constrained from the very design stage of an optical remote sensing sensor, which greatly limits the practical application of hyperspectral images. Remote sensing image fusion is therefore needed to combine a high-spatial-resolution image with a hyperspectral image so that the fusion result has both high spatial resolution and high spectral resolution.
The invention provides an improved cross cortical neural network model and applies it to the fusion of multispectral and hyperspectral remote sensing images.
Disclosure of Invention
To remedy the deficiencies of the prior art, the invention aims to provide an improved cross cortical neural network model that solves the fusion problem of multispectral and hyperspectral remote sensing images, ensures that the fused image has both high spatial resolution and high spectral resolution, preserves spatial detail features well, and greatly reduces spectral distortion during fusion.
In order to achieve the above object, the present invention provides an improved cross cortical neural network model, the neuron mathematical expression of which is:
(The three neuron update equations of the improved cross cortical neural network model are rendered as images in the original publication.)
where ij denotes the current neuron, kl denotes its neighboring neurons, n is the current iteration number, W and α are the neighborhood connection strength matrix and the connection coefficient respectively, S is the multispectral/hyperspectral image, D is the high-spatial-resolution detail image, g and h are the attenuation coefficient and the normalization constant respectively, E is the activity threshold, Y is the output pulse, and F denotes the output fusion result. Once F_ij exceeds the activity threshold E_ij in the current iteration, neuron ij is excited and generates an output pulse Y_ij at iteration n; the final fusion result F is obtained when all neurons in the network have fired.
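Because the update equations above are reproduced only as images in the original publication, the following Python/NumPy sketch is an assumption rather than the patented formulation: it takes the standard intersecting cortical model update and adds the two external stimuli S and D together with the quantities named in the preceding paragraph (W, α, g, h, E, Y, F). The exact functional forms may differ from the invention as filed.

```python
import numpy as np
from scipy.ndimage import convolve

def icm_step(F, Y, E, S, D, W, alpha, g, h):
    """One iteration of an improved cross cortical (ICM-style) neuron update.

    Assumed form: the internal activity F is driven by the spectral stimulus S,
    the detail stimulus D, and the weighted pulses of neighboring neurons; a
    neuron fires (Y = 1) when F exceeds its threshold E, and the threshold then
    decays with g and is raised by h after firing.
    """
    link = alpha * convolve(Y, W, mode="constant")  # neighborhood term W{Y}
    F_new = S + D + link                            # two external stimuli S and D
    Y_new = (F_new > E).astype(F.dtype)             # pulse output
    E_new = g * E + h * Y_new                       # dynamic activity threshold
    return F_new, Y_new, E_new
```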
To adapt the model to the remote sensing image fusion algorithm, each pixel of the remote sensing image corresponds one-to-one to a neuron in the network. Before the multispectral/hyperspectral and high-spatial-resolution images are processed by the network, the input images must be standardized so that their pixel values lie in [0, 1], and histogram matching is then performed on the standardized images, giving the standardized hyperspectral image S and the high-spatial-resolution image H. Gaussian smoothing filtering is applied to H to obtain the smoothed image HL, where the distribution parameter σ of the Gaussian filter is:
(The formula for the Gaussian distribution parameter σ is rendered as an image in the original publication.)
where M is the filter length, R is the spatial scale ratio between the hyperspectral image and the high-spatial-resolution image, and G is the modulation transfer function of the hyperspectral image sensor. The detail image is then obtained as D = H - HL. Once the standardized hyperspectral image S and the detail image D have been obtained, they are used as the inputs of the improved cross cortical neural network model for iterative computation.
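A minimal sketch of the preprocessing just described, under the assumption of simple min-max standardization and quantile-based histogram matching; because the σ formula is available only as an image, σ is left as a caller-supplied parameter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize01(img):
    """Min-max standardize pixel values to [0, 1]; also return the original range."""
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) / (hi - lo + 1e-12), (lo, hi)

def match_histogram(src, ref):
    """Quantile-based histogram matching of `src` to the histogram of `ref`."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size
    r_cdf = np.cumsum(r_cnt) / ref.size
    return np.interp(s_cdf, r_cdf, r_vals)[s_idx].reshape(src.shape)

def extract_detail(H, sigma):
    """Detail image D = H - HL, where HL is a Gaussian-smoothed copy of H."""
    HL = gaussian_filter(H, sigma=sigma)
    return H - HL
```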
The initial values of the network variables are set as Y[0] = F[0] = 0, E[0] = 1, n = 1; α is calculated as follows:
(The formula for the connection coefficient α is rendered as an image in the original publication.)
where Std and Con represent standard deviation and covariance calculations, respectively.
At each iteration the current iteration counter n is incremented, and the iterations continue until all neurons have fired, yielding the output F. Inverse standardization is then applied to F, i.e. the range of its pixel values is expanded back to the original range, giving the fusion result for one hyperspectral channel. With K denoting the total number of hyperspectral channels, the above fusion is carried out on each of the K channels separately, producing the final fusion result of the K-channel hyperspectral image and the high-spatial-resolution image.
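A sketch of the per-channel iteration described above, reusing icm_step, normalize01, match_histogram and extract_detail from the earlier sketches. Two points are assumptions: a pixel's fused value is taken as its activity F at the iteration in which its neuron first fires, and the connection coefficient α is passed in directly because its Std/Con formula is shown only as an image.

```python
import numpy as np

def fuse_channel(S, D, W, alpha, g=0.65, h=20.0, max_iter=200):
    """Iterate the improved model until every neuron has fired; return one fused band."""
    F = np.zeros_like(S)
    Y = np.zeros_like(S)
    E = np.ones_like(S)                      # Y[0] = F[0] = 0, E[0] = 1
    fused = np.zeros_like(S)
    fired = np.zeros(S.shape, dtype=bool)
    for _ in range(max_iter):                # one network iteration (n := n + 1)
        F, Y, E = icm_step(F, Y, E, S, D, W, alpha, g, h)
        newly = (Y > 0) & ~fired
        fused[newly] = F[newly]              # record activity at first firing (assumed rule)
        fired |= newly
        if fired.all():
            break
    return fused

def fuse_all_channels(hs_cube, pan, W, alpha, sigma, g=0.65, h=20.0):
    """Fuse each of the K spectral channels, then undo the [0, 1] standardization.

    Assumes `pan` is already resampled/co-registered to the hyperspectral grid.
    """
    out = np.empty_like(hs_cube, dtype=float)
    pan01, _ = normalize01(pan)
    for k in range(hs_cube.shape[-1]):
        S, (lo, hi) = normalize01(hs_cube[..., k])
        Hk = match_histogram(pan01, S)       # histogram matching to channel k
        D = extract_detail(Hk, sigma)        # D = H - HL
        Fk = fuse_channel(S, D, W, alpha, g, h)
        out[..., k] = Fk * (hi - lo) + lo    # inverse standardization
    return out
```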
The beneficial effects of the invention are: 1. the traditional cross cortical neural network model allows only one external stimulus input, whereas the improved model has two external stimulus inputs, S and D, which makes it more convenient to apply the cross cortical neural network principle to image fusion; 2. because the model incorporates a detail injection operation, it can be applied to remote sensing image fusion at different scales; 3. the model preserves the detail features of the high-spatial-resolution image well and greatly reduces the spectral distortion of the fusion result.
Drawings
Fig. 1 is a flow chart of a remote sensing image fusion method of the present invention.
FIG. 2 is a diagram of a model architecture of the improved cross cortical neural network of the present invention.
FIG. 3 shows an input image and a fusion result according to an embodiment of the present invention.
Detailed Description
To make the technical means, objectives and effects of the present invention easy to understand, the invention is further described below.
The flow chart of the remote sensing image fusion method of the invention is shown in Fig. 1. The overall flow is as follows: first, the input high-spatial-resolution image and multispectral/hyperspectral image are standardized to the interval [0, 1]; second, details are extracted from the standardized high-spatial-resolution image, and the detail image together with the standardized hyperspectral image is fed into the model of the invention as input. The structure of the improved cross cortical neural network model is shown in Fig. 2, and the network parameters are set to the neighborhood connection strength matrix W = [0.5, 1, 0.5; 1, 0, 1; 0.5, 1, 0.5], attenuation coefficient g = 0.65, and normalization constant h = 20.
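Written in the notation of the sketches above, these embodiment parameter settings are:

```python
import numpy as np

# Network parameters as listed in the embodiment (Fig. 2 configuration).
W = np.array([[0.5, 1.0, 0.5],
              [1.0, 0.0, 1.0],
              [0.5, 1.0, 0.5]])   # neighborhood connection strength matrix
g = 0.65                          # attenuation coefficient of the dynamic threshold
h = 20.0                          # normalization constant added on firing
```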
Neurons ij correspond one-to-one to image pixels, and the final fusion result F is obtained when all neurons in the network have fired. The above operations are performed on each of the K hyperspectral channels separately, giving the final fusion result as K independent channels.
The input high-spatial-resolution grayscale image, the multispectral/hyperspectral image and the fusion result are shown in Fig. 3: Fig. 3(a) is the input high-spatial-resolution panchromatic grayscale image and Fig. 3(b) is the input multispectral/hyperspectral image; the input images were acquired by the QuickBird high-resolution sensor, with spatial resolutions of 0.7 m and 2.8 m respectively; Fig. 3(c) is the fusion result. As can be seen from Fig. 3, the remote sensing fusion method achieves high spatial and high spectral resolution simultaneously, and both details and spectral features are well preserved.
Table 1 shows the evaluation and comparison of the method of the present invention against classical remote sensing image fusion methods, including the Gram-Schmidt fusion method, the Brovey transform fusion method, the principal component analysis (PCA) fusion method and the IHS fusion method. The comparison uses the spectral angle mapper (SAM), the relative global error (ERGAS) and the Q index as evaluation metrics, whose mathematical expressions are as follows:
(The formulas for SAM, ERGAS and the Q index are rendered as images in the original publication.)
where <·,·> denotes the inner product operation, RMSE denotes the root mean square error, and σ and μ denote the covariance and the mean of the images, respectively. Among these metrics, the spectral angle mapper SAM measures the spectral distortion of the remote sensing image, and a smaller value indicates a better fusion effect; the relative global error ERGAS measures the degree of detail distortion between the fusion result and the high-spatial-resolution image, and a smaller value indicates a better fusion effect; the Q index is a comprehensive combination of the spectral distortion and the spatial detail preservation of the fused image, and a larger value indicates better fusion quality.
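Since the metric formulas appear only as images in the original, the sketch below uses the standard definitions of SAM, ERGAS and the universal image quality index Q, which is what the surrounding description appears to refer to; it is an assumption that these match the patent's exact expressions.

```python
import numpy as np

def sam(fused, reference):
    """Mean spectral angle (radians) between per-pixel spectra; smaller is better."""
    num = np.sum(fused * reference, axis=-1)
    den = np.linalg.norm(fused, axis=-1) * np.linalg.norm(reference, axis=-1) + 1e-12
    return float(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))

def ergas(fused, reference, ratio):
    """Relative dimensionless global error; `ratio` is the high/low resolution ratio."""
    terms = []
    for k in range(reference.shape[-1]):
        rmse = np.sqrt(np.mean((fused[..., k] - reference[..., k]) ** 2))
        terms.append((rmse / (reference[..., k].mean() + 1e-12)) ** 2)
    return 100.0 * ratio * float(np.sqrt(np.mean(terms)))

def q_index(x, y):
    """Universal image quality index Q for two single-band images; larger is better."""
    x, y = x.ravel().astype(float), y.ravel().astype(float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 4 * cov * x.mean() * y.mean() / (
        (x.var() + y.var()) * (x.mean() ** 2 + y.mean() ** 2) + 1e-12)
```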
The evaluation results in Table 1 show that the Q index of the method of the present invention is higher than that of the classical Gram-Schmidt, Brovey transform, principal component analysis (PCA) and IHS fusion methods, while its spectral distortion index SAM and detail distortion index ERGAS are both smaller than those of the classical algorithms. This shows that the method of the present invention is substantially superior to the classical methods in terms of spectral fidelity and spatial detail preservation.
(Table 1, which lists the SAM, ERGAS and Q-index scores of each method, is rendered as an image in the original publication.)

Claims (3)

1. An improved cross cortical neural network model for remote sensing image fusion is characterized by comprising the improved cross cortical neural network model and application of the model in remote sensing image fusion.
2. The improved cross cortical neural network model for remote sensing image fusion as claimed in claim 1, wherein the improved cross cortical neural network model is specifically:
(The neuron update equations of the improved cross cortical neural network model are rendered as images in the original publication.)
where ij denotes the current neuron, kl denotes its neighboring neurons, n is the current iteration number, W and α are the neighborhood connection strength matrix and the connection coefficient respectively, S is the multispectral/hyperspectral image, D is the high-spatial-resolution detail image, g and h are the attenuation coefficient and the normalization constant respectively, E is the activity threshold, Y is the output pulse, and F denotes the output fusion result.
3. The improved cross cortical neural network model for remote sensing image fusion as claimed in claims 1 and 2, wherein the model is applied to remote sensing image fusion, the concrete steps being:
Step 1: normalize the input hyperspectral and high-spatial-resolution images to [0, 1] and perform histogram matching to obtain S_k, where k = 1, ..., K is the spectral channel index;
Step 2: perform Gaussian low-pass filtering satisfying the modulation transfer function on the high-spatial-resolution image to obtain the detail image D;
Step 3: for each channel k, execute the improved cross cortical neural network model of claim 2 until all neurons have fired, obtaining the output F_k;
Step 4: perform inverse normalization on the pixel values of the output F_k to obtain the final fusion result.
CN201910090285.5A 2019-01-30 2019-01-30 Improved cross cortical neural network model for remote sensing image fusion Active CN111507454B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910090285.5A CN111507454B (en) 2019-01-30 2019-01-30 Improved cross cortical neural network model for remote sensing image fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910090285.5A CN111507454B (en) 2019-01-30 2019-01-30 Improved cross cortical neural network model for remote sensing image fusion

Publications (2)

Publication Number Publication Date
CN111507454A true CN111507454A (en) 2020-08-07
CN111507454B CN111507454B (en) 2022-09-06

Family

ID=71863783

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910090285.5A Active CN111507454B (en) 2019-01-30 2019-01-30 Improved cross cortical neural network model for remote sensing image fusion

Country Status (1)

Country Link
CN (1) CN111507454B (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1489111A (en) * 2003-08-21 2004-04-14 上海交通大学 Remote-sensing image mixing method based on local statistical property and colour space transformation
CN101577003A (en) * 2009-06-05 2009-11-11 北京航空航天大学 Image segmenting method based on improvement of intersecting visual cortical model
JP2011090309A (en) * 2009-10-23 2011-05-06 Ana-Aeroportos De Portugal Sa Method to generate airport obstruction chart based on data fusion between interferometric data using synthetic aperture radar positioned in spaceborne platform and other types of data acquired by remote sensor
CN102651132A (en) * 2012-04-06 2012-08-29 华中科技大学 Medical image registration method based on intersecting cortical model
CN103177431A (en) * 2012-12-26 2013-06-26 中国科学院遥感与数字地球研究所 Method of spatial-temporal fusion for multi-source remote sensing data
CN103049898A (en) * 2013-01-27 2013-04-17 西安电子科技大学 Method for fusing multispectral and full-color images with light cloud
WO2014183259A1 (en) * 2013-05-14 2014-11-20 中国科学院自动化研究所 Full-color and multi-spectral remote sensing image fusion method
CN103295201A (en) * 2013-05-31 2013-09-11 中国人民武装警察部队工程大学 Multi-sensor image fusion method on basis of IICM (improved intersecting cortical model) in NSST (nonsubsampled shearlet transform) domain
CN103700075A (en) * 2013-12-25 2014-04-02 浙江师范大学 Tetrolet transform-based multichannel satellite cloud picture fusing method
CN105160647A (en) * 2015-10-28 2015-12-16 中国地质大学(武汉) Panchromatic multi-spectral image fusion method
CN105913075A (en) * 2016-04-05 2016-08-31 浙江工业大学 Endoscopic image focus identification method based on pulse coupling nerve network
CN107341501A (en) * 2017-05-31 2017-11-10 三峡大学 A kind of image interfusion method and device based on PCNN and classification focusing technology

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
HONG LI et al.: "Fusion of Multispectral and Panchromatic Images via Local Geometrical Similarity", Technical Gazette, vol. 25, no. 2, 30 April 2018 (2018-04-30), pages 546-552 *
ULF EKBLAD et al.: "Theoretical foundation of the intersecting cortical model and its use for change detection of aircraft, cars, and nuclear explosion tests", Signal Processing, 20 April 2004 (2004-04-20), pages 1131-1146 *
XIN JIN et al.: "Remote sensing image fusion method in CIELab color space using nonsubsampled shearlet transform and pulse coupled neural networks", Journal of Applied Remote Sensing, 16 January 2016 (2016-01-16), article 025023 *
戴文战 et al.: "Medical image fusion method based on an improved cross visual cortical model" (in Chinese), Application Research of Computers, vol. 33, no. 9, 28 October 2015 (2015-10-28), pages 2852-2855 *
王密 et al.: "Panchromatic and multispectral image fusion method combining adaptive Gaussian filtering and the SFIM model" (in Chinese), Acta Geodaetica et Cartographica Sinica, vol. 47, no. 1, 31 January 2018 (2018-01-31), pages 82-90 *

Also Published As

Publication number Publication date
CN111507454B (en) 2022-09-06

Similar Documents

Publication Publication Date Title
Wang et al. Dnu: Deep non-local unrolling for computational spectral imaging
CN108182456B (en) Target detection model based on deep learning and training method thereof
CN110415199B (en) Multispectral remote sensing image fusion method and device based on residual learning
Ye et al. FusionCNN: a remote sensing image fusion algorithm based on deep convolutional neural networks
CN107491793B (en) Polarized SAR image classification method based on sparse scattering complete convolution
CN107316309B (en) Hyperspectral image saliency target detection method based on matrix decomposition
CN111696043A (en) Hyperspectral image super-resolution reconstruction algorithm of three-dimensional FSRCNN
Lin et al. Integrating model-and data-driven methods for synchronous adaptive multi-band image fusion
Gao et al. Improving the performance of infrared and visible image fusion based on latent low-rank representation nested with rolling guided image filtering
CN112200123B (en) Hyperspectral open set classification method combining dense connection network and sample distribution
CN107680081B (en) Hyperspectral image unmixing method based on convolutional neural network
CN111160392A (en) Hyperspectral classification method based on wavelet width learning system
CN114998167A (en) Hyperspectral and multispectral image fusion method based on space-spectrum combined low rank
CN115512192A (en) Multispectral and hyperspectral image fusion method based on cross-scale octave convolution network
CN110940638B (en) Hyperspectral image sub-pixel level water body boundary detection method and detection system
CN115760814A (en) Remote sensing image fusion method and system based on double-coupling deep neural network
CN114972885A (en) Multi-modal remote sensing image classification method based on model compression
CN109271874B (en) Hyperspectral image feature extraction method fusing spatial and spectral information
CN111507454B (en) Improved cross cortical neural network model for remote sensing image fusion
CN111598115B (en) SAR image fusion method based on cross cortical neural network model
Zheng et al. Deep residual spatial attention network for hyperspectral pansharpening
Foucher et al. Deep speckle noise filtering
CN113066030B (en) Multispectral image panchromatic sharpening method and system based on space-spectrum fusion network
CN114022364A (en) Multispectral image spectrum hyper-segmentation method and system based on spectrum library optimization learning
WO2021148769A1 (en) Computer implemented method of spectral un-mixing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant