CN115861083B - Hyperspectral and multispectral remote sensing fusion method for multiscale and global features - Google Patents

Hyperspectral and multispectral remote sensing fusion method for multiscale and global features

Info

Publication number
CN115861083B
CN115861083B
Authority
CN
China
Prior art keywords
image
hyperspectral
fusion
multispectral
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310193616.4A
Other languages
Chinese (zh)
Other versions
CN115861083A (en)
Inventor
朱春宇
吴琼
张盈
巩丽玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN202310193616.4A priority Critical patent/CN115861083B/en
Publication of CN115861083A publication Critical patent/CN115861083A/en
Application granted granted Critical
Publication of CN115861083B publication Critical patent/CN115861083B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a hyperspectral and multispectral remote sensing fusion method based on multiscale and global features. Residual information between the fused image and the hyperspectral image is extracted from the hyperspectral and multispectral images and then injected into an up-sampled hyperspectral image to obtain the fusion result. The network is divided into a spectrum-preserving branch and a detail-injection branch: the spectrum-preserving branch uses spatial interpolation to bring the hyperspectral image to the same spatial dimensions as the multispectral image, while the detail-injection branch introduces a residual multiscale convolution module and a global context module whose function is to extract the residual information and inject it into the hyperspectral image. The method can generate a remote sensing image with both high spectral resolution and high spatial resolution, adding a new method to the field of remote sensing image fusion.

Description

Hyperspectral and multispectral remote sensing fusion method for multiscale and global features
Technical Field
The invention relates to the technical field of image processing, in particular to a hyperspectral and multispectral remote sensing fusion method of multiscale and global features.
Background
Hyperspectral remote sensing images (HSI) are widely used in target identification, environmental monitoring, resource investigation, vegetation inversion, and similar applications, but sensor sampling limits constrain their spatial resolution, which in turn limits their practical effectiveness. Multispectral remote sensing images (MSI) have lower spectral resolution, but their high spatial resolution can compensate for the insufficient spatial detail of HSI. Since spatial and spectral information are equally important in remote sensing applications, fusing HSI and MSI is an effective and common way to improve the spatial resolution of HSI and broaden the range of remote sensing applications.
HSI-MSI fusion methods include physical-model algorithms and deep learning algorithms. Deep learning algorithms use data to drive the fitting of complex nonlinear mappings through multi-layer structures; they adapt to data more flexibly, and their fusion quality is generally superior to that of physical-model algorithms.
Despite this good performance, deep learning-based algorithms still leave considerable room for improvement:
1) Most algorithms lack physical constraints and are difficult for humans to interpret;
2) Spatial context modeling and long-range dependencies help extend the global understanding of remote sensing images, but most existing algorithms build only local dependencies through convolution, leaving context information and long-range dependencies insufficiently modeled;
3) L2 loss is still the common loss function, which can lead to insufficient expression of high-frequency information in the fused image;
4) Network convergence requires a large number of iterations, and learning ability needs further optimization.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, and provides a hyperspectral and multispectral remote sensing fusion method of multiscale and global features.
The technical scheme provided by the invention is as follows: a hyperspectral and multispectral remote sensing fusion method of multiscale and global features, comprising the following steps:
S1: Because no real fused image exists, sample construction follows the Wald protocol: the hyperspectral image is used as the label image, and the hyperspectral and multispectral images are each spatially downsampled to generate the data set required for network parameter tuning;
S2: Under the constraint of a detail injection framework, the network comprises two branches, spectrum preservation and detail injection, and a convolutional neural network is constructed by combining a residual multi-scale convolution module and a global context modeling module;
where the injection framework is expressed as:
F_k = upsample(LR) + g_k · δ = upsample(LR) + g_k · Detract(HR)
where F is the fused image, k indexes the bands of the low-spatial-resolution hyperspectral image LR, upsample denotes an upsampling operation based on bilinear interpolation, g_k is the injection coefficient, δ is the spatial detail information of the high-resolution multispectral image HR, and Detract is the detail extraction operation;
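Purely as an illustration, the injection framework above can be sketched in PyTorch-style Python; the function name, the tensor layout, and the treatment of g_k and δ as precomputed tensors are assumptions for exposition, not details fixed by the patent:

```python
import torch
import torch.nn.functional as F


def detail_injection_fusion(lr_hsi, injection_gain, detail):
    """Detail-injection framework: F_k = upsample(LR) + g_k * delta.

    lr_hsi:         (B, K, h, w) low-spatial-resolution hyperspectral image
    injection_gain: per-band injection coefficients g_k, broadcastable
                    to (B, K, H, W)
    detail:         (B, K, H, W) spatial detail delta = Detract(HR),
                    extracted from the high-resolution multispectral image
    """
    scale = detail.shape[-1] // lr_hsi.shape[-1]
    # Spectrum-preserving path: bilinear upsampling of the HSI.
    up = F.interpolate(lr_hsi, scale_factor=scale,
                       mode="bilinear", align_corners=False)
    # Detail-injection path: add the gain-weighted spatial detail.
    return up + injection_gain * detail
```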
s3: training a hyperspectral and multispectral fusion network of the multiscale and global context characteristics by using an Adam optimization algorithm, and training by adopting a loss function combining content, spectrum and edge loss in the training process to obtain a fully-trained convolutional neural network model;
when training a hyperspectral and multispectral fusion network of multiscale and global context features by adopting an Adam optimization algorithm, a loss function is selected as a linear combination of content, spectrum and edge loss, and the specific expression is as follows:
Loss = L_Content + λ1 · L_Spectral + λ2 · L_Edge
where λ1 and λ2 are weight coefficients, L_Content is the content loss, L_Spectral is the spectral loss, and L_Edge is the edge loss; the coefficients are chosen as λ1 = λ2 = 1. The expression for the content loss is:
L_Content = (1/n) · Σ_{i=1..n} |Fusion_i − Ref_i|
where Fusion_i and Ref_i are the pixel values with index i of the fused image and the reference image respectively, and n is the total number of pixels;
the expression of spectral loss is:
Figure GDA0004171326650000031
wherein Fusion ij =(Fusion ij1 ,Fusion ij2 ,…,Fusion ijB ),Ref ij =(Ref ij1 ,Ref ij2 ,…,Ref ijB ) Spectral vectors of the fused image and the reference image at pixel points (i, j), B is the number of bands, and the expression of edge loss is:
L_Edge = L1loss(LoG(Fusion), LoG(Ref))
where LoG is the Laplacian-of-Gaussian operator;
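For illustration, a minimal PyTorch sketch of this combined loss follows; the LoG kernel size and sigma are assumed values, since the patent does not fix them:

```python
import torch
import torch.nn.functional as F


def log_kernel(sigma=1.0, size=5, device="cpu"):
    """Discrete Laplacian-of-Gaussian kernel (assumed size and sigma)."""
    ax = torch.arange(size, dtype=torch.float32, device=device) - size // 2
    yy, xx = torch.meshgrid(ax, ax, indexing="ij")
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * torch.exp(-r2 / (2 * sigma ** 2))
    return (k - k.mean()).view(1, 1, size, size)  # zero-sum edge response


def fusion_loss(fusion, ref, lam1=1.0, lam2=1.0):
    """Loss = L_Content + lam1 * L_Spectral + lam2 * L_Edge
    for (B, C, H, W) fused and reference images."""
    # Content loss: L1 distance over all pixels.
    l_content = F.l1_loss(fusion, ref)

    # Spectral loss: spectral angle mapper (SAM) between per-pixel
    # spectra, averaged over all pixels.
    dot = (fusion * ref).sum(dim=1)
    denom = fusion.norm(dim=1) * ref.norm(dim=1) + 1e-8
    l_spectral = torch.acos((dot / denom).clamp(-1 + 1e-7, 1 - 1e-7)).mean()

    # Edge loss: L1 distance between LoG responses, applied band by band.
    c = fusion.shape[1]
    k = log_kernel(device=fusion.device).repeat(c, 1, 1, 1)
    l_edge = F.l1_loss(F.conv2d(fusion, k, padding=2, groups=c),
                       F.conv2d(ref, k, padding=2, groups=c))

    return l_content + lam1 * l_spectral + lam2 * l_edge
```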
s4: and (3) inputting the multispectral image and the hyperspectral image to be fused into the convolutional neural network model trained in the step (S3) to obtain the remote sensing image with high space and high spectral resolution.
Preferably, the step S1 specifically includes:
according to Wald protocol, firstly, the hyperspectral image and the multispectral image are filtered by adopting Gaussian filtering, then, the hyperspectral image and the multispectral image are subjected to corresponding multiple downsampling by adopting a bilinear interpolation method, the hyperspectral image and the multispectral image are used as simulated hyperspectral images and multispectral images which are input with low resolution, and the original hyperspectral image is used as a reference image.
Further preferably, the step S2 specifically includes:
the spectrum-preserving branch spatially up-samples the hyperspectral image, adaptively bringing it to the same spatial resolution as the multispectral image; this corresponds to upsample(LR) in the injection framework. The detail-injection branch comprises four sub-networks (initialization, feature extraction, feature fusion, and spatial detail injection) and extracts the spatial injection component g_k · δ from the multispectral and hyperspectral images. The initialization sub-network extracts shallow features of the hyperspectral and multispectral images through parallel 3×3 convolutions and maps both to the same feature dimensions for subsequent feature extraction, specifically expressed as:
x = PReLU(f_size(3)(Bilinear(HSI, r)))
y = PReLU(f_size(3)(MSI))
where HSI is the hyperspectral image, MSI is the multispectral image, x and y are the outputs of the initialization module, f is the convolution operator, size(k) denotes a convolution kernel of size k×k, Bilinear is bilinear interpolation, and r is the ratio of the MSI to HSI spatial resolution; the feature extraction sub-network uses residual multi-scale convolution modules in two branches to extract features of different receptive fields from the hyperspectral and multispectral images, expressed as:
F_x = RMSC(x)
F_y = RMSC(y)
where RMSC is the residual multi-scale convolution operator, F_x are the extracted HSI features, F_y are the extracted MSI features, and Concat denotes the feature-map concatenation used inside the RMSC module to merge its parallel branches; the feature fusion sub-network fuses the extracted hyperspectral and multispectral features by element-wise addition, then feeds the result sequentially through a residual multi-scale convolution module and a global context module for feature fusion, expressed as:
F_xy = F_x + F_y
F_E = GC(RMSC(F_xy))
where F_xy is the fused feature, GC denotes feature extraction with the GC block, and F_E is the feature output by the GC block; the spatial detail injection sub-network reduces the dimensionality of the obtained features through a 3×3 convolution to obtain the spatial detail residual, and injects it into the up-sampled hyperspectral image generated by the spectrum-preserving branch to produce the HR-HSI, expressed as:
Fusion = Bilinear(HSI, r) + f_size(3)(F_E)
where Fusion is the fused image.
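To make the two-branch architecture concrete, here is a hedged PyTorch sketch; the internal layout of the RMSC block (branch kernel sizes, channel width) and of the GC block (modeled here on GCNet-style attention pooling) are assumptions where the patent text leaves details unspecified:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RMSC(nn.Module):
    """Residual multi-scale convolution block (a sketch): two parallel
    branches with different receptive fields, concatenated (Concat),
    fused by a 1x1 convolution, plus a residual connection."""

    def __init__(self, ch):
        super().__init__()
        self.branch3 = nn.Conv2d(ch, ch, 3, padding=1)  # smaller receptive field
        self.branch5 = nn.Conv2d(ch, ch, 5, padding=2)  # larger receptive field
        self.fuse = nn.Conv2d(2 * ch, ch, 1)
        self.act = nn.PReLU()

    def forward(self, x):
        multi = torch.cat([self.branch3(x), self.branch5(x)], dim=1)
        return self.act(self.fuse(multi) + x)           # residual connection


class GCBlock(nn.Module):
    """Global context block in the style of GCNet (Cao et al. 2019):
    attention pooling followed by a bottleneck transform, added back
    to the input to model long-range dependencies."""

    def __init__(self, ch, reduction=8):
        super().__init__()
        self.mask = nn.Conv2d(ch, 1, 1)
        self.transform = nn.Sequential(
            nn.Conv2d(ch, ch // reduction, 1),
            nn.LayerNorm([ch // reduction, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        attn = self.mask(x).view(b, 1, h * w).softmax(dim=-1)       # (B,1,HW)
        ctx = torch.bmm(x.view(b, c, h * w), attn.transpose(1, 2))  # (B,C,1)
        return x + self.transform(ctx.view(b, c, 1, 1))


class FusionNet(nn.Module):
    """Two-branch fusion network sketch: a spectrum-preserving branch
    (bilinear upsampling) and a detail-injection branch made of the
    initialization, feature extraction, feature fusion, and spatial
    detail injection sub-networks."""

    def __init__(self, hsi_bands, msi_bands, feat=64, ratio=4):
        super().__init__()
        self.ratio = ratio
        self.init_h = nn.Sequential(nn.Conv2d(hsi_bands, feat, 3, padding=1),
                                    nn.PReLU())
        self.init_m = nn.Sequential(nn.Conv2d(msi_bands, feat, 3, padding=1),
                                    nn.PReLU())
        self.rmsc_h, self.rmsc_m = RMSC(feat), RMSC(feat)
        self.fuse_rmsc, self.gc = RMSC(feat), GCBlock(feat)
        self.inject = nn.Conv2d(feat, hsi_bands, 3, padding=1)

    def forward(self, hsi, msi):
        # Spectrum-preserving branch: Bilinear(HSI, r).
        up = F.interpolate(hsi, scale_factor=self.ratio, mode="bilinear",
                           align_corners=False)
        # Initialization: x = PReLU(f_3(Bilinear(HSI, r))), y = PReLU(f_3(MSI)).
        x, y = self.init_h(up), self.init_m(msi)
        # Feature extraction and element-wise fusion: F_xy = F_x + F_y.
        f_xy = self.rmsc_h(x) + self.rmsc_m(y)
        # Feature fusion sub-network: F_E = GC(RMSC(F_xy)).
        fe = self.gc(self.fuse_rmsc(f_xy))
        # Spatial detail injection: Fusion = Bilinear(HSI, r) + f_3(F_E).
        return up + self.inject(fe)
```

The patent only states that a global context module is used; any comparable global-context design could stand in for the GCNet-style block sketched here.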
In summary, the invention has the following advantages:
1) The network is built under the constraint of a physical detail-injection framework, giving it a degree of interpretability;
2) A residual multi-scale convolution module and a global context modeling module are embedded in the network, enabling it to capture multi-scale context information and long-range feature dependencies and strengthening its context feature encoding and global understanding;
3) The network uses a new loss function to enhance the spectral and edge fidelity of the fused image: L1 loss serves as the content loss to keep the fused image consistent with the reference in texture and tone, SAM serves as the spectral loss to reduce spectral distortion, and Laplacian-of-Gaussian (LoG) loss serves as the edge loss to improve reconstruction of high-frequency information such as object edges;
4) Compared with existing deep learning algorithms, the method reaches higher fusion accuracy within a small number of iterations and shows better learning ability.
Drawings
FIG. 1 is a flow chart of the hyperspectral and multispectral remote sensing fusion method of multiscale and global features provided by the invention;
FIG. 2 is an algorithm diagram of the hyperspectral and multispectral remote sensing fusion method of multiscale and global features provided by the invention;
FIG. 3 shows the fusion results of the hyperspectral and multispectral fusion algorithms on the HYDICE dataset;
FIG. 4 compares the fusion quality on the validation set during training for the deep learning algorithm of the invention and the compared algorithms.
Detailed Description
The present invention will be described in further detail with reference to the following examples in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in FIG. 1, the hyperspectral and multispectral remote sensing fusion method of multiscale and global features includes the following steps:
S1: Since no ideal reference image exists in reality, training samples follow the Wald protocol: the hyperspectral and multispectral remote sensing images are first Gaussian-filtered and then downsampled by the corresponding factors using bilinear interpolation; the downsampled images serve as the hyperspectral and multispectral training inputs, and the original hyperspectral image serves as the reference image, generating the data set used to train the network.
S2: Construct the hyperspectral and multispectral remote sensing image fusion network with multiscale and global context features.
The network is a dual-stream input network comprising a spectrum-preserving branch and a detail-injection branch. The spectrum-preserving branch spatially up-samples the hyperspectral image, adaptively bringing it to the same spatial resolution as the multispectral image. The detail-injection branch comprises four sub-networks: an initialization sub-network, a feature extraction sub-network, a feature fusion sub-network, and a detail injection sub-network. The initialization sub-network consists of two parallel 3×3 convolutions; the feature extraction sub-network consists of two parallel residual multi-scale convolution modules; the feature fusion sub-network first adds the features from the feature extraction sub-network and then feeds them into a residual multi-scale convolution module (RMSC block) and a global context module (GC block) for feature fusion; the detail injection sub-network first reduces the dimensionality of the fused features with a 3×3 convolution and then injects them into the up-sampled hyperspectral image to obtain a hyperspectral image with high spatial resolution.
S3: Train the hyperspectral and multispectral fusion network with multiscale and global context features using the Adam optimization algorithm, adopting a loss function that combines content, spectral, and edge loss during training, to obtain a fully trained convolutional neural network model. When training the network with the Adam optimization algorithm, the loss function is:
Loss = L_Content + λ1 · L_Spectral + λ2 · L_Edge
where λ1 and λ2 are weight coefficients, L_Content is the content loss, L_Spectral is the spectral loss, and L_Edge is the edge loss; the coefficients are chosen as λ1 = λ2 = 1. The expression for the content loss is:
L_Content = (1/n) · Σ_{i=1..n} |Fusion_i − Ref_i|
where Fusion_i and Ref_i are the pixel values with index i of the fused image and the reference image respectively, and n is the total number of pixels. The expression for the spectral loss is:
L_Spectral = (1/n) · Σ_{(i,j)} arccos( (Fusion_ij · Ref_ij) / (‖Fusion_ij‖ · ‖Ref_ij‖) )
where Fusion_ij = (Fusion_ij1, …, Fusion_ijB) and Ref_ij = (Ref_ij1, …, Ref_ijB) are the spectral vectors of the fused image and the reference image at pixel (i, j), and B is the number of bands. The expression for the edge loss is:
L_Edge = L1loss(LoG(Fusion), LoG(Ref))
where LoG is the Laplacian-of-Gaussian operator.
S4: Input the hyperspectral and multispectral images to be fused into the trained network to obtain the fused hyperspectral image with high spatial resolution.
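A minimal end-to-end sketch of steps S3 and S4, reusing the FusionNet and fusion_loss sketches above; the tensor shapes, learning rate, and iteration count are placeholder assumptions, not values from the patent:

```python
import torch

# Assumed setup: FusionNet and fusion_loss are the earlier sketches, and
# the dummy tensors stand in for a real Wald-protocol data set.
model = FusionNet(hsi_bands=31, msi_bands=4, ratio=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

lr_hsi = torch.rand(2, 31, 16, 16)   # simulated low-resolution HSI batch
lr_msi = torch.rand(2, 4, 64, 64)    # simulated MSI batch at 4x resolution
ref = torch.rand(2, 31, 64, 64)      # reference (original HSI)

for step in range(100):              # iteration count is a placeholder
    optimizer.zero_grad()
    fused = model(lr_hsi, lr_msi)
    loss = fusion_loss(fused, ref)   # content + spectral + edge loss
    loss.backward()
    optimizer.step()

# Step S4: feed the image pair to be fused to the trained model.
with torch.no_grad():
    hr_hsi = model(lr_hsi, lr_msi)   # placeholder inputs for illustration
```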
In this embodiment, FIG. 2 shows the hyperspectral and multispectral remote sensing image fusion network based on multiscale and global context features; its input is the image pair to be fused and its output is the fused image.
To evaluate the performance of the invention, the HYDICE dataset was chosen as the test subject and compared against currently popular hyperspectral and multispectral image fusion algorithms: CNMF and GSA are physical-model-driven algorithms, UDALN is an unsupervised deep network algorithm, and SSRNET, TFNet, ResTFNet, and MSDCNN are supervised deep learning algorithms. The experimental results are shown in FIG. 3: the first row shows the fusion results of the different algorithms, where REF denotes the reference image, and the second row shows the corresponding SAM heat maps, where a lighter color indicates a better fusion effect. The fusion result of the invention gives the best visual result.
FIG. 4 compares the fusion quality on the validation set during training for the deep learning algorithm of the invention and the compared algorithms, where curve a is the invention. A larger PSNR indicates better quality, while smaller values of the other three indices indicate better fusion quality; the invention reaches good fusion quality within a small number of iterations, demonstrating stronger learning ability than the compared popular fusion algorithms.
The quantitative evaluation indices of the experiment are shown in Table 1; the invention achieves the best performance on all four indices.
Table 1 comparison of fusion effects of different algorithms
To evaluate the rationality of the new loss function used in the invention, the proposed loss function was compared with the commonly used L1 and L2 loss functions and with different combinations of its own components on the Hyperion sensor dataset. Table 2 gives the quantitative evaluation of fusion under these different loss-function constraints. The proposed loss function performs best, and each of its components effectively enhances fusion quality.
TABLE 2 fusion quality of the invention at different loss functions
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It is to be understood that the invention is not limited to the precise construction shown and described above, and that various modifications and changes may be effected therein without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (3)

1. A hyperspectral and multispectral remote sensing fusion method for multiscale and global features, characterized by comprising the following steps:
S1: Because no real fused image exists, sample construction follows the Wald protocol: the hyperspectral image is used as the label image, and the hyperspectral and multispectral images are each spatially downsampled to generate the data set required for network parameter tuning;
S2: Under the constraint of a detail injection framework, the network comprises two branches, spectrum preservation and detail injection, and a convolutional neural network is constructed by combining a residual multi-scale convolution module and a global context modeling module;
where the injection framework is expressed as:
F_k = upsample(LR) + g_k · δ = upsample(LR) + g_k · Detract(HR)
where F is the fused image, k indexes the bands of the low-spatial-resolution hyperspectral image LR, upsample denotes an upsampling operation based on bilinear interpolation, g_k is the injection coefficient, δ is the spatial detail information of the high-resolution multispectral image HR, and Detract is the detail extraction operation;
S3: Train the hyperspectral and multispectral fusion network with multiscale and global context features using the Adam optimization algorithm, adopting a loss function that combines content, spectral, and edge loss during training, to obtain a fully trained convolutional neural network model;
when training the network with the Adam optimization algorithm, the loss function is selected as a linear combination of content, spectral, and edge loss, with the specific expression:
Loss = L_Content + λ1 · L_Spectral + λ2 · L_Edge
where λ1 and λ2 are weight coefficients, L_Content is the content loss, L_Spectral is the spectral loss, and L_Edge is the edge loss; the coefficients are chosen as λ1 = λ2 = 1. The expression for the content loss is:
L_Content = (1/n) · Σ_{i=1..n} |Fusion_i − Ref_i|
where Fusion_i and Ref_i are the pixel values with index i of the fused image and the reference image respectively, and n is the total number of pixels;
the expression of spectral loss is:
Figure QLYQS_2
wherein Fusion ij =(Fusion ij1 ,Fusion ij2 ,...,Fusion ijB ),Ref ij =(Ref ij1 ,Ref ij2 ,...,Ref ijB ) Spectral vectors of the fused image and the reference image at pixel points (i, j), B is the number of bands, and the expression of edge loss is:
L_Edge = L1loss(LoG(Fusion), LoG(Ref))
where LoG is the Laplacian-of-Gaussian operator;
S4: Input the multispectral image and hyperspectral image to be fused into the convolutional neural network model trained in step S3 to obtain a remote sensing image with both high spatial and high spectral resolution.
2. The hyperspectral and multispectral remote sensing fusion method for multiscale and global features according to claim 1, wherein step S1 specifically comprises:
according to Wald protocol, firstly, the hyperspectral image and the multispectral image are filtered by adopting Gaussian filtering, then, the hyperspectral image and the multispectral image are subjected to corresponding multiple downsampling by adopting a bilinear interpolation method, the hyperspectral image and the multispectral image are used as simulated hyperspectral images and multispectral images which are input with low resolution, and the original hyperspectral image is used as a reference image.
3. The hyperspectral and multispectral remote sensing fusion method for multiscale and global features according to claim 1, wherein step S2 specifically comprises:
the spectrum-preserving branch spatially up-samples the hyperspectral image, adaptively bringing it to the same spatial resolution as the multispectral image; this corresponds to upsample(LR) in the injection framework. The detail-injection branch comprises four sub-networks (initialization, feature extraction, feature fusion, and spatial detail injection) and extracts the spatial injection component g_k · δ from the multispectral and hyperspectral images. The initialization sub-network extracts shallow features of the hyperspectral and multispectral images through parallel 3×3 convolutions and maps both to the same feature dimensions for subsequent feature extraction, specifically expressed as:
x = PReLU(f_size(3)(Bilinear(HSI, r)))
y = PReLU(f_size(3)(MSI))
where HSI is the hyperspectral image, MSI is the multispectral image, x and y are the outputs of the initialization module, f is the convolution operator, size(k) denotes a convolution kernel of size k×k, Bilinear is bilinear interpolation, and r is the ratio of the MSI to HSI spatial resolution; the feature extraction sub-network uses residual multi-scale convolution modules in two branches to extract features of different receptive fields from the hyperspectral and multispectral images, expressed as:
F_x = RMSC(x)
F_y = RMSC(y)
where RMSC is the residual multi-scale convolution operator, F_x are the extracted HSI features, F_y are the extracted MSI features, and Concat denotes the feature-map concatenation used inside the RMSC module to merge its parallel branches; the feature fusion sub-network fuses the extracted hyperspectral and multispectral features by element-wise addition, then feeds the result sequentially through a residual multi-scale convolution module and a global context module for feature fusion, expressed as:
F_xy = F_x + F_y
F_E = GC(RMSC(F_xy))
where F_xy is the fused feature, GC denotes feature extraction with the GC block, and F_E is the feature output by the GC block; the spatial detail injection sub-network reduces the dimensionality of the obtained features through a 3×3 convolution to obtain the spatial detail residual, and injects it into the up-sampled hyperspectral image generated by the spectrum-preserving branch to produce the HR-HSI, expressed as:
Fusion = Bilinear(HSI, r) + f_size(3)(F_E)
where Fusion is the fused image.
CN202310193616.4A 2023-03-03 2023-03-03 Hyperspectral and multispectral remote sensing fusion method for multiscale and global features Active CN115861083B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310193616.4A CN115861083B (en) 2023-03-03 2023-03-03 Hyperspectral and multispectral remote sensing fusion method for multiscale and global features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310193616.4A CN115861083B (en) 2023-03-03 2023-03-03 Hyperspectral and multispectral remote sensing fusion method for multiscale and global features

Publications (2)

Publication Number Publication Date
CN115861083A CN115861083A (en) 2023-03-28
CN115861083B true CN115861083B (en) 2023-05-16

Family

ID=85659786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310193616.4A Active CN115861083B (en) 2023-03-03 2023-03-03 Hyperspectral and multispectral remote sensing fusion method for multiscale and global features

Country Status (1)

Country Link
CN (1) CN115861083B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117314757B * 2023-11-30 2024-02-09 Hunan University Space-spectrum-frequency multi-domain fusion hyperspectral computational imaging method, system and medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119444A * 2021-11-29 2022-03-01 Wuhan University Multi-source remote sensing image fusion method based on deep neural network

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110533620B * 2019-07-19 2021-09-10 Xidian University Hyperspectral and full-color image fusion method based on AAE extraction spatial features
CN113129247B * 2021-04-21 2023-04-07 Chongqing University of Posts and Telecommunications Remote sensing image fusion method and medium based on self-adaptive multi-scale residual convolution
CN113327218B * 2021-06-10 2023-08-25 Donghua University Hyperspectral and full-color image fusion method based on cascade network
CN113643197B * 2021-07-19 2023-06-20 Hainan University Two-order lightweight network full-color sharpening method combining guided filtering and NSCT
CN115512192A * 2022-08-16 2022-12-23 Nanjing Audit University Multispectral and hyperspectral image fusion method based on cross-scale octave convolution network

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119444A * 2021-11-29 2022-03-01 Wuhan University Multi-source remote sensing image fusion method based on deep neural network

Also Published As

Publication number Publication date
CN115861083A (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN109741256B (en) Image super-resolution reconstruction method based on sparse representation and deep learning
CN110533620B (en) Hyperspectral and full-color image fusion method based on AAE extraction spatial features
CN110660038B (en) Multispectral image and full-color image fusion method based on generation countermeasure network
CN110415199B (en) Multispectral remote sensing image fusion method and device based on residual learning
CN109102469B (en) Remote sensing image panchromatic sharpening method based on convolutional neural network
CN112287978A (en) Hyperspectral remote sensing image classification method based on self-attention context network
Hu et al. Pan-sharpening via multiscale dynamic convolutional neural network
CN111080567A (en) Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network
CN109636769A (en) EO-1 hyperion and Multispectral Image Fusion Methods based on the intensive residual error network of two-way
CN114119444B (en) Multi-source remote sensing image fusion method based on deep neural network
CN112150354B (en) Single image super-resolution method combining contour enhancement and denoising statistical prior
CN115861083B (en) Hyperspectral and multispectral remote sensing fusion method for multiscale and global features
CN110060225B (en) Medical image fusion method based on rapid finite shear wave transformation and sparse representation
CN113327218A (en) Hyperspectral and full-color image fusion method based on cascade network
CN111951164A (en) Image super-resolution reconstruction network structure and image reconstruction effect analysis method
CN116309070A (en) Super-resolution reconstruction method and device for hyperspectral remote sensing image and computer equipment
CN112347945A (en) Noise-containing remote sensing image enhancement method and system based on deep learning
CN111008936B (en) Multispectral image panchromatic sharpening method
CN112785539A (en) Multi-focus image fusion method based on image adaptive decomposition and parameter adaptive
CN115760814A (en) Remote sensing image fusion method and system based on double-coupling deep neural network
Guo et al. MDFN: Mask deep fusion network for visible and infrared image fusion without reference ground-truth
CN113744134A (en) Hyperspectral image super-resolution method based on spectrum unmixing convolution neural network
CN113240581A (en) Real world image super-resolution method for unknown fuzzy kernel
CN116883799A (en) Hyperspectral image depth space spectrum fusion method guided by component replacement model
CN111882512A (en) Image fusion method, device and equipment based on deep learning and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant