CN115861083A - Hyperspectral and multispectral remote sensing fusion method for multi-scale and global features


Info

Publication number
CN115861083A
Authority
CN
China
Prior art keywords
image
hyperspectral
multispectral
fusion
scale
Prior art date
Legal status
Granted
Application number
CN202310193616.4A
Other languages
Chinese (zh)
Other versions
CN115861083B (en)
Inventor
Zhu Chunyu
Wu Qiong
Zhang Ying
Gong Liwei
Current Assignee
Jilin University
Original Assignee
Jilin University
Priority date
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN202310193616.4A
Publication of CN115861083A
Application granted
Publication of CN115861083B
Status: Active


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a hyperspectral and multispectral remote sensing fusion method with multi-scale and global features. The method extracts the residual information between the fused image and the hyperspectral image from the hyperspectral and multispectral images, then injects this residual information into the up-sampled hyperspectral image to obtain the fusion result. The network is divided into a spectral preservation branch and a detail injection branch: the spectral preservation branch uses spatial interpolation to bring the hyperspectral image to the same spatial size as the multispectral image, while the detail injection branch introduces a residual multi-scale convolution module and a global context module to extract the residual information and inject it into the hyperspectral image. The method can generate remote sensing images with both high spectral resolution and high spatial resolution, and adds a new method to the field of remote sensing image fusion.

Description

Hyperspectral and multispectral remote sensing fusion method for multi-scale and global features
Technical Field
The invention relates to the technical field of image processing, in particular to a hyperspectral and multispectral remote sensing fusion method of multi-scale and global features.
Background
High spectral resolution remote sensing images (HSI) are widely used in target identification, environmental monitoring, resource surveys, vegetation inversion and the like; limited by the sampling constraints of the sensor, however, their spatial resolution is low, which restricts their application. Multispectral remote sensing images (MSI) have low spectral resolution, but their high spatial resolution can compensate for the insufficient spatial expression of HSI. Spatial and spectral information are equally important in remote sensing applications, and fusing HSI and MSI is an effective and common way to improve the spatial resolution of HSI, thereby broadening the scope of remote sensing applications.
Fusion of HSI and MSI comprises physical-model and deep learning algorithms. Driven by data, deep learning algorithms fit the nonlinear mapping between complex data through a multilayer structure, adapt to data more flexibly, and generally achieve better fusion than physical-model algorithms.
Despite their good performance, deep-learning-based algorithms still leave considerable room for improvement:
1) Most algorithms lack physical constraints and interpretability in terms of human perception;
2) Spatial context modeling and long-range dependencies benefit the global understanding of remote sensing images; most existing algorithms establish only local dependencies through convolution, so context information and long-range dependency modeling are insufficient;
3) L2 loss is still the common loss function, which can lead to insufficient expression of high-frequency information in the fused image;
4) Network convergence requires a large number of iterations, and learning capability needs further optimization.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a hyperspectral and multispectral remote sensing fusion method with multi-scale and global features.
The technical scheme provided by the invention is as follows: a hyperspectral and multispectral remote sensing fusion method of multi-scale and global features comprises the following steps:
s1: because no fusion image exists in reality, the sample construction follows the Wald protocol, the hyperspectral image is used as a label image, and the hyperspectral image and the multispectral image are respectively subjected to spatial and spectral downsampling to generate a data set required by network parameter adjustment;
s2: under the constraint of a detail injection frame, the network comprises a spectrum holding branch and a detail injection branch, and a convolutional neural network is constructed by combining a residual error multi-scale convolution module and a global context modeling module;
wherein the injection framework expression is:
$$F_k = \mathrm{upsample}(\mathrm{LR})_k + g_k \cdot \mathrm{extract}(\mathrm{HR})$$

where F is the fused image, k is the band index of the low-spatial-resolution hyperspectral image LR, upsample denotes an upsampling operation based on bilinear interpolation, $g_k$ is the injection coefficient, and extract(HR) is the detail extraction operation that extracts the spatial detail information of the multispectral image HR;
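Read literally, this framework can be sketched in a few lines of PyTorch (a sketch only: the tensor shapes and the per-band scalar form of $g_k$ are assumptions; in the network described below, the injection component is produced by the detail injection branch rather than by explicit coefficients):

```python
import torch
import torch.nn.functional as F

def detail_injection(lr_hsi, detail, g):
    """F_k = upsample(LR)_k + g_k * extract(HR): band-wise detail injection.

    lr_hsi: (B, K, h, w) low-spatial-resolution hyperspectral image
    detail: (B, K, H, W) spatial detail extracted from the MSI (extract(HR))
    g:      (K,)         per-band injection coefficients g_k (assumed scalars)
    """
    up = F.interpolate(lr_hsi, size=detail.shape[-2:], mode='bilinear',
                       align_corners=False)   # bilinear upsample(LR)
    return up + g.view(1, -1, 1, 1) * detail  # inject weighted detail per band
```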
s3: training a hyperspectral and multispectral fusion network with multi-scale and global context characteristics by using an Adam optimization algorithm, and training by using a loss function combining content, spectrum and edge loss in a training process to obtain a fully-trained convolutional neural network model;
s4: and inputting the multispectral image to be fused and the hyperspectral image into the trained convolutional neural network model in the S3 to obtain a remote sensing image with high spatial resolution and high spectral resolution.
Preferably, the step S1 specifically includes:
according to the Wald protocol, filtering the hyperspectral image and the multispectral image by adopting Gaussian filtering, then carrying out corresponding multiple downsampling on the hyperspectral image and the multispectral image by using a bilinear interpolation method to obtain a simulated low-resolution input hyperspectral image and a multispectral image, and taking the original hyperspectral image as a reference image.
Further preferably, the step S2 specifically includes: the spectral preservation branch performs spatial up-sampling on the hyperspectral image, adaptively sampling it to the same spatial resolution as the multispectral image, a process corresponding to upsample(LR) in the injection framework; the detail injection branch comprises four sub-networks, namely initialization, feature extraction, feature fusion and spatial detail injection, which extract the spatial injection component $g_k \cdot \mathrm{extract}(\mathrm{HR})$ from the multispectral image and the hyperspectral image.
The initialization sub-network extracts shallow features of the hyperspectral image and the multispectral image with parallel 3 × 3 convolutions and maps both to the same feature dimension to facilitate subsequent feature extraction; it is specifically expressed as:

$$x = f_{size(3)}\left(\mathrm{Bilinear}_r(\mathrm{HSI})\right), \qquad y = f_{size(3)}(\mathrm{MSI})$$
in the formula, HSI is the hyperspectral image, MSI is the multispectral image, x and y are the outputs of the initialization module, f is the convolution operator, size(k) denotes a k × k convolution kernel, Bilinear is bilinear interpolation, and r is the ratio of the spatial resolutions of the MSI and the HSI; the feature extraction sub-network adopts a residual multi-scale convolution module with two branches to extract features of different receptive fields from the hyperspectral image and the multispectral image, with the expression:
$$F_x = \mathrm{RMSC}(x), \qquad F_y = \mathrm{RMSC}(y)$$

where RMSC is the residual multi-scale convolution operator, $F_x$ denotes the extracted HSI features, $F_y$ denotes the extracted MSI features, and Concat is the feature-map concatenation operation used within the module; the feature fusion sub-network fuses the extracted hyperspectral and multispectral features by adding corresponding elements and feeds the result through the residual multi-scale convolution module and the global context module in sequence, with the expression:

$$F_{xy} = \mathrm{RMSC}(F_x + F_y), \qquad F = \mathrm{GC}(F_{xy})$$

where $F_{xy}$ is the output fused feature, GC denotes feature extraction with the GC Block, and $F$ is the feature output by the GC Block; the spatial detail injection sub-network reduces the dimensionality of the obtained features through a 3 × 3 convolution to obtain the spatial detail residual and injects it into the up-sampled hyperspectral image generated by the spectral preservation branch to generate the HR-HSI, with the expression:

$$\mathrm{Fusion} = f_{size(3)}(F) + \mathrm{Bilinear}_r(\mathrm{HSI})$$

where Fusion is the fused image.
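To make the two branches and four sub-networks concrete, here is a minimal PyTorch sketch. It is a reading of the description above, not the patent's exact network: the RMSC branch kernel sizes (3 and 5), the channel width and the reduction ratio of the GC block are assumptions, and the GC block follows the published GCNet-style global context design that the text's "GC Block" appears to reference.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSC(nn.Module):
    """Residual multi-scale convolution block (sketch): two parallel
    receptive-field branches, Concat, 1x1 fusion, residual connection."""
    def __init__(self, ch):
        super().__init__()
        self.b3 = nn.Conv2d(ch, ch, 3, padding=1)   # small receptive field
        self.b5 = nn.Conv2d(ch, ch, 5, padding=2)   # larger receptive field
        self.fuse = nn.Conv2d(2 * ch, ch, 1)
    def forward(self, z):
        m = torch.cat([F.relu(self.b3(z)), F.relu(self.b5(z))], dim=1)  # Concat
        return z + self.fuse(m)                     # residual connection

class GCBlock(nn.Module):
    """GCNet-style global context block: softmax attention pooling over all
    positions, channel transform, broadcast addition (long-range dependency)."""
    def __init__(self, ch, r=4):
        super().__init__()
        self.mask = nn.Conv2d(ch, 1, 1)
        self.transform = nn.Sequential(
            nn.Conv2d(ch, ch // r, 1),
            nn.LayerNorm([ch // r, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch // r, ch, 1))
    def forward(self, z):
        b, c, h, w = z.shape
        attn = torch.softmax(self.mask(z).view(b, 1, h * w), dim=-1)
        ctx = torch.bmm(z.view(b, c, h * w), attn.transpose(1, 2)).view(b, c, 1, 1)
        return z + self.transform(ctx)              # inject global context

class FusionNet(nn.Module):
    """Spectral preservation branch + detail injection branch (sketch)."""
    def __init__(self, hsi_bands, msi_bands, ch=64):
        super().__init__()
        self.init_h = nn.Conv2d(hsi_bands, ch, 3, padding=1)  # x = f(Bilinear_r(HSI))
        self.init_m = nn.Conv2d(msi_bands, ch, 3, padding=1)  # y = f(MSI)
        self.rmsc_x, self.rmsc_y = RMSC(ch), RMSC(ch)         # feature extraction
        self.rmsc_f, self.gc = RMSC(ch), GCBlock(ch)          # feature fusion
        self.reduce = nn.Conv2d(ch, hsi_bands, 3, padding=1)  # detail residual
    def forward(self, hsi, msi):
        up = F.interpolate(hsi, size=msi.shape[-2:], mode='bilinear',
                           align_corners=False)               # spectral preservation
        fx, fy = self.rmsc_x(self.init_h(up)), self.rmsc_y(self.init_m(msi))
        f = self.gc(self.rmsc_f(fx + fy))                     # F = GC(RMSC(F_x + F_y))
        return up + self.reduce(f)                            # Fusion = f(F) + up
```

A forward pass takes the low-resolution HSI and the MSI and returns the fused image, matching the expression $\mathrm{Fusion} = f_{size(3)}(F) + \mathrm{Bilinear}_r(\mathrm{HSI})$ above.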
Further preferably, the step S3 specifically includes: when an Adam optimization algorithm is adopted to train a hyperspectral and multispectral fusion network with multi-scale and global context characteristics, a loss function is selected as a linear combination of content, spectrum and edge loss, and the specific expression is as follows:
$$L = L_{content} + \lambda L_{spectral} + \mu L_{Edge}$$

where $\lambda$ and $\mu$ are weight coefficients, $L_{content}$ is the content loss, $L_{spectral}$ is the spectral loss, and $L_{Edge}$ is the edge loss; the expression of the content loss is:

$$L_{content} = \frac{1}{n} \sum_{i=1}^{n} \left| \mathrm{Fusion}_i - \mathrm{Ref}_i \right|$$

where $\mathrm{Fusion}_i$ and $\mathrm{Ref}_i$ are the pixel values of the fused image and the reference image at index i, and n is the total number of pixels;
the expression of the spectral loss is:

$$L_{spectral} = \frac{1}{HW} \sum_{i=1}^{H} \sum_{j=1}^{W} \arccos\left( \frac{\left\langle v_F^{(i,j)}, v_R^{(i,j)} \right\rangle}{\left\| v_F^{(i,j)} \right\| \left\| v_R^{(i,j)} \right\|} \right)$$

where $v_F^{(i,j)}$ and $v_R^{(i,j)}$ are the spectral vectors (of dimension B, the number of bands) of the fused image and the reference image at pixel (i, j); the expression of the edge loss is:

$$L_{Edge} = \frac{1}{n} \sum_{i=1}^{n} \left| \mathrm{LoG}(\mathrm{Fusion})_i - \mathrm{LoG}(\mathrm{Ref})_i \right|$$

where LoG is the Laplacian-of-Gaussian operator.
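The combined loss can be sketched in PyTorch as follows (the text confirms L1 content loss, SAM spectral loss and LoG edge loss; the LoG kernel parameters and the weight values lam and mu are assumptions):

```python
import torch
import torch.nn.functional as F

def sam_loss(fused, ref, eps=1e-8):
    """Spectral angle mapper between per-pixel spectral vectors, averaged
    over all pixels. fused, ref: (B, C, H, W)."""
    dot = (fused * ref).sum(dim=1)
    denom = fused.norm(dim=1) * ref.norm(dim=1) + eps
    return torch.acos((dot / denom).clamp(-1 + eps, 1 - eps)).mean()

def log_kernel(size=5, sigma=1.0):
    """Discrete Laplacian-of-Gaussian kernel (size and sigma are assumptions)."""
    ax = torch.arange(size, dtype=torch.float32) - size // 2
    xx, yy = torch.meshgrid(ax, ax, indexing='ij')
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * torch.exp(-r2 / (2 * sigma ** 2))
    return (k - k.mean()).view(1, 1, size, size)   # zero-sum: flat regions -> 0

def edge_loss(fused, ref):
    """L1 distance between band-wise LoG responses of fused and reference."""
    c = fused.shape[1]
    k = log_kernel().to(fused.device).repeat(c, 1, 1, 1)
    return F.l1_loss(F.conv2d(fused, k, padding=2, groups=c),
                     F.conv2d(ref, k, padding=2, groups=c))

def total_loss(fused, ref, lam=0.1, mu=0.1):
    """L = L_content + lam * L_spectral + mu * L_Edge (lam, mu assumed)."""
    return F.l1_loss(fused, ref) + lam * sam_loss(fused, ref) + mu * edge_loss(fused, ref)
```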
In summary, the invention mainly has the following beneficial effects:
1) The network is built under the physical constraint of the detail injection framework, giving it a degree of interpretability;
2) A residual multi-scale convolution module and a global context modeling module are embedded in the network, enabling it to capture multi-scale context information and long-range dependencies of features and enhancing its context encoding and global understanding capability;
3) The network uses a new loss function to enhance the spectral and edge fidelity of the fused image: L1 loss is used as the content loss to keep the fused image consistent with the reference image in texture and tone, SAM is used as the spectral loss to reduce spectral distortion, and Laplacian-of-Gaussian (LoG) loss is used as the edge loss to enhance the reconstruction of high-frequency information at ground-object edges;
4) Compared with existing deep learning algorithms, the method achieves higher fusion precision within a small number of iterations and has better learning capability.
Drawings
FIG. 1 is a flow chart of a hyperspectral and multispectral remote sensing fusion method of multi-scale and global features provided by the invention;
FIG. 2 is an algorithm structure diagram of a hyperspectral and multispectral remote sensing fusion method of multi-scale and global features provided by the invention;
FIG. 3 shows fusion results of the present invention and different hyperspectral and multispectral fusion algorithms on the Hydie data set;
FIG. 4 compares the fusion quality on the validation set during training for the deep learning algorithm of the present invention and the compared algorithms.
Detailed description of the preferred embodiments
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
As shown in fig. 1, a hyperspectral and multispectral remote sensing fusion method of multi-scale and global features comprises the following steps:
s1: because no ideal reference image exists in reality, the invention follows the Wald protocol when making a training sample, firstly, the invention carries out Gaussian filtering and bilinear interpolation on the hyperspectral and multispectral remote sensing images to carry out downsampling with corresponding multiplying power, the downsampled images are taken as the hyperspectral image and the multispectral image used for training, and the original hyperspectral image is taken as the reference image to generate a data set used by a training network;
s2: training a hyperspectral and multispectral remote sensing image fusion network with multi-scale and global context characteristics by adopting an Adam optimization algorithm to obtain a trained network model;
as shown in fig. 2, the network is a dual-flow input network, and the network includes a spectrum holding branch and a detail injection branch, wherein the spectrum holding branch performs spatial up-sampling on the hyperspectral image, and adaptively samples the hyperspectral image to the same spatial resolution as the multispectral image; the detail injection branch comprises four sub-networks of initialization, feature extraction, feature fusion and detail injection, the initialization sub-network is composed of two parallel 3 x 3 convolutions, the feature extraction network is composed of two parallel residual error multi-scale convolution modules, the feature fusion network firstly adds the features of the feature extraction network and then inputs the features into a residual error multi-scale convolution module (RMC block) and a global context module (GC block) for feature fusion, the detail injection network firstly uses the 3 x 3 convolution to reduce the dimension of the fused features and then injects the fused features into the up-sampled hyperspectral image to obtain the hyperspectral image with high spatial resolution;
s3: the hyper-spectral and multi-spectral fusion network with multi-scale and global context characteristics is trained by using an Adam optimization algorithm, a loss function combining content, spectrum and edge loss is adopted for training in the training process to obtain a fully-trained convolutional neural network model, and when the Adam optimization algorithm is adopted for training the network, the loss function is as follows:
$$L = L_{content} + \lambda L_{spectral} + \mu L_{Edge}$$

where $\lambda$ and $\mu$ are weight coefficients, $L_{content}$ is the content loss, $L_{spectral}$ is the spectral loss, and $L_{Edge}$ is the edge loss; the expression of the content loss is:

$$L_{content} = \frac{1}{n} \sum_{i=1}^{n} \left| \mathrm{Fusion}_i - \mathrm{Ref}_i \right|$$

where $\mathrm{Fusion}_i$ and $\mathrm{Ref}_i$ are the pixel values of the fused image and the reference image at index i, and n is the total number of pixels;
the expression of the spectral loss is:

$$L_{spectral} = \frac{1}{HW} \sum_{i=1}^{H} \sum_{j=1}^{W} \arccos\left( \frac{\left\langle v_F^{(i,j)}, v_R^{(i,j)} \right\rangle}{\left\| v_F^{(i,j)} \right\| \left\| v_R^{(i,j)} \right\|} \right)$$

where $v_F^{(i,j)}$ and $v_R^{(i,j)}$ are the spectral vectors (of dimension B, the number of bands) of the fused image and the reference image at pixel (i, j); the expression of the edge loss is:

$$L_{Edge} = \frac{1}{n} \sum_{i=1}^{n} \left| \mathrm{LoG}(\mathrm{Fusion})_i - \mathrm{LoG}(\mathrm{Ref})_i \right|$$

where LoG is the Laplacian-of-Gaussian operator;
s4: and inputting the hyperspectral image and the multispectral image to be fused into the trained network to obtain the fused hyperspectral image with high spatial resolution.
In this embodiment, fig. 2 shows the hyperspectral and multispectral remote sensing image fusion network based on multi-scale and global context features, which takes the images to be fused as input and outputs the fused image.
In order to evaluate the performance of the invention, a data set of the Hydie satellite is selected as the experimental object, and the method is compared with currently popular hyperspectral and multispectral image fusion algorithms: CNMF and GSA are physical-model-driven algorithms, UDALN is an unsupervised deep network algorithm, and SSRNET, TFNet, ResTFNet and MSDCN are supervised deep learning algorithms. The experimental results are shown in FIG. 3: the first row shows the fusion results of the different algorithms, with REF denoting the reference image, and the second row shows the SAM heat maps, where a lighter color indicates a better fusion effect; visually, the fusion result of the invention is the best.
Fig. 4 compares the fusion quality of the networks on the validation set during training for the deep learning algorithm of the invention and the compared algorithms; curve a in the figure is the invention. A larger PSNR index indicates better fusion quality, while smaller values of the other three indexes indicate better fusion quality. The figure shows that the method reaches better fusion quality within a small number of iterations, so its learning capability is higher than that of the compared popular fusion algorithms.
The quantitative evaluation indexes of the experiment are shown in Table 1; the invention performs best on all four indexes.
TABLE 1 comparison of fusion effects of different algorithms
[Table 1 appears only as an image in the original publication.]
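For reference, the PSNR index mentioned above can be computed as follows (one common convention; the patent does not specify its exact variant, nor does it name the other three indexes in the text):

```python
import torch

def psnr(fused, ref, max_val=1.0):
    """Peak signal-to-noise ratio in dB over the whole image cube."""
    mse = torch.mean((fused - ref) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```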
In order to evaluate the rationality of the new loss function used in the invention, the proposed loss function was compared with the commonly used L1 and L2 loss functions and with different combinations of its own components on the Hyperion sensor data set. Table 2 gives the quantitative fusion evaluation under the different loss-function constraints. The proposed loss function performs best, and each of its components effectively enhances the fusion quality.
TABLE 2 fusion quality of the invention under different loss functions
[Table 2 appears only as an image in the original publication.]
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements shown in the above description and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (4)

1. A hyperspectral and multispectral remote sensing fusion method of multi-scale and global features is characterized by comprising the following steps:
s1: because no fusion image exists in reality, the sample construction follows the Wald protocol, the hyperspectral image is used as a label image, and the hyperspectral image and the multispectral image are respectively subjected to spatial and spectral downsampling to generate a data set required by network parameter adjustment;
s2: under the constraint of a detail injection frame, the network comprises a spectrum holding branch and a detail injection branch, and a convolutional neural network is constructed by combining a residual error multi-scale convolution module and a global context modeling module;
wherein the injection framework expression is:
$$F_k = \mathrm{upsample}(\mathrm{LR})_k + g_k \cdot \mathrm{extract}(\mathrm{HR})$$

where F is the fused image, k is the band index of the low-spatial-resolution hyperspectral image LR, upsample denotes an upsampling operation based on bilinear interpolation, $g_k$ is the injection coefficient, and extract(HR) is the detail extraction operation that extracts the spatial detail information of the multispectral image HR;
s3: training a hyperspectral and multispectral fusion network with multi-scale and global context characteristics by using an Adam optimization algorithm, and training by using a loss function combining content, spectrum and edge loss in a training process to obtain a fully-trained convolutional neural network model;
s4: and inputting the multispectral image to be fused and the hyperspectral image into the trained convolutional neural network model in the S3 to obtain a remote sensing image with high spatial resolution and high spectral resolution.
2. The hyperspectral and multispectral remote sensing fusion method of multi-scale and global features according to claim 1, wherein the step S1 specifically comprises:
according to the Wald protocol, filtering the hyperspectral image and the multispectral image by adopting Gaussian filtering, then carrying out corresponding multiple downsampling on the hyperspectral image and the multispectral image by using a bilinear interpolation method to obtain a simulated low-resolution input hyperspectral image and a multispectral image, and taking the original hyperspectral image as a reference image.
3. The hyperspectral and multispectral remote sensing fusion method of multi-scale and global features according to claim 1, wherein the step S2 specifically comprises: the spectral preservation branch performs spatial up-sampling on the hyperspectral image, adaptively sampling it to the same spatial resolution as the multispectral image, a process corresponding to upsample(LR) in the injection framework; the detail injection branch comprises four sub-networks, namely initialization, feature extraction, feature fusion and spatial detail injection, which extract the spatial injection component $g_k \cdot \mathrm{extract}(\mathrm{HR})$ from the multispectral image and the hyperspectral image.
The initialization sub-network extracts shallow features of the hyperspectral image and the multispectral image with parallel 3 × 3 convolutions and maps both to the same feature dimension to facilitate subsequent feature extraction; it is specifically expressed as:

$$x = f_{size(3)}\left(\mathrm{Bilinear}_r(\mathrm{HSI})\right), \qquad y = f_{size(3)}(\mathrm{MSI})$$
In the formula, HSI is the hyperspectral image, MSI is the multispectral image, x and y are the outputs of the initialization module, f is the convolution operator, size(k) denotes a k × k convolution kernel, Bilinear is bilinear interpolation, and r is the ratio of the spatial resolutions of the MSI and the HSI; the feature extraction sub-network adopts a residual multi-scale convolution module with two branches to extract features of different receptive fields from the hyperspectral image and the multispectral image, with the expression:
$$F_x = \mathrm{RMSC}(x), \qquad F_y = \mathrm{RMSC}(y)$$

where RMSC is the residual multi-scale convolution operator, $F_x$ denotes the extracted HSI features, $F_y$ denotes the extracted MSI features, and Concat is the feature-map concatenation operation used within the module; the feature fusion sub-network fuses the extracted hyperspectral and multispectral features by adding corresponding elements and feeds the result through the residual multi-scale convolution module and the global context module in sequence, with the expression:

$$F_{xy} = \mathrm{RMSC}(F_x + F_y), \qquad F = \mathrm{GC}(F_{xy})$$

where $F_{xy}$ is the output fused feature, GC denotes feature extraction with the GC Block, and $F$ is the feature output by the GC Block; the spatial detail injection sub-network reduces the dimensionality of the obtained features through a 3 × 3 convolution to obtain the spatial detail residual and injects it into the up-sampled hyperspectral image generated by the spectral preservation branch to generate the HR-HSI, with the expression:

$$\mathrm{Fusion} = f_{size(3)}(F) + \mathrm{Bilinear}_r(\mathrm{HSI})$$

where Fusion is the fused image.
4. The hyperspectral and multispectral remote sensing fusion method of multi-scale and global features according to claim 1, wherein the step S3 specifically comprises: when an Adam optimization algorithm is adopted to train a hyperspectral and multispectral fusion network with multi-scale and global context characteristics, a loss function is selected as a linear combination of content, spectrum and edge loss, and the specific expression is as follows:
$$L = L_{content} + \lambda L_{spectral} + \mu L_{Edge}$$

where $\lambda$ and $\mu$ are weight coefficients, $L_{content}$ is the content loss, $L_{spectral}$ is the spectral loss, and $L_{Edge}$ is the edge loss; the expression of the content loss is:

$$L_{content} = \frac{1}{n} \sum_{i=1}^{n} \left| \mathrm{Fusion}_i - \mathrm{Ref}_i \right|$$

where $\mathrm{Fusion}_i$ and $\mathrm{Ref}_i$ are the pixel values of the fused image and the reference image at index i, and n is the total number of pixels;
the expression of the spectral loss is:

$$L_{spectral} = \frac{1}{HW} \sum_{i=1}^{H} \sum_{j=1}^{W} \arccos\left( \frac{\left\langle v_F^{(i,j)}, v_R^{(i,j)} \right\rangle}{\left\| v_F^{(i,j)} \right\| \left\| v_R^{(i,j)} \right\|} \right)$$

where $v_F^{(i,j)}$ and $v_R^{(i,j)}$ are the spectral vectors (of dimension B, the number of bands) of the fused image and the reference image at pixel (i, j); the expression of the edge loss is:

$$L_{Edge} = \frac{1}{n} \sum_{i=1}^{n} \left| \mathrm{LoG}(\mathrm{Fusion})_i - \mathrm{LoG}(\mathrm{Ref})_i \right|$$

where LoG is the Laplacian-of-Gaussian operator.
CN202310193616.4A 2023-03-03 2023-03-03 Hyperspectral and multispectral remote sensing fusion method for multiscale and global features Active CN115861083B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310193616.4A CN115861083B (en) 2023-03-03 2023-03-03 Hyperspectral and multispectral remote sensing fusion method for multiscale and global features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310193616.4A CN115861083B (en) 2023-03-03 2023-03-03 Hyperspectral and multispectral remote sensing fusion method for multiscale and global features

Publications (2)

Publication Number Publication Date
CN115861083A true CN115861083A (en) 2023-03-28
CN115861083B CN115861083B (en) 2023-05-16

Family

ID=85659786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310193616.4A Active CN115861083B (en) 2023-03-03 2023-03-03 Hyperspectral and multispectral remote sensing fusion method for multiscale and global features

Country Status (1)

Country Link
CN (1) CN115861083B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110533620A (en) * 2019-07-19 2019-12-03 西安电子科技大学 The EO-1 hyperion and panchromatic image fusion method of space characteristics are extracted based on AAE
CN113129247A (en) * 2021-04-21 2021-07-16 重庆邮电大学 Remote sensing image fusion method and medium based on self-adaptive multi-scale residual convolution
CN113327218A (en) * 2021-06-10 2021-08-31 东华大学 Hyperspectral and full-color image fusion method based on cascade network
WO2023000505A1 (en) * 2021-07-19 2023-01-26 海南大学 Two-order lightweight network panchromatic sharpening method combining guided filtering and nsct
CN114119444A (en) * 2021-11-29 2022-03-01 武汉大学 Multi-source remote sensing image fusion method based on deep neural network
CN115512192A (en) * 2022-08-16 2022-12-23 南京审计大学 Multispectral and hyperspectral image fusion method based on cross-scale octave convolution network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
F. A. KRUSE et al.: "Predictive subpixel spatial/spectral modeling using fused HSI and MSI data", Proceedings of SPIE - The International Society for Optical Engineering *
Zhu Xiangdong: "Research on decomposition-based hyperspectral and multispectral image fusion algorithms", China Master's Theses Full-text Database, Engineering Science and Technology II *
Zhu Chunyu: "Change detection in high-resolution remote sensing images based on convolutional neural networks", Basic Sciences *
Du Chenguang et al.: "Semi-supervised convolutional neural network remote sensing image fusion", Journal of Electronic Measurement and Instrumentation *
Hu Jianwen et al.: "A review of deep-learning-based spatial-spectral remote sensing image fusion", Remote Sensing for Natural Resources *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117314757A (en) * 2023-11-30 2023-12-29 湖南大学 Space spectrum frequency multi-domain fused hyperspectral computed imaging method, system and medium
CN117314757B (en) * 2023-11-30 2024-02-09 湖南大学 Space spectrum frequency multi-domain fused hyperspectral computed imaging method, system and medium
CN117911830A (en) * 2024-03-20 2024-04-19 安徽大学 Global interaction hyperspectral multi-spectral cross-modal fusion method for spectrum fidelity
CN117911830B (en) * 2024-03-20 2024-05-28 安徽大学 Global interaction hyperspectral multi-spectral cross-modal fusion method for spectrum fidelity
CN118229554A (en) * 2024-05-22 2024-06-21 西安电子科技大学杭州研究院 Implicit transducer high-multispectral remote sensing fusion method

Also Published As

Publication number Publication date
CN115861083B (en) 2023-05-16

Similar Documents

Publication Publication Date Title
CN110533620B (en) Hyperspectral and full-color image fusion method based on AAE extraction spatial features
Deng et al. Deep convolutional neural network for multi-modal image restoration and fusion
CN110415199B (en) Multispectral remote sensing image fusion method and device based on residual learning
CN115861083A (en) Hyperspectral and multispectral remote sensing fusion method for multi-scale and global features
CN114119444B (en) Multi-source remote sensing image fusion method based on deep neural network
CN109102469B (en) Remote sensing image panchromatic sharpening method based on convolutional neural network
CN112184554B (en) Remote sensing image fusion method based on residual mixed expansion convolution
CN111080567A (en) Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network
Hu et al. Pan-sharpening via multiscale dynamic convolutional neural network
CN110428387A (en) EO-1 hyperion and panchromatic image fusion method based on deep learning and matrix decomposition
CN109509160A (en) Hierarchical remote sensing image fusion method utilizing layer-by-layer iteration super-resolution
CN109636769A (en) EO-1 hyperion and Multispectral Image Fusion Methods based on the intensive residual error network of two-way
CN112150354B (en) Single image super-resolution method combining contour enhancement and denoising statistical prior
CN109859110A (en) The panchromatic sharpening method of high spectrum image of control convolutional neural networks is tieed up based on spectrum
CN111951164B (en) Image super-resolution reconstruction network structure and image reconstruction effect analysis method
CN112488978A (en) Multi-spectral image fusion imaging method and system based on fuzzy kernel estimation
CN112785539B (en) Multi-focus image fusion method based on image adaptive decomposition and parameter adaptive
Guo et al. MDFN: Mask deep fusion network for visible and infrared image fusion without reference ground-truth
Gao et al. Improving the performance of infrared and visible image fusion based on latent low-rank representation nested with rolling guided image filtering
CN115760814A (en) Remote sensing image fusion method and system based on double-coupling deep neural network
Pan et al. FDPPGAN: remote sensing image fusion based on deep perceptual patchGAN
CN112686830A (en) Super-resolution method of single depth map based on image decomposition
CN115311184A (en) Remote sensing image fusion method and system based on semi-supervised deep neural network
CN117576483B (en) Multisource data fusion ground object classification method based on multiscale convolution self-encoder
CN113887619A (en) Knowledge-guided remote sensing image fusion method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant