CN112699929B - Deep network multi-source spectral image fusion method for multi-supervision recursive learning - Google Patents

Deep network multi-source spectral image fusion method for multi-supervision recursive learning Download PDF

Info

Publication number
CN112699929B
CN112699929B CN202011568917.3A CN202011568917A CN112699929B CN 112699929 B CN112699929 B CN 112699929B CN 202011568917 A CN202011568917 A CN 202011568917A CN 112699929 B CN112699929 B CN 112699929B
Authority
CN
China
Prior art keywords
network
fusion
image
recursive
super
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011568917.3A
Other languages
Chinese (zh)
Other versions
CN112699929A (en
Inventor
肖亮
陆育达
刘鹏飞
杨劲翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202011568917.3A priority Critical patent/CN112699929B/en
Publication of CN112699929A publication Critical patent/CN112699929A/en
Application granted granted Critical
Publication of CN112699929B publication Critical patent/CN112699929B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Abstract

The invention discloses a depth network multi-source spectral image fusion method for multi-supervised recursive learning, which comprises the following steps: adopting recursive learning to form recursive residual sub-networks, and adding the output and the input of each recursive residual sub-network to be used as the input of the next recursive residual sub-network; the network consists of a pre-super-resolution module and a fusion module, the pre-super-resolution module realizes the automatic learning of the up-sampling interpolation, and the pre-super-resolution image and the multispectral image are spliced to be input as the fusion module; adopting a stacking method of a plurality of recursive residual sub-networks to establish a pre-super-resolution module and a fusion module; adopting a multi-supervised learning mode, and forming intermediate fusion images of all levels by splicing and convoluting the characteristics of the low layer, the intermediate layer and the high layer; and taking the L1 norm and the spectrum angle as two measures of a loss function, establishing a joint loss function by the intermediate fusion images and the real images at all levels, and performing end-to-end network training. Simulation experiment results prove the effectiveness of the invention on multi-far spectral image fusion.

Description

Deep network multi-source spectral image fusion method for multi-supervised recursive learning
Technical Field
The invention relates to the field of hyperspectral image fusion, in particular to a depth network multisource spectral image fusion method for multi-supervised recursive learning.
Background
In recent years, deep learning has become a research focus in the field of artificial intelligence, and has gained wide attention in the theoretical and industrial fields, and has gained a great deal of applications in the fields of pattern recognition, computer vision, natural language processing, and the like. The deep learning model is generally a neural network with a multilayer structure, multi-level feature extraction is carried out on data through multiple nonlinear transformation of the multilayer neural network, low-level to high-level hierarchical features are automatically learned, and the abstraction degree of the features is enhanced along with the increase of the number of layers. Compared with the traditional shallow machine learning model, the deep learning model has more comprehensive characteristics and overcomes the dependence of artificial design characteristics on personal experience.
Deep learning is also widely used in the field of hyperspectral image fusion, and a large number of hyperspectral image fusion models based on convolutional neural networks are proposed, such as Masi G, cozzolino D, verdoliva L, et al. The PNN network is only of a three-layer structure, and the performance cannot be improved by simply increasing the number of layers, because the model is difficult to be effectively trained when the number of network layers is increased. Residual concatenation can solve the problem of difficult training when the network is too deep, such as gradient explosion and gradient disappearance. At present, researchers have proposed many network models based on residual connection, such as [ Yuan Q, wei Y, meng X, et al. A multiscale and multicast connectivity neural network for Remote Sensing image pan-sharing [ J ]. IEEE Journal of Selected Topics in Applied Earth updates and Remote Sensing,2018,11 (3): 978-989 ]. Most of the existing network models stack a low-resolution hyperspectral image and a corresponding auxiliary source image as network input, and realize image fusion in subsequent feature extraction and mapping. Recently, a dual-channel neural network model [ Shao Z, cai J. Removal sensing image fusion with deep connected neural network [ J ]. IEEE journal of selected targets in applied targets continuously and remotes sensing,2018,11 (5): 1656-1669 ] is proposed by scholars, the method extracts spectral features and spatial features from low-resolution hyperspectral images and auxiliary source images respectively, and then the extracted features are stacked to complete feature fusion through a convolutional layer, so that a good effect is achieved.
However, most of the models are shallow in depth, and the powerful feature extraction capability and nonlinear representation capability of the deep network structure cannot be fully utilized.
Disclosure of Invention
The invention aims to provide a depth network multi-source spectral image fusion method for multi-supervised recursive learning.
The technical solution for realizing the purpose of the invention is as follows: a depth network multi-source spectral image fusion method for multi-supervised recursive learning comprises the following steps:
step one, adopting recursive learning to form recursive residual sub-networks, and adding the output and the input of each recursive residual sub-network to be used as the input of the next recursive residual sub-network;
secondly, the whole network model is composed of a pre-super-resolution module and a fusion module, the pre-super-resolution module realizes automatic learning of up-sampling interpolation, and a pre-super-resolution image and a multispectral image are spliced to be input as the fusion module;
thirdly, establishing a pre-super-resolution module and a fusion module by adopting a plurality of recursive residual sub-network stacking methods;
fourthly, adopting a multi-supervised learning mode, and forming intermediate fusion images of all levels by splicing and convoluting the characteristics of the low level, the intermediate level and the high level;
and fifthly, taking the L1 norm and the spectrum angle as two measures of the loss function, establishing a combined loss function by the intermediate fusion images and the real images at all levels, and performing end-to-end network training.
Compared with the prior art, the invention has the remarkable advantages that: (1) The recursive learning is used for constructing the network, so that the problem of overlarge network parameter scale when the deep network is applied to the field of hyperspectral images is solved, and a relatively light deep learning network can be formed; (2) The image up-sampling automatic learning is realized through the pre-super-resolution module, the spatial details of the auxiliary source image can be better fused, and the spectral distortion caused by the traditional artificial interpolation (such as bicubic interpolation) is reduced; (3) Dense connection is used in the fusion stage, and low, medium and high-level feature information is effectively utilized to realize feature multiplexing; (4) The multi-supervision end-to-end training network is used, the problem that the low-level network cannot be effectively trained when the network depth is too large is solved, meanwhile, the middle fusion image of each level is connected with the next level, the features are extracted step by step for fidelity constraint, and the multi-scale spectral feature fidelity capability is effectively enhanced; (5) The method can be applied to fusion and resolution enhancement of multispectral and hyperspectral images, can also be applied to fusion and enhancement of panchromatic and hyperspectral images, and has wide application value in multisource remote sensing fusion, ground feature classification and identification and high-resolution environment monitoring.
The present invention is described in further detail below with reference to the attached drawing figures.
Drawings
FIG. 1 is a block diagram of the process of the present invention.
Fig. 2 is a network structure diagram of a simulation experiment.
FIG. 3 is a graph of the test results of the present invention on a Cave dataset.
FIG. 4 is a graph of the results of the testing of the present invention on a Harvard data set.
Detailed Description
The invention provides a deep network multi-source spectral image fusion method for multi-supervision recursive learning. The method repeatedly uses one residual block to form a recursive residual sub-network, avoids the difficulty in training caused by introducing excessive parameters, and further reduces the performance. Meanwhile, the method realizes automatic learning of image up-sampling through the pre-super-resolution module, can better fuse the space details of the auxiliary source image and reduce the spectral distortion caused by the traditional artificial interpolation (such as bicubic interpolation). In addition, the method trains the network in a multi-level supervision mode, and dense connection is adopted in the fusion stage, so that the low-level and middle-level features can be effectively trained, and meanwhile, the final fusion image can be formed together with the high-level features. The method is an end-to-end multi-supervision neural network model, the input and output forms are simple in structure, pre-processing and post-processing procedures are not needed, and simulation experiments on Cave and Harvard data sets show that the model is high in robustness and can be widely applied to the engineering field. The following detailed description of the implementation of the present invention, with reference to fig. 1, includes the following steps:
in the first step, recursive learning is adopted, namely, the multi-layer networks share one residual block to form recursive residual sub-networks, and the output and the input of each recursive residual sub-network are added to be used as the input of the next recursive residual sub-network. Recording the input hyperspectral image as X ∈ Rh×w×CH, w, C represent the height, width and number of channels of X, respectively. The multispectral image is Y ∈ RH×W×cH, W, c respectively represent height, width and channel number of Y. The ith residual block input is ResiOutput is as
Figure BDA0002861925710000031
I is more than or equal to 1 and less than or equal to n, then:
Figure BDA0002861925710000032
Figure BDA0002861925710000033
Figure BDA0002861925710000034
Figure BDA0002861925710000035
wherein m and n respectively represent residual block number and total residual block number of the super-resolution module, sigma represents activation function and operator
Figure BDA0002861925710000036
Which represents the operation of a convolution with the original,
Figure BDA0002861925710000037
represents the convolution kernel parameters, k is the kernel size, u represents the number of input and output channels, bi,1∈R1×u、bi,2∈R1×uRepresenting an offset term, Fi(. Cndot.) denotes the ith residual block. Let the ith recursive subnetwork be Gi(. To) with input as Reci and output as
Figure BDA0002861925710000038
Then there are:
Figure BDA0002861925710000039
Figure BDA00028619257100000310
Figure BDA0002861925710000041
Figure BDA0002861925710000042
secondly, the network is composed of a pre-super-resolution module and a fusion module, the pre-super-resolution module realizes the automatic learning of the up-sampling interpolation, and the traditional manual interpolation is reducedSpectral distortion caused by values (such as bicubic interpolation), and the pre-super-resolution image and the multispectral image are spliced to be input as a fusion module. The pre-super-resolution module is P (-), the fusion module is Q (-), and the pre-super-resolution image is Zpre∈RH×W×cWith the fusion module input being Zin∈RH×W×(C+c)Then, there are:
Zpre=P(X)
Zin=[Zpre,Y]
wherein [ \8230 ] represents the stitching operation, and X and Y represent the hyperspectral and multispectral images, respectively.
And thirdly, establishing a pre-super-resolution module and a fusion module by adopting a plurality of recursive residual sub-network stacking methods. The output characteristic of the ith recursive sub-network of the pre-super-resolution module is recorded as Fei(x)∈Rh×w×uI is more than or equal to 1 and less than or equal to m, and the output characteristic of the jth recursion sub-network of the fusion module is Fej(y)∈RH×W×u,m+1≤j≤n,x=X,y=ZinRepresenting module inputs, then:
Figure BDA0002861925710000043
Figure BDA0002861925710000044
Figure BDA0002861925710000045
Q(y)=Rn-m(Fen(y),U)
Figure BDA0002861925710000046
wherein, G1,…,Gm,Gm+1,…,GnFor recursive sub-networks, operators
Figure BDA0002861925710000049
Representing a convolution operation, sigma is an activation function, u is the number of recursive sub-network output channels,
Figure BDA0002861925710000047
Figure BDA0002861925710000048
as convolution kernel parameters, bl∈R1×C、b1∈R1×u、b2∈R1×C、b3∈R1×uIs a bias term, Rl(x, U) denotes a one-dimensional convolution operation.
And fourthly, forming intermediate fusion images of all levels by splicing and convoluting the characteristics of all levels of the low-level, the intermediate-level and the high-level in a multi-supervised learning mode. The output characteristic of the ith recursion sub-network in the fusion stage is recorded as Fei∈RH×W×u(m +1 is not less than i and not more than n), and the corresponding output intermediate fusion image is
Figure BDA0002861925710000051
Then there are:
Figure BDA0002861925710000052
wherein, [ \8230]Indicating a splicing operation, ZpreRepresenting a pre-super-resolution image.
And fifthly, taking the L1 norm and the spectrum angle as two measures of the loss function, namely simultaneously considering the spatial information and the spectrum information, establishing a joint loss function by fusing images in the middle of each level and training the high-resolution hyperspectral images, and performing end-to-end network training. Recording the network output image as Zpred∈RH×W×CCorresponding to a real image of Ztrue∈RH×W×CThe Loss function is denoted as Loss (Z)pred,Ztrue) Then, there are:
Loss(Zpred,Ztrue)=L1Loss(Zpred,Ztrue)+SamLoss(Zpred,Ztrue)
Figure BDA0002861925710000053
Figure BDA0002861925710000054
wherein, L1Loss and SamLoss respectively represent L1Loss function and spectral angle Loss function,
Figure BDA0002861925710000055
representing the spectral vectors of the predicted and real images at positions (i, j), respectively.
If the total Loss function is Loss _ total, then:
Figure BDA0002861925710000056
Figure BDA0002861925710000057
wherein alpha epsilon (0, 1) is an equilibrium parameter.
The network has the characteristics of few parameters and deep layer number, can learn deep features step by step, realizes combined fidelity constraint of learning features at all levels, and effectively enhances the fidelity capability of multi-scale spectral features. The method can be effectively applied to the fusion and resolution enhancement of high-resolution multispectral and low-resolution hyperspectral images, can also be applied to the fusion and enhancement of panchromatic and hyperspectral images, and has wide application value in multisource remote sensing fusion, ground feature classification identification and high-resolution environmental monitoring.
The effect of the present invention can be further illustrated by the following simulation experiments:
simulation conditions
The simulation experiment adopts two groups of hyperspectral data sets, namely a Cave data set and a Harvard data set. The Cave data set comprises 32 indoor hyperspectral images, each image comprises 31 wave bands, the wavelength range is from 400nm to 700nm, and the image resolution is 512 x 512. The Harvard data set contains 50 images of the interior and exterior of a room under daylight conditions, with 31 wavelength bands, a wavelength range of 420nm to 720nm, and an image resolution of 1392 × 1040. For the Cave data set, the first 20 images were selected as the training set, and the last 12 images were selected as the test set. For the Harvard dataset, the first 30 images were selected as the training set and the last 20 images were selected as the test set. Since there are no real images on the dataset, the Wald protocol (r.carra, l.santurri, b.aiazzi, and s.baronti, "Full-scale assessment of print third through multimedia adaptation of multiscale measures," IEEE Transactions on geometry and Remote Sensing vol, 53, no.12, pp.6344-6355, 2015.) is used to generate training data, i.e. for each image, 5 × 5 gaussian kernels with mean 0 and standard deviation 2 are used to filter and then 8 times down-sampled to generate a low spatial resolution hyperspectral image. And generating a high-spatial-resolution multispectral image according to the IKONOS camera spectral response, wherein the original image is used as a real image. The training image blocks are 64 × 64 in size, and the image cropping interval is 16. The simulation experiments are all completed by adopting Python3.6+ Pythroch under a Windows10 operating system, and the network architecture used in the experiments is shown in FIG. 2.
Analysis of simulation experiment results
Fig. 3 and 4 are simulation experimental results of the method of the present invention on Cave and Harvard data sets, respectively, with a wave band =20 and a ten-fold error magnification. The (a) represents a real image, and the (b), (c), (d) and (e) respectively represent RSIFNN, PNN and MSDCNN and a result error graph of the method, so that the result is more visual, and the error is amplified by 10 times. Intuitively, the method has smaller error of the image fusion result. To further quantify the model performance, PSNR (peak signal-to-noise ratio), SAM (spectral angle), SSIM (structural similarity), ERGAS (relative error of overall dimension), RMSE (root mean square error) were used as image evaluation indices, as shown in tables 1 and 2.
TABLE 1 cave data set evaluation index results
Figure BDA0002861925710000061
Figure BDA0002861925710000071
Table 2 harvard data set evaluation index results
Figure BDA0002861925710000072
The result shows that each index of the method greatly exceeds other three classical models, and the effectiveness of the method is shown.

Claims (6)

1. A depth network multi-source spectral image fusion method for multi-supervision recursive learning is characterized by comprising the following steps:
step one, adopting recursive learning to form recursive residual sub-networks, and adding the output and the input of each recursive residual sub-network to be used as the input of the next recursive residual sub-network;
secondly, the whole network model is composed of a pre-super-resolution module and a fusion module, the pre-super-resolution module realizes automatic learning of up-sampling interpolation, and a pre-super-resolution image and a multispectral image are spliced to be input as the fusion module;
thirdly, establishing a pre-super-resolution module and a fusion module by adopting a plurality of recursive residual sub-network stacking methods;
fourthly, adopting a multi-supervised learning mode, and forming intermediate fusion images of all levels by splicing and convoluting the characteristics of the low level, the intermediate level and the high level;
and fifthly, taking the L1 norm and the spectrum angle as two measures of the loss function, establishing a combined loss function by the intermediate fusion images and the real images at all levels, and performing end-to-end network training.
2. The method for multi-source spectral image fusion in deep network based on multi-supervised recursive learning as claimed in claim 1, wherein in the first step, recursive learning is adopted, i.e. the multi-layer network shares one residual block to form a recursive residual sub-networkThe output and input of each recursive residual subnetwork are added as the input of the next recursive residual subnetwork; recording the input hyperspectral image as X ∈ Rh×w×CH, w and C respectively represent the height, width and channel number of X; the multispectral image is Y ∈ RH×W×cH, W and c respectively represent the height, width and channel number of Y; the ith residual block input is ResiOutput is as
Figure FDA0003828760970000011
Then there are:
Resi∈Rh×w×u
Figure FDA0003828760970000012
Resi∈RH×W×u
Figure FDA0003828760970000013
Figure FDA0003828760970000014
Figure FDA0003828760970000015
wherein m and n respectively represent residual block number and total residual block number of the super-resolution module, sigma represents activation function, and operator
Figure FDA0003828760970000016
Which represents the operation of a convolution with the original,
Figure FDA0003828760970000017
represents the convolution kernel parameters, k is the kernel size, u represents the number of input and output channels, bi,1∈R1×u、bi,2∈R1×uRepresents a bias term, Fi() represents the ith residual block; let the ith recursive subnetwork be Gi(. To) input is ReciOutput is
Figure FDA0003828760970000018
Then there are:
Reci∈Rh×w×u
Figure FDA0003828760970000019
Reci∈RH×W×u
Figure FDA00038287609700000110
Figure FDA0003828760970000021
Figure FDA0003828760970000022
3. the deep network multisource spectral image fusion method of unsupervised recursive learning according to claim 1, wherein in the second step, the network is composed of a pre-super-resolution module and a fusion module, the pre-super-resolution module realizes automatic learning of up-sampling interpolation, and the pre-super-resolution image and the multispectral image are spliced as input of the fusion module; the pre-super-resolution module is P (-), the fusion module is Q (-), and the pre-super-resolution image is Zpre∈RH×W×cWith the fusion module input being Zin∈RH×W×(C+c)Then, there are:
Zpre=P(X)
Zin=[Zpre,Y]
wherein [ \8230 ] represents splicing operation, and X and Y represent hyperspectral and multispectral images respectively.
4. The deep-network multi-source spectrum for multi-supervised recursive learning of claim 1The image fusion method is characterized in that in the third step, a plurality of recursive residual sub-network stacking methods are adopted to establish a pre-super-resolution module and a fusion module; the output characteristic of the ith recursive sub-network of the pre-super-resolution module is recorded as Fei(x)∈Rh×w×uI is more than or equal to 1 and less than or equal to m, and the output characteristic of the jth recursion sub-network of the fusion module is Fej(y)∈RH×W×u,m+1≤j≤n,x=X,y=ZinRepresenting module inputs, then:
Figure FDA0003828760970000023
Figure FDA0003828760970000024
Figure FDA0003828760970000025
Q(y)=Rn-m(Fen(y),U)
Figure FDA0003828760970000026
wherein G is1,…,Gm,Gm+1,…,GnFor recursing subnetworks, operators
Figure FDA0003828760970000031
Represents convolution operation, sigma is an activation function, u is the input and output channel number of the recursion sub-network,
Figure FDA0003828760970000032
W1 p∈Rk×k×u
Figure FDA0003828760970000033
is a rollParameter of product nucleus, bl∈R1×C、b1∈R1×u、b2∈R1×C、b3∈R1×uIs a bias term, Rl(x, U) denotes a convolution operation.
5. The multi-supervised recursive learning depth network multi-source spectral image fusion method according to claim 1, characterized in that in the fourth step, a multi-supervised learning mode is adopted, and each level of characteristics of a low layer, a middle layer and a high layer are spliced and convolutional-layer to form each level of middle fusion image; the output characteristic of the ith recursion sub-network in the fusion stage is recorded as Fei∈RH×W×uM +1 is not less than i and not more than n, and the corresponding output intermediate fusion image is
Figure FDA0003828760970000034
Then there are:
Figure FDA0003828760970000035
wherein, [ \8230]Indicating a splicing operation, ZpreRepresenting a pre-super-resolution image.
6. The deep network multi-source spectral image fusion method for multi-supervised recursive learning according to claim 1, characterized in that in the fifth step, L1 norm and spectral angle are taken as two measures of a loss function, and a joint loss function is established between the intermediate fusion images of each stage and the training high-resolution hyperspectral image to perform end-to-end network training; recording the network output image as Zpred∈RH×W×CCorresponding to a real image of Ztrue∈RH×W×CThe Loss function is denoted as Loss (Z)pred,Ztrue) Then, there are:
Loss(Zpred,Ztrue)=L1Loss(Zpred,Ztrue)+SamLoss(Zpred,Ztrue)
Figure FDA0003828760970000036
Figure FDA0003828760970000037
wherein, L1Loss and SamLoss respectively represent L1Loss function and spectral angle Loss function,
Figure FDA0003828760970000038
spectral vectors representing the predicted image and the real image at positions (i, j), respectively;
if the total Loss function is Loss _ total, then:
Figure FDA0003828760970000041
wherein alpha epsilon (0, 1) is an equilibrium parameter.
CN202011568917.3A 2020-12-25 2020-12-25 Deep network multi-source spectral image fusion method for multi-supervision recursive learning Active CN112699929B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011568917.3A CN112699929B (en) 2020-12-25 2020-12-25 Deep network multi-source spectral image fusion method for multi-supervision recursive learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011568917.3A CN112699929B (en) 2020-12-25 2020-12-25 Deep network multi-source spectral image fusion method for multi-supervision recursive learning

Publications (2)

Publication Number Publication Date
CN112699929A CN112699929A (en) 2021-04-23
CN112699929B true CN112699929B (en) 2022-11-01

Family

ID=75511053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011568917.3A Active CN112699929B (en) 2020-12-25 2020-12-25 Deep network multi-source spectral image fusion method for multi-supervision recursive learning

Country Status (1)

Country Link
CN (1) CN112699929B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0102529D0 (en) * 2001-01-31 2001-03-21 Thales Optronics Staines Ltd Improvements relating to thermal imaging cameras
CN110428387B (en) * 2018-11-16 2022-03-04 西安电子科技大学 Hyperspectral and full-color image fusion method based on deep learning and matrix decomposition
CN109509152B (en) * 2018-12-29 2022-12-20 大连海事大学 Image super-resolution reconstruction method for generating countermeasure network based on feature fusion

Also Published As

Publication number Publication date
CN112699929A (en) 2021-04-23

Similar Documents

Publication Publication Date Title
CN109272010B (en) Multi-scale remote sensing image fusion method based on convolutional neural network
Mo et al. Fake faces identification via convolutional neural network
CN108537731B (en) Image super-resolution reconstruction method based on compressed multi-scale feature fusion network
CN112184554B (en) Remote sensing image fusion method based on residual mixed expansion convolution
CN103093444B (en) Image super-resolution reconstruction method based on self-similarity and structural information constraint
CN109064396A (en) A kind of single image super resolution ratio reconstruction method based on depth ingredient learning network
CN110930342B (en) Depth map super-resolution reconstruction network construction method based on color map guidance
CN104751162A (en) Hyperspectral remote sensing data feature extraction method based on convolution neural network
CN113240683B (en) Attention mechanism-based lightweight semantic segmentation model construction method
CN113887645B (en) Remote sensing image fusion classification method based on joint attention twin network
Luo et al. Lattice network for lightweight image restoration
CN115018750B (en) Medium-wave infrared hyperspectral and multispectral image fusion method, system and medium
CN115496658A (en) Lightweight image super-resolution reconstruction method based on double attention mechanism
CN114119975A (en) Language-guided cross-modal instance segmentation method
CN113920043A (en) Double-current remote sensing image fusion method based on residual channel attention mechanism
CN112069853A (en) Two-dimensional bar code image super-resolution method based on deep learning
CN112699929B (en) Deep network multi-source spectral image fusion method for multi-supervision recursive learning
CN113689370A (en) Remote sensing image fusion method based on deep convolutional neural network
CN112149712B (en) Efficient hyperspectral remote sensing data compression and classification model construction method
CN116309227A (en) Remote sensing image fusion method based on residual error network and spatial attention mechanism
CN114581347B (en) Optical remote sensing spatial spectrum fusion method, device, equipment and medium without reference image
CN107358204B (en) Multispectral image classification method based on recoding and depth fusion convolutional network
CN115861749A (en) Remote sensing image fusion method based on window cross attention
Kong et al. GADA-SegNet: Gated attentive domain adaptation network for semantic segmentation of LiDAR point clouds
CN115496652A (en) Blind compressed image super-resolution reconstruction based on multi-scale channel pyramid residual attention

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant