CN113793261A - Spectrum reconstruction method based on 3D attention mechanism full-channel fusion network

Info

Publication number: CN113793261A
Application number: CN202110897604.0A
Authority: CN (China)
Prior art keywords: image, feature, layer, channel, output
Legal status: Withdrawn
Other languages: Chinese (zh)
Inventors: Sun Bangyong (孙帮勇), Yu Mengying (喻梦莹)
Current Assignee: Xi'an University of Technology
Original Assignee: Xi'an University of Technology
Priority date (filing date): 2021-08-05
Publication date: 2021-12-14
Application filed by Xi'an University of Technology
Priority to CN202110897604.0A
Publication of CN113793261A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/048 - Activation functions
    • G06N3/08 - Learning methods
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046 - Scaling of whole images or parts thereof using neural networks
    • G06T3/4053 - Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 - Finite element generation, e.g. wire-frame surface description, tesselation

Abstract

The invention discloses a spectrum reconstruction method based on a 3D attention mechanism full-channel fusion network, which comprises the following steps: step 1, constructing a shallow feature extraction module, which takes an original RGB image as input and outputs a feature image I1; step 2, constructing a hyperspectral feature generation module, which takes the feature image I1 obtained in step 1 as input and outputs a feature image I2; step 3, constructing a reconstruction module, which takes the feature image I2 as input and outputs a spectrally reconstructed hyperspectral image I6, the reconstruction module mainly restoring the feature image I2 extracted by the hyperspectral feature generation module to an image I6 of higher spectral resolution corresponding to the original RGB image; and step 4, optimizing the spectral reconstruction network to obtain the final network. The method effectively realizes end-to-end mapping from an RGB image to a hyperspectral image and adaptively learns the feature responses between and within channels, thereby enhancing the feature expression capability of the network.

Description

Spectrum reconstruction method based on 3D attention mechanism full-channel fusion network
Technical Field
The invention belongs to the technical field of hyperspectral image processing, and relates to a spectrum reconstruction method based on a 3D attention mechanism full-channel fusion network.
Background
A hyperspectral image is a three-dimensional data cube. Unlike a traditional RGB image, it contains the spatial and spectral characteristics of hundreds or even thousands of contiguous spectral bands, and these rich spectral details can be applied in many computer vision fields, such as face recognition, medical image processing, target tracking and anomaly detection. However, scanning-based hyperspectral acquisition systems often require long exposure times when capturing data, and this imaging approach, which obtains high spectral resolution by sacrificing temporal resolution, seriously hinders the application of hyperspectral images. To solve this problem, snapshot hyperspectral imaging devices based on compressed sensing have been developed; however, these systems are very complex in both hardware implementation and reconstruction algorithms, and are very expensive. Owing to the limitations of scanning and snapshot hyperspectral imaging systems, spectral reconstruction from RGB images has attracted extensive attention and research as an alternative, i.e. recovering, from a given RGB image, a hyperspectral image with the same spatial resolution and higher spectral resolution; this is also known as spectral super-resolution or spectral reconstruction.
Existing spectral reconstruction methods can be roughly divided into two categories: traditional methods and deep-learning-based methods. Traditional methods learn the three-to-many mapping from RGB images to hyperspectral images through sparse recovery, low-rank tensor recovery and shallow mapping models. However, such methods often depend on various kinds of prior information about the hyperspectral data, and their generalization ability is poor. In recent years, with the rapid development of deep learning, spectral reconstruction methods based on convolutional neural networks (CNNs) have been widely used. Although CNN-based methods make up for the shortcomings of traditional methods to some extent and significantly improve the accuracy of spectral reconstruction, some drawbacks remain. On the one hand, most existing CNN-based spectral reconstruction models treat all input feature information equally, ignoring that different feature information has different spectral resolution and contributes unequally to feature fusion, which limits the expressive ability of the CNN. On the other hand, the spatial attention mechanisms adopted by CNN-based spectral reconstruction models mainly focus on the scale information of the features and rarely attend to channel-dimension information, while the channel attention mechanisms that have been adopted ignore scale information and cannot attend to channel and spatial information simultaneously, which limits the reconstruction performance of spectral reconstruction algorithms to a certain extent.
Disclosure of Invention
The invention aims to provide a spectral reconstruction method based on a 3D (three-dimensional) attention mechanism full-channel fusion network, which solves two problems of prior-art spectral reconstruction algorithms: different feature information is treated as contributing equally to image reconstruction during fusion, and channel and spatial information cannot be attended to simultaneously.
The technical scheme adopted by the invention is a spectrum reconstruction method based on a 3D attention mechanism full-channel fusion network, specifically implemented according to the following steps:
step 1, constructing a shallow feature extraction module,
the input of the shallow feature extraction module is an original RGB image of size 512 × 482 × 3, and the output of the shallow feature extraction module is a feature image I1 of size 256 × 256 × 64;
step 2, constructing a hyperspectral feature generation module,
the input of the hyperspectral feature generation module is the feature image I1 obtained in step 1, of size 256 × 256 × 64; the output of the hyperspectral feature generation module is a feature image I2 in which different feature information has been fused by weight, also of size 256 × 256 × 64;
step 3, constructing a reconstruction module,
the input data of the reconstruction module is the feature image I2 output in step 2, of size 256 × 256 × 64; the output of the reconstruction module is a spectrally reconstructed hyperspectral image I6 of size 256 × 256 × 31; the reconstruction module is mainly used for restoring the feature image I2 extracted by the hyperspectral feature generation module to an image I6 of higher spectral resolution corresponding to the original RGB image;
and step 4, optimizing the spectral reconstruction network to obtain the final spectral reconstruction network.
The method has the advantages that the proposed 3D attention mechanism-based full-channel fusion network effectively realizes end-to-end mapping from an RGB image to a hyperspectral image, and that, by modeling the interdependency of channel-direction and spatial features with the 3D attention mechanism, the feature responses between and within channels are learned adaptively, thereby enhancing the feature expression capability of the network.
Drawings
FIG. 1 is a block flow diagram of the method of the present invention;
FIG. 2 is a flow chart of the structure of a shallow feature extraction module constructed by the method of the present invention;
FIG. 3 is a flow chart of the structure of a dual residual channel-spatial attention mechanism block constructed by the method of the present invention;
FIG. 4 is a flow chart of the structure of a 3D channel-spatial attention mechanism module constructed by the method of the present invention;
FIG. 5 is a flow chart of the structure of a reconstruction module constructed by the method of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
Referring to fig. 1, the method of the present invention is based on a spectral reconstruction network built on a 3D attention mechanism full-channel weighted fusion network (hereinafter, the spectral reconstruction network), which comprises a shallow feature extraction module, a hyperspectral feature generation module and a reconstruction module. The shallow feature extraction module comprises a Patch crop layer, an LN regularization layer and a Conv 3×3 layer, and extracts shallow information directly from the original RGB image taken as input. The hyperspectral feature generation module mainly comprises 3 stacked dual residual channel-spatial attention mechanism blocks, and obtains hyperspectral features by weighted fusion of the different feature maps generated by the three blocks. The reconstruction module finally outputs a hyperspectral image with 31 spectral bands.
The method of the invention, using this spectral reconstruction network framework, is implemented according to the following steps:
step 1, constructing a shallow feature extraction module, wherein the input of the shallow feature extraction module is an original RGB image with the size of 512 × 482 × 3, and the output of the shallow feature extraction module is a feature image I1 with the size of 256 × 64; the shallow feature extraction module is mainly used for performing block clipping on an input original RGB image, then performing normalization processing on each clipping block, and then extracting shallow feature information through convolution to obtain a feature image I1.
Referring to fig. 2, the structure of the shallow feature extraction module is, in order: the original RGB image as input image → Patch crop layer → LN regularization layer → Conv 3×3 layer → output feature image I1; the Patch crop layer randomly crops the 512 × 482 × 3 original RGB image into 256 × 256 × 3 blocks for data augmentation, so as to enhance the stability of the spectral reconstruction network; the LN regularization layer normalizes the input data to the range 0 to 1, thereby accelerating the convergence of the spectral reconstruction network; the Conv 3×3 layer is a convolution operation with kernel size 3 × 3, stride 1, padding 0 and 64 feature maps.
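As an illustration only, a minimal PyTorch sketch of this module might look as follows; the class name, the handling of the 256 × 256 crop (cropping itself is left to the data pipeline) and the use of padding 1 (chosen so that the stated 256 × 256 spatial size is preserved, whereas the text above states a padding value of 0) are assumptions of this sketch, not details fixed by the patent. Standard LayerNorm normalizes the input rather than strictly rescaling it to 0 to 1 as the text describes.

    import torch
    import torch.nn as nn

    class ShallowFeatureExtraction(nn.Module):
        # Sketch of step 1: LN regularization followed by a 3x3 convolution
        # producing 64 feature maps.
        def __init__(self, in_channels: int = 3, channels: int = 64,
                     patch: int = 256):
            super().__init__()
            # LayerNorm over the (C, H, W) dimensions of a 256x256x3 crop.
            self.norm = nn.LayerNorm([in_channels, patch, patch])
            # padding=1 keeps the stated 256x256 spatial size (assumption).
            self.conv = nn.Conv2d(in_channels, channels, kernel_size=3,
                                  stride=1, padding=1)

        def forward(self, x):
            # x: (B, 3, 256, 256) cropped RGB patch -> I1: (B, 64, 256, 256)
            return self.conv(self.norm(x))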
Step 2, constructing a hyperspectral feature generation module, wherein the input of the hyperspectral feature generation module is the feature image I1 obtained in step 1, of size 256 × 256 × 64; the output of the hyperspectral feature generation module is a feature image I2 in which different feature information has been fused by weight, also of size 256 × 256 × 64.
This step mainly considers that different feature information contributes differently to spectral reconstruction, so an additional weight is added to each input during feature fusion, letting the network learn the importance of each input feature. The hyperspectral feature generation module performs weighted fusion of different feature information using the feature image I1 output in step 1 and 3 dual residual channel-spatial attention mechanism blocks; the weight parameters used are λ1, λ2, λ3 and λ4, and these fusion parameters are learned by the network rather than obtained by direct concatenation (concat).
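A minimal sketch of this learnable weighted fusion is given below; whether λ1 to λ4 are scalars or per-channel vectors is not specified above, so scalar weights are assumed here.

    import torch
    import torch.nn as nn

    class WeightedFusion(nn.Module):
        # Sketch: one learnable scalar weight per fused feature map
        # (lambda_1..lambda_4), trained jointly with the rest of the network
        # instead of a plain concatenation.
        def __init__(self, num_inputs: int = 4):
            super().__init__()
            self.lambdas = nn.Parameter(torch.ones(num_inputs))

        def forward(self, features):
            # features: [I1, block1_out, block2_out, block3_out],
            # each of shape (B, 64, H, W).
            return sum(w * f for w, f in zip(self.lambdas, features))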
The dual residual channel-spatial attention mechanism block mainly consists of a dual residual structure and a 3D channel-spatial attention mechanism module (3D-CSAB). The dual residual structure forms dual residual learning through long and short skip connections, which makes full use of the rich shallow feature information of the original RGB image while alleviating the vanishing- and exploding-gradient problems of deep networks. The 3D-CSAB module extracts strong feature descriptors for the inter-channel and intra-channel information of consecutive channels, makes full use of the interdependence between feature channels, and greatly enhances feature correlation learning.
Referring to fig. 3, the structure of the first dual residual channel-spatial attention mechanism block is, in order: the feature image I1 obtained in step 1 as input feature → Conv1 3×3 layer → PReLU (activation function) layer → Conv2 3×3 layer → residual structure layer → PReLU (activation function) layer → Conv3 3×3 layer → 3D-CSAB layer → residual structure layer → output feature image I3. Each Conv 3×3 layer is a convolution operation with kernel size 3 × 3, stride 1, padding 0 and 64 feature maps; the output feature image of the Conv3 3×3 layer is denoted I4. The two PReLU layers serve as activation functions to introduce more non-linearity and speed up convergence. The two residual structure layers perform residual connections, alleviating the vanishing-gradient problem. The 3D-CSAB layer uses a 3D convolution layer to capture joint channel and spatial features and generate an attention map, and then multiplies the input feature before the 3D convolution by the generated attention map to obtain a new feature image I5.
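One plausible reading of this block structure, sketched in PyTorch for illustration, is given below; the exact placement of the short and long skip connections is inferred from the layer order above, padding 1 is assumed so the spatial size is preserved, and CSAB3D is the attention-module sketch that follows the next paragraph.

    import torch
    import torch.nn as nn

    class DualResidualBlock(nn.Module):
        # Sketch of one dual residual channel-spatial attention block:
        # Conv1 -> PReLU -> Conv2 -> short skip -> PReLU -> Conv3 -> 3D-CSAB
        # -> long skip.
        def __init__(self, channels: int = 64):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, stride=1, padding=1)
            self.conv2 = nn.Conv2d(channels, channels, 3, stride=1, padding=1)
            self.conv3 = nn.Conv2d(channels, channels, 3, stride=1, padding=1)
            self.act1 = nn.PReLU()
            self.act2 = nn.PReLU()
            self.csab = CSAB3D()  # 3D channel-spatial attention, sketched below

        def forward(self, x):
            out = self.conv2(self.act1(self.conv1(x)))  # Conv1 -> PReLU -> Conv2
            out = out + x                               # short (inner) skip
            i4 = self.conv3(self.act2(out))             # PReLU -> Conv3 -> I4
            i5 = self.csab(i4)                          # attention-weighted I5
            return i5 + x                               # long (outer) skip -> I3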
Referring to fig. 4, the structure of the 3D-CSAB layer is, in order: the feature image I4 output by the Conv3 3×3 layer of the first dual residual channel-spatial attention mechanism block as input → 3D Conv layer → Sigmoid (activation function) layer → channel-spatial attention map layer → output feature image I5 (the output of the 3D-CSAB layer in the first dual residual channel-spatial attention mechanism block). The 3D Conv layer is a convolution operation with kernel size 3 × 3 × 3, stride 1, padding 1 and 64 feature maps; the Sigmoid (activation function) layer generates a channel-spatial attention weight map with values between 0 and 1; the channel-spatial attention map layer encodes the positions to be attended to or suppressed, with positions to be attended to closer to 1 and positions to be suppressed closer to 0. The channel-spatial attention map is then multiplied with the input feature image I4 by pixel-wise multiplication to obtain the new output feature image I5, of size 256 × 256 × 64.
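For illustration, the 3D-CSAB layer might be sketched as follows. Treating the 64 channels of I4 as the depth axis of a single-channel 3D volume, so that one 3 × 3 × 3 kernel produces a joint channel-spatial attention cube, is an assumption of this sketch; the text above, which states 64 feature maps for the 3D Conv layer, could also be read differently.

    import torch
    import torch.nn as nn

    class CSAB3D(nn.Module):
        # Sketch of the 3D channel-spatial attention module: a 3D convolution
        # over the (channel, height, width) cube produces a joint
        # channel-spatial attention map, squashed to (0, 1) by a sigmoid and
        # multiplied pixel-wise with the input features.
        def __init__(self):
            super().__init__()
            # Assumption: the channel axis of the 2D feature map is treated
            # as the depth axis of a single-channel 3D volume.
            self.conv3d = nn.Conv3d(1, 1, kernel_size=3, stride=1, padding=1)
            self.sigmoid = nn.Sigmoid()

        def forward(self, x):
            # x (I4): (B, 64, H, W) -> volume: (B, 1, 64, H, W)
            volume = x.unsqueeze(1)
            # Attention map in (0, 1), same shape as x after squeezing.
            attention = self.sigmoid(self.conv3d(volume)).squeeze(1)
            return x * attention  # pixel-wise product -> I5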
The input feature image I4 and the output feature image I5 of the 3D-CSAB layer are transition feature images in forming the feature image I3, and are indispensable to it, because the feature image I5 ensures that the feature weights learned in the spectral reconstruction network differ: positions to be attended to receive large weights, and positions to be suppressed receive small weights.
Referring to fig. 1, the feature image I3 output by the first dual residual channel-spatial attention mechanism block is used as the input of the second dual residual channel-spatial attention mechanism block, and the output of the second block is used as the input of the third; by stacking three structurally identical dual residual channel-spatial attention mechanism blocks, the learning ability of the spectral reconstruction network for features is continuously enhanced, and the feature image I2 is generated by weighted fusion of the feature image I1 and the outputs of the 3 dual residual channel-spatial attention mechanism blocks.
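Putting step 2 together, a sketch of the whole hyperspectral feature generation module, reusing the DualResidualBlock and WeightedFusion sketches above, might read:

    import torch
    import torch.nn as nn

    class HyperspectralFeatureGeneration(nn.Module):
        # Sketch of step 2: three stacked dual residual blocks whose outputs,
        # together with I1, are combined under the learnable weights
        # lambda_1..lambda_4 to give I2.
        def __init__(self, channels: int = 64):
            super().__init__()
            self.blocks = nn.ModuleList(
                DualResidualBlock(channels) for _ in range(3))
            self.fusion = WeightedFusion(num_inputs=4)

        def forward(self, i1):
            features, x = [i1], i1
            for block in self.blocks:   # three structurally identical blocks
                x = block(x)
                features.append(x)      # I3 and the two later block outputs
            return self.fusion(features)  # weighted fusion -> I2 (B, 64, H, W)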
Step 3, constructing a reconstruction module,
the input data of the reconstruction module is the feature image I2 output in step 2, of size 256 × 256 × 64; the output of the reconstruction module is a spectrally reconstructed hyperspectral image I6 of size 256 × 256 × 31; the main function of the reconstruction module is to restore the feature image I2 extracted by the hyperspectral feature generation module to an image I6 of higher spectral resolution corresponding to the original RGB image.
Referring to fig. 5, the structure of the reconstruction module is, in order: the feature image I2 output in step 2 as input feature → Conv 1×1 layer → output image I6; the Conv 1×1 layer is a convolution operation with kernel size 1 × 1, stride 1, padding 0 and 31 feature maps.
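A sketch of the reconstruction module is correspondingly simple; only the class name is an assumption here, the layer parameters being those stated above.

    import torch
    import torch.nn as nn

    class ReconstructionModule(nn.Module):
        # Sketch of step 3: a single 1x1 convolution maps the 64 fused
        # feature channels to the 31 spectral bands of I6.
        def __init__(self, in_channels: int = 64, out_bands: int = 31):
            super().__init__()
            self.conv = nn.Conv2d(in_channels, out_bands,
                                  kernel_size=1, stride=1, padding=0)

        def forward(self, i2):
            # I2: (B, 64, 256, 256) -> I6: (B, 31, 256, 256)
            return self.conv(i2)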
Step 4, optimizing the spectral reconstruction network,
the data set used for training the spectral reconstruction network consists of paired RGB images and real hyperspectral images; the reconstruction loss between the hyperspectral image I6 generated from the input (original RGB) image and the corresponding real hyperspectral image is calculated, and the proposed spectral reconstruction network is continuously optimized by minimizing this loss during training.
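Finally, a minimal training sketch for step 4 is given below. The text names neither the loss function nor the optimizer, so a plain L1 reconstruction loss and the Adam optimizer are assumed, as is a `loader` yielding paired (rgb, hsi) crops; the full network is composed from the module sketches above.

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        ShallowFeatureExtraction(),        # RGB crop -> I1
        HyperspectralFeatureGeneration(),  # I1 -> I2
        ReconstructionModule(),            # I2 -> I6 (31 bands)
    )

    def train(model, loader, epochs=100, lr=1e-4):
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        criterion = nn.L1Loss()  # assumed reconstruction loss
        model.train()
        for _ in range(epochs):
            for rgb, hsi in loader:  # rgb: (B,3,256,256), hsi: (B,31,256,256)
                i6 = model(rgb)            # reconstructed hyperspectral image
                loss = criterion(i6, hsi)  # loss vs. ground-truth hyperspectral
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()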

Claims (5)

1. A spectrum reconstruction method based on a 3D attention mechanism full-channel fusion network is characterized by comprising the following steps:
step 1, constructing a shallow feature extraction module,
the input of the shallow feature extraction module is an original RGB image of size 512 × 482 × 3, and the output of the shallow feature extraction module is a feature image I1 of size 256 × 256 × 64;
step 2, constructing a hyperspectral feature generation module,
the input of the hyperspectral feature generation module is the feature image I1 obtained in step 1, of size 256 × 256 × 64; the output of the hyperspectral feature generation module is a feature image I2 in which different feature information has been fused by weight, also of size 256 × 256 × 64;
step 3, constructing a reconstruction module,
the input data of the reconstruction module is the feature image I2 output in step 2, of size 256 × 256 × 64; the output of the reconstruction module is a spectrally reconstructed hyperspectral image I6 of size 256 × 256 × 31; the reconstruction module is mainly used for restoring the feature image I2 extracted by the hyperspectral feature generation module to an image I6 of higher spectral resolution corresponding to the original RGB image;
and step 4, optimizing the spectral reconstruction network to obtain the final spectral reconstruction network.
2. The spectral reconstruction method based on the 3D attention mechanism full-channel fusion network according to claim 1, wherein in step 1 the shallow feature extraction module performs block cropping on the input original RGB image, then normalizes each cropped block, and extracts shallow feature information through convolution to obtain the feature image I1;
the structure of the shallow feature extraction module is, in order: the original RGB image as input image → Patch crop layer → LN regularization layer → Conv 3×3 layer → output feature image I1; wherein the Patch crop layer randomly crops the 512 × 482 × 3 original RGB image into 256 × 256 × 3 blocks for data augmentation, so as to enhance the stability of the spectral reconstruction network; the LN regularization layer normalizes the input data to the range 0 to 1, thereby accelerating the convergence of the spectral reconstruction network; the Conv 3×3 layer is a convolution operation with kernel size 3 × 3, stride 1, padding 0 and 64 feature maps.
3. The spectral reconstruction method based on the 3D attention mechanism full-channel fusion network according to claim 1, wherein in step 2 the hyperspectral feature generation module performs weighted fusion of different feature information using the feature image I1 output in step 1 and 3 dual residual channel-spatial attention mechanism blocks, the weight parameters used being λ1, λ2, λ3 and λ4;
the dual residual channel-spatial attention mechanism block mainly consists of a dual residual structure and a 3D channel-spatial attention mechanism module,
the structure of the first dual residual channel-spatial attention mechanism block is, in order: the feature image I1 obtained in step 1 as input feature → Conv1 3×3 layer → PReLU layer → Conv2 3×3 layer → residual structure layer → PReLU layer → Conv3 3×3 layer → 3D-CSAB layer → residual structure layer → output feature image I3; wherein each Conv 3×3 layer is a convolution operation with kernel size 3 × 3, stride 1, padding 0 and 64 feature maps, and the output feature image of the Conv3 3×3 layer is denoted I4; the two PReLU layers serve as activation functions to introduce more non-linearity and speed up convergence; the two residual structure layers perform residual connections; the 3D-CSAB layer uses a 3D convolution layer to capture joint channel and spatial features and generate an attention map, and then multiplies the input feature before the 3D convolution by the generated attention map to obtain a new feature image I5;
the structure of the 3D-CSAB layer is, in order: the feature image I4 output by the Conv3 3×3 layer of the first dual residual channel-spatial attention mechanism block as input → 3D Conv layer → Sigmoid layer → channel-spatial attention layer → output feature image I5; the 3D Conv layer is a convolution operation with kernel size 3 × 3 × 3, stride 1, padding 1 and 64 feature maps; the Sigmoid layer generates a channel-spatial attention weight map with values between 0 and 1; the channel-spatial attention layer encodes the positions to be attended to or suppressed, with positions to be attended to closer to 1 and positions to be suppressed closer to 0; the channel-spatial attention map is then multiplied with the input feature image I4 by pixel-wise multiplication to obtain the new output feature image I5, of size 256 × 256 × 64;
the input feature image I4 and the output feature image I5 of the 3D-CSAB layer are transition feature images forming the feature image I3;
taking the output feature image I3 of the first dual residual channel-spatial attention mechanism block as the input of the second dual residual channel-spatial attention mechanism block, and the output of the second block as the input of the third, the learning ability of the spectral reconstruction network for features is continuously enhanced by stacking three structurally identical dual residual channel-spatial attention mechanism blocks, and the feature image I2 is generated by weighted fusion of the feature image I1 and the outputs of the 3 dual residual channel-spatial attention mechanism blocks.
4. The spectral reconstruction method based on the 3D attention mechanism full-channel fusion network according to claim 1, wherein in step 3 the structure of the reconstruction module is, in order: the feature image I2 output in step 2 as input → Conv 1×1 layer → output image I6; the Conv 1×1 layer is a convolution operation with kernel size 1 × 1, stride 1, padding 0 and 31 feature maps.
5. The spectral reconstruction method based on the 3D attention mechanism full-channel fusion network according to claim 1, wherein the specific process of step 4 is: the data set used for training the spectral reconstruction network consists of paired RGB images and real hyperspectral images; the reconstruction loss between the hyperspectral image I6 generated from the original RGB image and the corresponding real hyperspectral image is calculated, and the proposed spectral reconstruction network is continuously optimized by minimizing this loss during training.
CN202110897604.0A (priority date 2021-08-05, filing date 2021-08-05): Spectrum reconstruction method based on 3D attention mechanism full-channel fusion network; status: Withdrawn; published as CN113793261A (en)

Priority Applications (1)

Application Number: CN202110897604.0A; Priority Date: 2021-08-05; Filing Date: 2021-08-05; Title: Spectrum reconstruction method based on 3D attention mechanism full-channel fusion network

Publications (1)

Publication Number: CN113793261A; Publication Date: 2021-12-14

Family

ID=78877184

Family Applications (1)

Application Number: CN202110897604.0A; Title: Spectrum reconstruction method based on 3D attention mechanism full-channel fusion network; Priority Date: 2021-08-05; Filing Date: 2021-08-05; Status: Withdrawn

Country Status (1)

Country: CN; Publication: CN113793261A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114998109A * 2022-08-03 2022-09-02 Hunan University (湖南大学) Hyperspectral imaging method, system and medium based on dual RGB image fusion
CN115700727A * 2023-01-03 2023-02-07 Hunan University (湖南大学) Spectral super-resolution reconstruction method and system based on self-attention mechanism
CN115719309A * 2023-01-10 2023-02-28 Hunan University (湖南大学) Spectrum super-resolution reconstruction method and system based on low-rank tensor network
CN116563649A * 2023-07-10 2023-08-08 Southwest Jiaotong University (西南交通大学) Tensor mapping network-based hyperspectral image lightweight classification method and device
CN116563649B * 2023-07-10 2023-09-08 Southwest Jiaotong University (西南交通大学) Tensor mapping network-based hyperspectral image lightweight classification method and device

Similar Documents

CN110119780B (en) Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network
CN113793261A (en) Spectrum reconstruction method based on 3D attention mechanism full-channel fusion network
CN113240613B (en) Image restoration method based on edge information reconstruction
CN112150521B (en) Image stereo matching method based on PSMNet optimization
CN111861961A (en) Multi-scale residual error fusion model for single image super-resolution and restoration method thereof
CN113111760B (en) Light-weight graph convolution human skeleton action recognition method based on channel attention
CN115484410B (en) Event camera video reconstruction method based on deep learning
Liu et al. RB-Net: Training highly accurate and efficient binary neural networks with reshaped point-wise convolution and balanced activation
CN114418850A (en) Super-resolution reconstruction method with reference image and fusion image convolution
CN109949217A (en) Video super-resolution method for reconstructing based on residual error study and implicit motion compensation
CN114757862B (en) Image enhancement progressive fusion method for infrared light field device
CN117474781A (en) High spectrum and multispectral image fusion method based on attention mechanism
CN115860113B (en) Training method and related device for self-countermeasure neural network model
CN114463176B (en) Image super-resolution reconstruction method based on improved ESRGAN
Wang et al. VPU: a video-based point cloud upsampling framework
Tang et al. A deep map transfer learning method for face recognition in an unrestricted smart city environment
CN115294182A (en) High-precision stereo matching method based on double-cross attention mechanism
CN114972024A (en) Image super-resolution reconstruction device and method based on graph representation learning
Li et al. Refined Division Features Based on Transformer for Semantic Image Segmentation
CN113537292A (en) Multi-source domain adaptation method based on tensor high-order mutual attention mechanism
CN111951177B (en) Infrared image detail enhancement method based on image super-resolution loss function
Peng et al. RAUNE-Net: A Residual and Attention-Driven Underwater Image Enhancement Method
CN117576402B (en) Deep learning-based multi-scale aggregation transducer remote sensing image semantic segmentation method
CN115909045B (en) Two-stage landslide map feature intelligent recognition method based on contrast learning
Ma et al. Cloud-EGAN: Rethinking CycleGAN from a feature enhancement perspective for cloud removal by combining CNN and transformer

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WW01: Invention patent application withdrawn after publication (application publication date: 2021-12-14)