CN113139902A - Hyperspectral image super-resolution reconstruction method and device and electronic equipment - Google Patents

Hyperspectral image super-resolution reconstruction method and device and electronic equipment Download PDF

Info

Publication number
CN113139902A
CN113139902A
Authority
CN
China
Prior art keywords
network
hyperspectral image
color image
spectrum
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110445524.1A
Other languages
Chinese (zh)
Inventor
李岩山
陈世富
周李
罗文寒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202110445524.1A priority Critical patent/CN113139902A/en
Publication of CN113139902A publication Critical patent/CN113139902A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hyperspectral image super-resolution reconstruction method and device and electronic equipment. The method comprises the following steps: performing feature extraction on input hyperspectral image features by using a pre-constructed spectrum reservation network to obtain spectral information features, wherein the spectrum reservation network comprises a plurality of cascaded spectrum reservation modules, each integrating a first residual block and a channel attention mechanism; performing feature extraction on input color image features corresponding to the hyperspectral image features by using a pre-constructed color image guide network to obtain spatial information features, wherein the color image guide network comprises a plurality of cascaded color image guide modules, each integrating a second residual block and a spatial attention mechanism; and inputting the spectral information features and the spatial information features into a pre-constructed space spectrum recovery network for feature fusion to complete super-resolution reconstruction of the hyperspectral image, wherein the space spectrum recovery network integrates a channel attention mechanism and a spatial attention mechanism.

Description

Hyperspectral image super-resolution reconstruction method and device and electronic equipment
Technical Field
The invention relates to the technical field of image processing, in particular to a hyperspectral image super-resolution reconstruction method and device and electronic equipment.
Background
A hyperspectral image (HSI) has dozens or even hundreds of bands and contains abundant spatial texture information as well as abundant spectral information, so hyperspectral images are widely applied in many fields such as agriculture, medicine, military affairs and remote sensing. However, due to hardware limitations of hyperspectral sensors and optical imaging systems, hyperspectral images have lower spatial resolution than color images, which severely limits their further application and development. Therefore, in recent years, hyperspectral image super-resolution, one of the main techniques for improving the resolution of hyperspectral images, has attracted much attention.
Hyperspectral image super-resolution is a technology for obtaining a high-resolution hyperspectral image from a low-resolution hyperspectral image, and super-resolution based on software methods is currently an effective means of improving the spatial resolution of hyperspectral images, overcoming the limitations of hardware conditions. Most hyperspectral image super-resolution techniques have evolved from color image super-resolution, for which many approaches have been proposed in recent decades; recently, the color image super-resolution problem has advanced greatly thanks to the use of convolutional neural networks. However, unlike a color image, a hyperspectral image is composed of hundreds or thousands of spectral bands and its spectral features are difficult to extract. Directly applying a convolutional neural network framework to hyperspectral images easily causes spectral information loss in the reconstructed hyperspectral image, so the spectral loss of super-resolution reconstruction is serious.
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to overcome the defect that the spectral information of the reconstructed hyperspectral image is lost due to the existing mode of directly using the convolutional neural network to realize the hyperspectral image hyper-resolution processing, so that the hyperspectral image super-resolution reconstruction method, the hyperspectral image super-resolution reconstruction device and the electronic equipment are provided.
According to a first aspect, the embodiment of the invention discloses a hyperspectral image super-resolution reconstruction method, which comprises the following steps: performing feature extraction on input hyperspectral image features by using a pre-constructed spectrum retention network to obtain spectrum information features, wherein the spectrum retention network comprises a plurality of cascaded spectrum retention modules, and each spectrum retention module integrates a first residual block and a channel attention mechanism; utilizing a pre-constructed color image guide network to perform feature extraction on input color image features corresponding to the hyperspectral image features to obtain spatial information features, wherein the color image guide network comprises a plurality of cascaded color image guide modules, and each color image guide module integrates a second residual block and a spatial attention mechanism; inputting the spectral information characteristics and the spatial information characteristics into a pre-constructed space spectrum recovery network for characteristic fusion to complete super-resolution reconstruction of the hyperspectral image, wherein the space spectrum recovery network integrates a channel attention mechanism and a spatial attention mechanism.
Optionally, before the feature extraction is performed on the input hyperspectral image features by using the pre-constructed spectrum reservation network to obtain the spectral information features, the method further includes: the method comprises the steps of up-sampling an acquired hyperspectral image to enable the spatial resolution of the hyperspectral image to reach a preset resolution; and extracting shallow spectral features from the up-sampled hyperspectral image by using the 3D convolutional layer, and inputting the shallow spectral features serving as the hyperspectral image features into the spectrum reservation network.
Optionally, before the color image guidance network constructed in advance is used to perform feature extraction on the input color image features corresponding to the hyperspectral image features to obtain the spatial information features, the method further includes: determining whether a high-resolution color image in the same scene as the acquired hyperspectral image exists; if so, taking the image features of the high-resolution color image as color image features corresponding to the hyperspectral image features; and if the hyperspectral image does not exist, intercepting the characteristics of the channel where the red, green and blue wave bands are located from the acquired hyperspectral image as the color image characteristics corresponding to the hyperspectral image characteristics.
Optionally, the method further includes training a reconstruction network in advance by using training data until a loss value of a total loss function of the reconstruction network meets a target condition, so as to obtain a reconstruction network for hyperspectral image super-resolution reconstruction, where the reconstruction network is composed of the spectrum preserving network, the color image guiding network, and the space spectrum recovery network.
Optionally, the total loss function is as follows:
l_loss = α_1·l_L2 + α_2·l_sam + α_3·l_L1
wherein l_L2 represents the spatial content loss of the hyperspectral image, l_sam represents the spectral loss, l_L1 represents the color image content loss, and α_1, α_2 and α_3 are constants.
According to a second aspect, an embodiment of the present invention further discloses a hyperspectral image super-resolution reconstruction apparatus, including: the system comprises a spectrum information characteristic extraction module, a spectrum information characteristic extraction module and a spectrum information characteristic extraction module, wherein the spectrum information characteristic extraction module is used for performing characteristic extraction on input hyperspectral image characteristics by utilizing a pre-constructed spectrum reservation network to obtain spectrum information characteristics, the spectrum reservation network comprises a plurality of cascaded spectrum reservation modules, and each spectrum reservation module is integrated with a first residual block and a channel attention mechanism; the system comprises a spatial information feature extraction module, a spatial information feature extraction module and a hyperspectral image feature extraction module, wherein the spatial information feature extraction module is used for performing feature extraction on input color image features corresponding to hyperspectral image features by utilizing a pre-constructed color image guide network, the color image guide network comprises a plurality of cascaded color image guide modules, and each color image guide module integrates a second residual block and a spatial attention mechanism; and the characteristic fusion module is used for inputting the spectral information characteristics and the spatial information characteristics into a pre-constructed space spectrum recovery network for characteristic fusion so as to complete super-resolution reconstruction of the hyperspectral image, wherein the space spectrum recovery network integrates a channel attention mechanism and a spatial attention mechanism.
Optionally, the apparatus further comprises: the sampling module is used for up-sampling the acquired hyperspectral image so that the spatial resolution of the hyperspectral image reaches a preset resolution; and the shallow spectral feature extraction module is used for extracting shallow spectral features from the up-sampled hyperspectral image by using the 3D convolutional layer, and inputting the shallow spectral features serving as the hyperspectral image features into the spectrum reservation network.
Optionally, the apparatus further comprises: the first determination module is used for determining whether a high-resolution color image in the same scene with the acquired hyperspectral image exists; a second determining module, configured to, if the high-resolution color image exists, take an image feature of the high-resolution color image as a color image feature corresponding to the hyperspectral image feature; and the third determining module is used for intercepting the characteristics of a channel where the red, green and blue wave bands are located from the acquired hyperspectral image as the color image characteristics corresponding to the hyperspectral image characteristics if the hyperspectral image does not exist.
According to a third aspect, an embodiment of the present invention further discloses an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to cause the at least one processor to perform the steps of the hyper-spectral image super-resolution reconstruction method according to the first aspect or any one of the optional embodiments of the first aspect.
According to a fourth aspect, the present invention further discloses a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the hyperspectral image super-resolution reconstruction method according to the first aspect or any of the optional embodiments of the first aspect.
The technical scheme of the invention has the following advantages:
The invention provides a hyperspectral image super-resolution reconstruction method/device. A pre-constructed spectrum reservation network is used to perform feature extraction on input hyperspectral image features to obtain spectral information features, the spectrum reservation network comprising a plurality of cascaded spectrum reservation modules, each of which integrates a first residual block and a channel attention mechanism. A pre-constructed color image guide network is used to perform feature extraction on the input color image features corresponding to the hyperspectral image features to obtain spatial information features, the color image guide network comprising a plurality of cascaded color image guide modules, each of which integrates a second residual block and a spatial attention mechanism. The obtained spectral information features and spatial information features are then input into a pre-constructed space spectrum recovery network, which integrates a channel attention mechanism and a spatial attention mechanism, for feature fusion to complete the hyperspectral image super-resolution reconstruction. Compared with the prior art, in which hyperspectral image super-resolution reconstruction is realized by directly extracting the spectral features of the hyperspectral image with a convolutional neural network, the method extracts spectral information features with the constructed spectrum reservation network, extracts the spatial information features of the color image corresponding to the hyperspectral image features with the color image guide network, and completes the super-resolution reconstruction by fusing the extracted spectral information features and spatial information features, thereby improving the hyperspectral image super-resolution reconstruction effect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart of a specific example of a super-resolution hyperspectral image reconstruction method in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a specific network structure of a hyper-spectral image super-resolution reconstruction method in an embodiment of the invention;
FIG. 3 is a schematic diagram of a specific network structure of a hyper-spectral image super-resolution reconstruction method in an embodiment of the invention;
FIGS. 4A-4B are schematic diagrams of a specific network structure of a hyperspectral image super-resolution reconstruction method in an embodiment of the invention;
FIG. 5 is a schematic diagram of a specific network structure of a hyper-spectral image super-resolution reconstruction method in an embodiment of the invention;
FIGS. 6A-6B are schematic diagrams of a specific network structure of a hyperspectral image super-resolution reconstruction method in an embodiment of the invention;
FIG. 7 is a schematic diagram of a specific network structure of a hyper-spectral image super-resolution reconstruction method in an embodiment of the invention;
FIG. 8 is a schematic block diagram of a specific example of a hyper-spectral image super-resolution reconstruction apparatus according to an embodiment of the present invention;
fig. 9 is a diagram of a specific example of an electronic device in an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; the two elements may be directly connected or indirectly connected through an intermediate medium, or may be communicated with each other inside the two elements, or may be wirelessly connected or wired connected. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The embodiment of the invention discloses a hyperspectral image super-resolution reconstruction method, which comprises the following steps of:
101, performing feature extraction on input hyperspectral image features by using a pre-constructed spectrum retention network to obtain spectral information features, wherein the spectrum retention network comprises a plurality of cascaded spectrum retention modules, and each spectrum retention module integrates a first residual block and a channel attention mechanism.
Illustratively, one difficulty in the hyperspectral image super-resolution reconstruction process is keeping the spectral information undistorted, so the embodiment of the application uses the spectrum reservation network to extract spectral features and preserve the spectral information of the hyperspectral image. As shown in fig. 2, a spectrum preserving module oriented to hyperspectral image super-resolution is designed in the spectrum reservation network and named Residual Channel Information Attention (RCIA); its structure is shown in fig. 3. In the spectrum reservation network, a nonlinear mapping network is formed by cascading a plurality of RCIA modules and is used to extract the deep spectral features of the hyperspectral image, obtaining spectral features with strong representation capability. This process can be expressed as:
Y_c = H_RCIA^c(Y_{c-1}; θ_RCIA^c)    (1)
wherein Y_c is the spectral information feature map output by the c-th RCIA, H_RCIA^c(·) represents the convolution operation of the c-th RCIA, and θ_RCIA^c represents the network parameters in the c-th RCIA.
Cascading a plurality of spectrum reservation modules builds a relatively deep network, and for the image super-resolution task such a deep network is difficult to train and its performance is difficult to improve. In the scheme described in the embodiment of the invention, on one hand, the number of network layers is increased by cascading a plurality of RCIAs to ensure that effective deep spectral features are extracted; on the other hand, the jump connection described below reduces the difficulty of training the network.
The shallow spectral feature Y_0 extracted in the initial stage is passed sequentially from the first RCIA to the last RCIA, and a convolutional layer with a kernel size of 1 × 1 then fuses the shallow spectral feature Y_0 with the spectral information feature Y_C output by the last RCIA. This process can be expressed as:
Y_D = W_YD·Y_C + Y_0    (2)
wherein W_YD is the weight of the convolutional layer after the jump connection, which is obtained by network training.
The first half of each RCIA is a first residual block composed of several convolutional layers and a ReLU function, and the second half is the Channel Information Attention (CIA) proposed for hyperspectral image spectral information preservation, as shown in fig. 3. The residual block inside the RCIA allows local information to be transferred better and learns the local residual information, while the CIA is an improvement on the original channel attention, as shown in fig. 4A.
As shown in fig. 4B, the channel attention in the prior art first performs Global Average Pooling (GP) on the feature information in each channel to obtain a one-dimensional feature vector; two Fully Connected (FC) layers follow, the first of which reduces the dimension of the feature vector through feature compression, and the second of which reconstructs the compressed features back to the original dimension through feature dimension raising; each channel is then activated through a sigmoid excitation function to obtain a channel attention vector, which gives larger weights to the information of important channels and suppresses the weights of non-important channels.
The existing channel attention model is mainly applied to tasks such as target identification or target detection, a hyperspectral image super-resolution reconstruction task needs characteristics as fine as pixel level, and some important characteristic information is inhibited or even lost through dimension reduction and dimension increasing operations of a full connection layer; in addition, the hyperspectral image super-resolution reconstruction needs to keep original spectral information undistorted, and the channel level scaling of the existing channel attention can cause the spectral information distortion. Therefore, the existing channel attention model cannot be directly applied to the hyper-spectral image super-resolution, and therefore, the channel information attention model CIA oriented to the hyper-spectral image super-resolution is provided in the embodiment of the application. The original full connection layer in the channel attention is deleted in the CIA, so that the problem of spectral distortion possibly caused by dimension reduction and dimension increase operations is avoided, and subsequent experimental results show that the performance of the network is remarkably improved after the full connection layer is deleted.
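To make the structure concrete, the following is a minimal PyTorch sketch of the CIA and RCIA described above and of the cascade with a 1 × 1 fusion convolution and jump connection (equations (1)-(2)). It is only an illustrative reading of the description, not the patented implementation: the number of modules, the channel width, the use of 2D convolutions (treating spectral bands as feature channels) and the two-convolution residual block are all assumptions.

```python
import torch
import torch.nn as nn

class CIA(nn.Module):
    """Channel information attention without fully connected layers:
    global average pooling followed directly by a sigmoid, so no
    dimension-reduction/expansion step can distort the spectrum."""
    def __init__(self):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):                     # x: (B, C, H, W)
        w = torch.sigmoid(self.pool(x))       # per-channel weights (B, C, 1, 1)
        return x * w                          # rescale each channel

class RCIA(nn.Module):
    """Residual channel information attention: a small residual block
    (assumed two 3x3 convolutions + ReLU) followed by CIA."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.cia = CIA()

    def forward(self, x):
        return self.cia(self.body(x) + x)     # local residual + channel attention

class SpectrumReservationNet(nn.Module):
    """Cascade of RCIAs with a 1x1 fusion convolution and a jump
    connection from the shallow feature Y_0 (equations (1)-(2))."""
    def __init__(self, channels=64, num_modules=4):
        super().__init__()
        self.blocks = nn.ModuleList([RCIA(channels) for _ in range(num_modules)])
        self.fuse = nn.Conv2d(channels, channels, 1)   # W_YD

    def forward(self, y0):
        y = y0
        for block in self.blocks:
            y = block(y)                      # Y_c = H_RCIA^c(Y_{c-1})
        return self.fuse(y) + y0              # Y_D = W_YD * Y_C + Y_0
```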
And step 102, performing feature extraction on the input color image features corresponding to the hyperspectral image features by using a pre-constructed color image guide network to obtain spatial information features, wherein the color image guide network comprises a plurality of cascaded color image guide modules, and each color image guide module integrates a second residual block and a spatial attention mechanism.
Illustratively, a plurality of cascaded color image guidance modules are arranged in the pre-constructed color image guidance network to extract the spatial information features of the color image, and each color image guidance module integrates a second residual block and a spatial attention mechanism; in the embodiment of the present application, the color image guidance module is named Residual Spatial Information Attention (RSIA). A color image contains a large amount of detail information compared with a hyperspectral image, but such information usually requires a very deep network to be extracted, so the depth of the network is likewise increased by stacking and cascading a plurality of RSIAs to better extract spatial information. As shown in fig. 2, in the stacking process, the output of the c-th RSIA is represented as:
Z_c = H_RSIA^c(Z_{c-1}; θ_RSIA^c)    (3)
wherein Z_c is the spatial information feature map output by the c-th RSIA, H_RSIA^c(·) represents the convolution operation of the c-th RSIA, and θ_RSIA^c represents the network parameters in the c-th RSIA. A jump connection is also introduced here to make the deep network more stable, as shown in the following formula:
Z_D = W_ZD·Z_C + Z_0    (4)
wherein W_ZD is the weight of the convolutional layer after the jump connection. In this process, the cascaded RSIAs extract high-frequency detail information, the jump connection extracts low-frequency information through residual learning, and the convolution operation allows the low-frequency and high-frequency information to be acquired simultaneously.
The RSIA in the embodiment of the present application is divided into two parts: the first part is a second residual block, and the second part is the Spatial Information Attention (SIA) designed for extracting spatial information features, as shown in fig. 5. The function of the second residual block is the same as that of the first residual block in the spectrum reservation network, and is not described again here.
The following introduces SIA proposed in the embodiments of the present application:
as shown in fig. 6A, an existing spatial attention structure is configured to generate a feature map describing spatial information from a feature map through operations of average pooling and maximum pooling, highlight a region of significant information by aggregating spatial information of the feature map, activate features through a sigmoid function, and multiply the feature map with a previous feature map, so that different spatial regions are multiplied by different weights to obtain a spatial attention feature map, and information of different spatial regions is enhanced to different degrees.
However, the average pooling and maximum pooling operations of the existing spatial attention focus on local area information: the salient information is averaged and maximized into a single feature map, while local detail information is ignored. This may work for other tasks such as target detection or target recognition, but it is not beneficial for the hyperspectral image super-resolution task, so the existing spatial attention structure needs to be improved. The SIA structure proposed in the embodiment of the present application is shown in fig. 6B: the average pooling and maximum pooling are replaced by a convolutional layer that generates a three-dimensional spatial attention feature map, so that attention weights are obtained for all pixels; attention weights refined to the pixels of each channel are favorable for extracting local spatial information features.
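A minimal sketch of the SIA and of the RSIA wrapper it sits in is given below, continuing the PyTorch sketch above. The 3 × 3 kernel and the two-convolution residual block are assumptions; the description only states that a convolutional layer replaces the pooling operations so that every pixel of every channel receives its own weight.

```python
import torch
import torch.nn as nn

class SIA(nn.Module):
    """Spatial information attention: a convolution (instead of average/max
    pooling) produces a full-resolution attention map, giving every pixel
    of every channel its own weight."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)  # assumed 3x3 kernel

    def forward(self, x):                     # x: (B, C, H, W)
        attn = torch.sigmoid(self.conv(x))    # per-pixel, per-channel weights
        return x * attn

class RSIA(nn.Module):
    """Residual spatial information attention: a residual block followed
    by SIA, mirroring the RCIA of the spectrum reservation network."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.sia = SIA(channels)

    def forward(self, z):
        return self.sia(self.body(z) + z)     # local residual + spatial attention
```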
In the process of training a color image guide network in advance, the color image guide module guides the network to learn the space detail information of a hyperspectral image, and restricts the parameter learning of the network through the space loss generated by the output and input high-resolution color images of the network until the color image guide network meeting the use requirement is obtained.
And 103, inputting the spectral information characteristics and the spatial information characteristics into a pre-constructed space spectrum recovery network for characteristic fusion to complete super-resolution reconstruction of the hyperspectral image, wherein the space spectrum recovery network integrates a channel attention mechanism and a spatial attention mechanism.
Exemplarily, since the spatial structure difference between a low-resolution hyperspectral image (LR-HSI) Y and a high-resolution color image (color image) Z is large and the feature distributions of the two images are different, it is difficult to directly fuse the two different domain images to generate a high-resolution hyperspectral image.
Specifically, as shown in fig. 2, the feature Y_D output by the spectrum reservation network and the feature Z_D output by the color image guidance network are fused by:
F_SSF = Y_D + Z_D    (5)
in order to better incorporate the Spatial Information features of the color image features into the hyperspectral image and ensure that the spectral Information is not distorted, a new module is designed in the proposed Spatial spectrum recovery network for the fusion of the spectral Information features and the Spatial Information features, and the embodiment of the application is named as Spatial Channel Information Attention (SCIA). To FSSFFurther optimization is carried out through SCIA, and the extracted spectral information characteristics and spatial information characteristics are fully utilized.
The structure of the SCIA is shown in fig. 7. The CIA proposed in the spectrum reservation network pools information in the spectral dimension directly and ignores local information within each spectrum, while the SIA does not process the spectral dimension but produces varying degrees of attention through convolution. In the SCIA, in order to better fuse the information of the spectral and spatial dimensions, the CIA and SIA modules are combined into one module, and information acquisition in the spectral dimension and the spatial dimension is considered at the same time. As shown in fig. 7, the SCIA is designed with three branches: the first branch transfers the fused original information, the second branch fuses the spectral information features through a global maximum pooling operation and a sigmoid activation, and the third branch fuses the spatial information features through a convolution operation and a sigmoid activation. In this way, not only the weights of the spectral dimension but also pixel-level attention weights of the spatial dimension are generated, covering both spectral and spatial information; finally a jump connection is introduced, and the hyperspectral image is reconstructed through a convolutional layer.
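The three-branch structure can be sketched as follows, again in illustrative PyTorch. How the three branches are combined and where the jump connection is applied are not fully specified in the description, so the element-wise combination below (original features multiplied by both weight maps, plus an identity shortcut) and the final 3 × 3 reconstruction convolution are assumptions.

```python
import torch
import torch.nn as nn

class SCIA(nn.Module):
    """Space-channel information attention with three branches: identity,
    channel weights from global max pooling + sigmoid, and per-pixel
    spatial weights from a convolution + sigmoid."""
    def __init__(self, channels):
        super().__init__()
        self.gmp = nn.AdaptiveMaxPool2d(1)                  # spectral branch
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, f):                                   # f = F_SSF
        ch_w = torch.sigmoid(self.gmp(f))                   # (B, C, 1, 1)
        sp_w = torch.sigmoid(self.spatial(f))               # (B, C, H, W)
        return f * ch_w * sp_w + f                          # assumed jump connection

class SpaceSpectrumRecoveryNet(nn.Module):
    """Fuses the two streams (F_SSF = Y_D + Z_D, equation (5)), refines
    them with SCIA and reconstructs the hyperspectral image."""
    def __init__(self, channels=64, bands=31):
        super().__init__()
        self.scia = SCIA(channels)
        self.recon = nn.Conv2d(channels, bands, 3, padding=1)

    def forward(self, y_d, z_d):
        f = y_d + z_d                                       # feature fusion
        return self.recon(self.scia(f))                     # reconstructed HSI
```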
Compared with the prior art that hyper-resolution reconstruction of hyper-spectral images is realized by only directly extracting the spectral features of the hyper-spectral images by using a convolutional neural network, the hyper-spectral image super-resolution reconstruction method provided by the embodiment of the invention extracts the spectral information features by using the constructed spectral retention network and extracts the spatial information features of the color images corresponding to the hyper-spectral image features by using the color image guide network, and the extracted spectral information features and the spatial information features are fused to complete the hyper-spectral image super-resolution reconstruction, thereby improving the effect of the hyper-spectral image super-resolution reconstruction.
And spectrum information features are extracted through a spectrum reservation network and space information features are extracted through a color image guide network, and the spectrum information features and the space information features are fused through a space spectrum recovery network. Specifically, firstly, a residual channel information attention for extracting spectral information features is designed in a spectrum reservation network, and spectral information is stored for the reconstruction of a hyperspectral image; secondly, designing residual spatial information attention for extracting spatial information features in a color image guide network, and realizing information complementation with a hyperspectral image by extracting the spatial information features of the color image; and finally, designing a spatial channel information attention for fusion of spectral information characteristics and spatial information characteristics in the spatial spectrum recovery network, and simultaneously considering the information of spectral dimensions and spatial dimensions so as to realize a better hyperspectral image reconstruction effect. According to the embodiment of the application, experiments are carried out on data sets of three different scenes, namely CAVE, Harvard and Pavia Center, the detail information of a hyperspectral image can be obviously improved on the premise of storing spectral information, and the experimental effect data are shown in the following table 1.
The embodiment of the invention uses four evaluation indexes to evaluate quality: (1) Root Mean Square Error (RMSE); (2) Mean Peak Signal-to-Noise Ratio (MPSNR); (3) Mean Structural Similarity Index (MSSIM); (4) Spectral Angle Mapper (SAM). RMSE, MPSNR and MSSIM are used to evaluate the reconstruction quality of the spatial information, with the average of the per-band indexes representing the quality index of the whole image, and SAM is used to evaluate the reconstruction quality of the spectral information.
The RMSE is used for calculating the error between the reconstructed image and the original high-resolution hyperspectral image and is obtained by the square root of the average square error, and the smaller the value of the RMSE is, the closer the reconstructed image is to the original high-resolution hyperspectral image is.
MPSNR is the ratio of the maximum value of the reconstructed image to the mean square error, and when the MPSNR value is larger, the super-resolution quality is higher.
The MSSIM is based on human visual perception, is very sensitive to the structural consistency of the original high-resolution hyperspectral image, and when the MSSIM is larger, the reconstructed image is closer to the original high-resolution hyperspectral image.
The SAM is used to evaluate the spectral information storage of each pixel; the smaller the SAM, the closer the reconstructed image is to the original high-resolution hyperspectral image, and its optimal value is 0.
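For reference, a minimal NumPy sketch of RMSE, MPSNR and SAM following their standard definitions is given below (MSSIM is omitted; it is usually computed with an existing SSIM implementation). This is not the evaluation code of the embodiment; the data range of 1.0 is an assumption.

```python
import numpy as np

def rmse(hr, x):
    """Root mean square error between reference HR and reconstruction x."""
    return float(np.sqrt(np.mean((hr - x) ** 2)))

def mpsnr(hr, x, data_range=1.0):
    """Mean PSNR over bands; hr and x have shape (H, W, bands)."""
    psnrs = []
    for b in range(hr.shape[-1]):
        mse = np.mean((hr[..., b] - x[..., b]) ** 2)
        psnrs.append(10.0 * np.log10(data_range ** 2 / (mse + 1e-12)))
    return float(np.mean(psnrs))

def sam(hr, x, eps=1e-12):
    """Mean spectral angle (radians) between per-pixel spectra."""
    dot = np.sum(hr * x, axis=-1)
    norms = np.linalg.norm(hr, axis=-1) * np.linalg.norm(x, axis=-1)
    angle = np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))
    return float(np.mean(angle))
```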
The data in table 1 show that, in three data sets, the experimental results of the embodiment of the present application are superior to most of the existing methods in evaluating spatial recovery quality indexes MPSNR, RMSE, and MSSIM, and the SAM for evaluating the spectral information storage is also superior to most of the existing methods, so that the embodiment of the present application can significantly improve the detailed information of the hyperspectral image on the premise of storing the spectral information.
Table 1 experimental results data
As an optional embodiment of the present invention, before step 101, the method further comprises: the method comprises the steps of up-sampling an acquired hyperspectral image to enable the spatial resolution of the hyperspectral image to reach a preset resolution; and extracting shallow spectral features from the up-sampled hyperspectral image by using the 3D convolutional layer, and inputting the shallow spectral features serving as the hyperspectral image features into the spectrum reservation network.
Exemplarily, before the spectrum reservation network performs spectral information feature extraction, the low-resolution hyperspectral image Y is up-sampled so that its spatial resolution reaches a preset resolution. The size of the preset resolution is not limited in the embodiment of the application; it may specifically be 2, 4 or 8 times that of the original hyperspectral image. The up-sampling may be performed by interpolation or by a machine-learning-based method combined with convolutional layers, generally at the beginning or at the end of the network. In the embodiment of the application, the up-sampling of the hyperspectral image is completed by interpolation, and a 3D convolutional layer is then used to extract the shallow spectral feature Y_0 from the up-sampled image Y_up, as shown in the following formula:
Y_0 = H_3D(Y_up)    (6)
wherein H_3D(·) represents a 3D convolutional layer operation with a kernel size of 1 × 1. Y_0 is then input into the spectrum reservation network for deep spectral feature extraction.
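This pre-processing step can be sketched as follows in PyTorch. Bicubic interpolation, a channel width of 64 and a scale factor of 4 are illustrative choices, and the 3D convolution is applied with the spectral bands as the depth axis, which is one plausible reading of the 1 × 1 kernel in the description.

```python
import torch.nn as nn
import torch.nn.functional as F

class ShallowSpectralFeatures(nn.Module):
    """Equation (6) as a sketch: interpolation-based up-sampling of the
    low-resolution HSI followed by a 3D convolution (H_3D)."""
    def __init__(self, channels=64, scale=4):
        super().__init__()
        self.scale = scale
        self.conv3d = nn.Conv3d(1, channels, kernel_size=1)   # H_3D, 1x1 kernel

    def forward(self, y_lr):                   # y_lr: (B, bands, h, w)
        y_up = F.interpolate(y_lr, scale_factor=self.scale,
                             mode='bicubic', align_corners=False)
        y_up = y_up.unsqueeze(1)               # bands become the Conv3d depth axis
        return self.conv3d(y_up)               # Y_0: (B, channels, bands, H, W)
```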
As an optional embodiment of the present invention, before step 102, the method further comprises:
and determining whether a high-resolution color image in the same scene as the acquired hyperspectral image exists.
And if the high-resolution color image exists, taking the image characteristics of the high-resolution color image as the color image characteristics corresponding to the hyperspectral image characteristics.
Illustratively, when a high-resolution color image in the same scene as the hyperspectral image exists, before spatial information features are extracted by the color image guidance network, as shown in fig. 2, a shallow spatial information feature Z_0 is extracted from the input color image Z by a 2D convolutional layer with a kernel size of 1 × 1, and Z_0 is then input into the RSIAs to extract deep spatial information features:
Z_0 = H_2D(Z)    (7)
wherein H_2D(·) represents a 2D convolutional layer operation with a kernel size of 1 × 1.
And if the hyperspectral image does not exist, intercepting the characteristics of the channel where the red, green and blue wave bands are located from the acquired hyperspectral image as the color image characteristics corresponding to the hyperspectral image characteristics.
Illustratively, when a high-resolution color image in the same scene as the hyperspectral image does not exist, three channels are intercepted from the low-resolution hyperspectral image as the input of the color image guide network, and the three intercepted channels are selected according to the channels of three wave bands of red, green and blue.
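As a small illustration of this fallback, the sketch below selects three band channels from the hyperspectral cube as a pseudo guide image; the band indices are purely hypothetical placeholders and would in practice be chosen to match the sensor's red, green and blue wavelengths.

```python
import torch

def guide_image_from_hsi(y_hsi, rgb_band_indices=(60, 30, 10)):
    """Build a 3-channel guide image from an HSI cube of shape (B, bands, H, W)
    when no registered high-resolution color image is available.
    The band indices here are hypothetical placeholders."""
    r, g, b = rgb_band_indices
    return torch.stack([y_hsi[:, r], y_hsi[:, g], y_hsi[:, b]], dim=1)  # (B, 3, H, W)
```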
As an optional embodiment of the present invention, the method further comprises: training a reconstruction network in advance by utilizing training data until the loss value of the total loss function of the reconstruction network meets a target condition to obtain the reconstruction network for hyperspectral image super-resolution reconstruction, wherein the reconstruction network consists of the spectrum retention network, the color image guide network and the space spectrum recovery network.
As an alternative embodiment of the present invention, the total loss function is represented by the following formula:
l_loss = α_1·l_L2 + α_2·l_sam + α_3·l_L1    (8)
wherein l_L2 represents the spatial content loss of the hyperspectral image, l_sam represents the spectral loss, l_L1 represents the color image content loss, and α_1, α_2 and α_3 are constants.
Illustratively, the loss function plays a very important role in improving the performance of the network. In order to obtain better spatial quality of the hyperspectral image without causing spectral distortion, a joint training scheme combining a reconstruction loss and a color image content loss is designed for the reconstruction network, as shown in fig. 2. The reconstruction loss computes the error between the hyperspectral image output by the network and the input label hyperspectral image to constrain the spectral and spatial consistency of the hyperspectral image; the color image content loss computes the error between the high-resolution color image output and input by the network, and its significance is to guide the network to learn the spatial information of the color image and recover the spatial detail texture information. The reconstruction loss function l_recon in the embodiment of the present application is given by the following formula:
l_recon = α_1·l_L2 + α_2·l_sam    (9)
wherein l_L2 is the spatial content loss of the hyperspectral image, l_sam is the spectral loss, and α_1 and α_2 are constants that balance the gap between the spatial content loss and the spectral loss, preventing them from differing too much. l_L2 is the L2 norm between the high-resolution hyperspectral image HR used as the label and the network output X, calculated as follows:
l_L2 = (1/(h·w·n)) · Σ_{x,y,i} (HR_{x,y,i} − X_{x,y,i})^2    (10)
wherein h, w and n respectively represent the width, height and channel number of the hyperspectral image. The L2 norm used as the spatial content loss enlarges the difference between the maximum error and the minimum error, is sensitive to outliers, and is favorable for recovering low-frequency information; moreover, the L2 norm is related to the Peak Signal-to-Noise Ratio (PSNR), so the PSNR can be improved in a targeted manner. In order to reduce spectral distortion, SAM is also introduced as a spectral loss function, and the network is trained by minimizing the spectral angle between the high-resolution hyperspectral image HR and the network output X. The spectral loss function l_sam is defined as follows:
l_sam = (1/(h·w)) · Σ_{x,y} arccos( <HR_{x,y}, X_{x,y}> / (||HR_{x,y}||_2 · ||X_{x,y}||_2) )    (11)
wherein HR_{x,y} represents the spectral vector of the high-resolution hyperspectral image HR at position (x, y), and X_{x,y} represents the spectral vector of the hyperspectral image reconstructed by the network at the same spatial position.
The color image guiding network aims to learn the spatial information of the high-resolution color image and recover as much detail information as possible from the low-resolution hyperspectral image, so a color image content loss function is designed to guide the optimization of the network. The L1 norm is selected as the color image content loss and is calculated as follows:
l_L1 = (1/(3·h·w)) · Σ_{x,y,j} | Z_{x,y,j} − X^rgb_{x,y,j} |    (12)
wherein Z is the high-resolution color image and X^rgb is the pseudo-color image formed from the reconstructed hyperspectral image, as described below.
the color image content loss is different from the hyperspectral image space content loss, firstly, three channels are selected from a reconstructed hyperspectral image to form a pseudo-color image, the three selected channels are channels with red, green and blue wave bands, then, an L1 norm between the formed pseudo-color image and a high-resolution color image Z is calculated, and finally, the average value of L1 norms of the three channels is used as the color image content loss. The L1 norm is used as color image content loss because the L2 norm can penalize large loss, but the L1 norm is less beneficial to the network to recover more detailed information when the small loss effect is less than that of L1, high frequency information is missing, and the correlation with the perceived image quality is poor.
Finally, the reconstruction loss and the color image content loss are combined as a total loss function of the training network:
l_loss = l_recon + α_3·l_L1 = α_1·l_L2 + α_2·l_sam + α_3·l_L1    (13)
In the designed total loss function, the hyperspectral image spatial content loss l_L2 plays an important role in recovering the low-frequency information of the image, the spectral loss l_sam reduces spectral distortion, and the color image content loss guides the network to recover more high-frequency detail information. In the experimental setup, α_1, α_2 and α_3 are set to 10, 0.01 and 10, respectively, based on experimental experience.
The embodiment of the invention also discloses a hyperspectral image super-resolution reconstruction device, as shown in fig. 8, the device comprises:
the spectral information feature extraction module 801 is configured to perform feature extraction on input hyperspectral image features by using a pre-constructed spectral preservation network to obtain spectral information features, where the spectral preservation network includes a plurality of cascaded spectral preservation modules, and each spectral preservation module integrates a first residual block and a channel attention mechanism;
a spatial information feature extraction module 802, configured to perform feature extraction on an input color image feature corresponding to a hyperspectral image feature by using a pre-constructed color image guidance network to obtain a spatial information feature, where the color image guidance network includes a plurality of cascaded color image guidance modules, and each color image guidance module integrates a second residual block and a spatial attention mechanism;
and a feature fusion module 803, configured to input the spectral information features and the spatial information features into a pre-constructed spatial spectrum recovery network for feature fusion to complete super-resolution reconstruction of a hyperspectral image, where the spatial spectrum recovery network integrates a channel attention mechanism and a spatial attention mechanism.
Compared with the prior art that hyper-resolution reconstruction of hyper-spectral images is realized by only directly extracting the spectral features of the hyper-spectral images by using a convolutional neural network, the hyper-spectral image super-resolution reconstruction device provided by the embodiment of the invention extracts the spectral information features by using the constructed spectral retention network and extracts the spatial information features of the color images corresponding to the hyper-spectral image features by using the color image guide network, and fuses the extracted spectral information features and the spatial information features to complete the hyper-spectral image super-resolution reconstruction, thereby improving the effect of the hyper-resolution reconstruction of the hyper-spectral images.
As an optional embodiment of the present invention, the apparatus further comprises:
the sampling module is used for up-sampling the acquired hyperspectral image so that the spatial resolution of the hyperspectral image reaches a preset resolution;
and the shallow spectral feature extraction module is used for extracting shallow spectral features from the up-sampled hyperspectral image by using the 3D convolutional layer, and inputting the shallow spectral features serving as the hyperspectral image features into the spectrum reservation network.
As an optional embodiment of the present invention, the apparatus further comprises:
the first determination module is used for determining whether a high-resolution color image in the same scene with the acquired hyperspectral image exists;
a second determining module, configured to, if the high-resolution color image exists, take an image feature of the high-resolution color image as a color image feature corresponding to the hyperspectral image feature;
and the third determining module is used for intercepting the characteristics of a channel where the red, green and blue wave bands are located from the acquired hyperspectral image as the color image characteristics corresponding to the hyperspectral image characteristics if the hyperspectral image does not exist.
As an optional embodiment of the present invention, the apparatus further comprises: a training module, configured to train the reconstruction network in advance by utilizing training data until the loss value of the total loss function of the reconstruction network meets a target condition, so as to obtain the reconstruction network for hyperspectral image super-resolution reconstruction, wherein the reconstruction network consists of the spectrum retention network, the color image guide network and the space spectrum recovery network.
As an alternative embodiment of the present invention, the total loss function is represented by the following formula:
l_loss = α_1·l_L2 + α_2·l_sam + α_3·l_L1
wherein l_L2 represents the spatial content loss of the hyperspectral image, l_sam represents the spectral loss, l_L1 represents the color image content loss, and α_1, α_2 and α_3 are constants.
An embodiment of the present invention further provides an electronic device, as shown in fig. 9, the electronic device may include a processor 901 and a memory 902, where the processor 901 and the memory 902 may be connected by a bus or in another manner, and fig. 9 takes the connection by the bus as an example.
Processor 901 may be a Central Processing Unit (CPU). The Processor 901 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, or combinations thereof.
The memory 902, which is a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the hyper-spectral image super-resolution reconstruction method in the embodiments of the present invention. The processor 901 executes various functional applications and data processing of the processor by running non-transitory software programs, instructions and modules stored in the memory 902, so as to implement the hyperspectral image super-resolution reconstruction method in the above method embodiment.
The memory 902 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor 901, and the like. Further, the memory 902 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 902 may optionally include memory located remotely from the processor 901, which may be connected to the processor 901 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 902 and when executed by the processor 901 perform a hyper-spectral image super-resolution reconstruction method as in the embodiment shown in fig. 1.
The details of the electronic device may be understood with reference to the corresponding related description and effects in the embodiment shown in fig. 1, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, abbreviated as HDD) or a Solid State Drive (SSD), etc.; the storage medium may also comprise a combination of memories of the kind described above.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. A hyperspectral image super-resolution reconstruction method, characterized by comprising:
performing feature extraction on input hyperspectral image features by using a pre-constructed spectrum retention network to obtain spectral information features, wherein the spectrum retention network comprises a plurality of cascaded spectrum retention modules, and each spectrum retention module integrates a first residual block and a channel attention mechanism;
performing feature extraction on input color image features corresponding to the hyperspectral image features by using a pre-constructed color image guidance network to obtain spatial information features, wherein the color image guidance network comprises a plurality of cascaded color image guidance modules, and each color image guidance module integrates a second residual block and a spatial attention mechanism; and
inputting the spectral information features and the spatial information features into a pre-constructed spatial-spectral recovery network for feature fusion to complete super-resolution reconstruction of the hyperspectral image, wherein the spatial-spectral recovery network integrates a channel attention mechanism and a spatial attention mechanism.
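For illustration only, the following is a minimal PyTorch sketch of the three-branch layout described in claim 1: a spectrum retention branch (residual blocks with channel attention), a color image guidance branch (residual blocks with spatial attention), and a fusion stage combining both attention mechanisms. The band count, feature width, kernel sizes, module counts and the 2D convolutional stems are assumptions, not the patent's specification (claim 2, for example, obtains the shallow spectral features with a 3D convolution).

```python
# Illustrative only: a compact PyTorch sketch of the three-branch layout in claim 1.
# Band count, feature width, kernel sizes and module counts are assumed values.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed form)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)          # re-weight feature channels


class SpatialAttention(nn.Module):
    """Single-map spatial attention over pooled channel statistics (assumed form)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class SpectrumRetentionModule(nn.Module):
    """First residual block followed by channel attention (claim 1)."""
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True),
                                  nn.Conv2d(c, c, 3, padding=1))
        self.ca = ChannelAttention(c)

    def forward(self, x):
        return x + self.ca(self.body(x))


class ColorGuidanceModule(nn.Module):
    """Second residual block followed by spatial attention (claim 1)."""
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True),
                                  nn.Conv2d(c, c, 3, padding=1))
        self.sa = SpatialAttention()

    def forward(self, x):
        return x + self.sa(self.body(x))


class ReconstructionNet(nn.Module):
    """Spectrum retention branch + color guidance branch + attention-based fusion."""
    def __init__(self, bands=31, feat=64, n_modules=4):
        super().__init__()
        self.hsi_head = nn.Conv2d(bands, feat, 3, padding=1)   # 2D stem for brevity
        self.rgb_head = nn.Conv2d(3, feat, 3, padding=1)
        self.spec_branch = nn.Sequential(*[SpectrumRetentionModule(feat) for _ in range(n_modules)])
        self.color_branch = nn.Sequential(*[ColorGuidanceModule(feat) for _ in range(n_modules)])
        self.fusion = nn.Sequential(                           # spatial-spectral recovery stage
            nn.Conv2d(2 * feat, feat, 3, padding=1),
            ChannelAttention(feat), SpatialAttention(),
            nn.Conv2d(feat, bands, 3, padding=1))

    def forward(self, hsi_up, rgb):                            # both inputs at the target resolution
        spec = self.spec_branch(self.hsi_head(hsi_up))         # spectral information features
        spat = self.color_branch(self.rgb_head(rgb))           # spatial information features
        return hsi_up + self.fusion(torch.cat([spec, spat], dim=1))


net = ReconstructionNet()
out = net(torch.rand(1, 31, 64, 64), torch.rand(1, 3, 64, 64))
print(out.shape)                                               # torch.Size([1, 31, 64, 64])
```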
2. The method according to claim 1, wherein, before performing feature extraction on the input hyperspectral image features by using the pre-constructed spectrum retention network to obtain the spectral information features, the method further comprises:
up-sampling an acquired hyperspectral image so that the spatial resolution of the hyperspectral image reaches a preset resolution; and
extracting shallow spectral features from the up-sampled hyperspectral image by using a 3D convolutional layer, and inputting the shallow spectral features into the spectrum retention network as the hyperspectral image features.
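As a hedged illustration of the preprocessing in claim 2, the sketch below up-samples a hyperspectral cube to a preset resolution and extracts shallow spectral features with a single 3D convolutional layer. The scale factor, kernel size and number of 3D feature maps are assumptions, and the 3D convolution weights would in practice be learned as part of the reconstruction network.

```python
# Illustrative sketch of the preprocessing in claim 2, assuming PyTorch. Scale factor,
# kernel size and feature counts are assumptions; the conv weights would be learned.
import torch
import torch.nn as nn
import torch.nn.functional as F

def shallow_spectral_features(hsi, scale=4, feat3d=16):
    """hsi: (B, bands, H, W) low-resolution hyperspectral cube."""
    # Up-sample spatially so the cube reaches the preset (target) resolution.
    hsi_up = F.interpolate(hsi, scale_factor=scale, mode="bicubic", align_corners=False)
    # Treat the band axis as depth and apply one 3D convolution for shallow spectral features.
    conv3d = nn.Conv3d(1, feat3d, kernel_size=(7, 3, 3), padding=(3, 1, 1))
    feats = conv3d(hsi_up.unsqueeze(1))          # (B, feat3d, bands, H', W')
    return hsi_up, feats

hsi = torch.rand(1, 31, 16, 16)                  # assumed 31-band low-resolution patch
hsi_up, feats = shallow_spectral_features(hsi)
print(hsi_up.shape, feats.shape)                 # (1, 31, 64, 64) and (1, 16, 31, 64, 64)
```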
3. The method according to claim 1, wherein, before performing feature extraction on the input color image features corresponding to the hyperspectral image features by using the pre-constructed color image guidance network to obtain the spatial information features, the method further comprises:
determining whether a high-resolution color image of the same scene as the acquired hyperspectral image exists;
if such a high-resolution color image exists, taking image features of the high-resolution color image as the color image features corresponding to the hyperspectral image features; and
if no such high-resolution color image exists, extracting the features of the channels in which the red, green and blue bands are located from the acquired hyperspectral image as the color image features corresponding to the hyperspectral image features.
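The fallback in claim 3 can be pictured with the short sketch below, which selects the hyperspectral channels closest to nominal red, green and blue wavelengths as a pseudo color image. The wavelength grid and the target wavelengths (650/550/450 nm) are assumptions used only for illustration.

```python
# Illustrative sketch of the fallback in claim 3: when no registered high-resolution
# color image is available, take the hyperspectral channels nearest the red, green and
# blue bands as a pseudo color image. Wavelength grid and targets are assumptions.
import numpy as np

def rgb_from_hsi(hsi, wavelengths, targets=(650.0, 550.0, 450.0)):
    """hsi: (bands, H, W); wavelengths: centre wavelength in nm of each band."""
    idx = [int(np.argmin(np.abs(np.asarray(wavelengths) - t))) for t in targets]
    return hsi[idx]                              # (3, H, W) pseudo-RGB guidance image

hsi = np.random.rand(31, 64, 64)
wavelengths = np.linspace(400, 700, 31)          # assumed 400-700 nm sampling
print(rgb_from_hsi(hsi, wavelengths).shape)      # (3, 64, 64)
```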
4. The method according to any one of claims 1-3, further comprising:
training a reconstruction network in advance by using training data until the loss value of a total loss function of the reconstruction network meets a target condition, so as to obtain the reconstruction network for hyperspectral image super-resolution reconstruction, wherein the reconstruction network consists of the spectrum retention network, the color image guidance network and the spatial-spectral recovery network.
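A minimal sketch, assuming PyTorch, of the pre-training described in claim 4: the reconstruction network is optimised on training data until the loss value meets a target condition. The optimiser, learning rate, stopping threshold and data layout are assumptions; `net` and `criterion` stand for the reconstruction network and the total loss function (the color-image term of the full loss in claim 5 is omitted here for brevity).

```python
# Illustrative sketch of the pre-training in claim 4, assuming PyTorch. Optimiser,
# learning rate, threshold and data layout are assumptions, not the patent's recipe.
import torch

def pretrain(net, loader, criterion, target_loss=1e-3, max_epochs=100, lr=1e-4):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for hsi_up, rgb, hsi_gt in loader:           # one batch of training data
            loss = criterion(net(hsi_up, rgb), hsi_gt)
            opt.zero_grad()
            loss.backward()
            opt.step()
            epoch_loss += loss.item()
        if epoch_loss / len(loader) < target_loss:   # loss value meets the target condition
            break
    return net
```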
5. The method of claim 4, wherein the total loss function is expressed as:
l_loss = α_1·l_L2 + α_2·l_sam + α_3·l_L1
where l_L2 denotes the spatial content loss of the hyperspectral image, l_sam denotes the spectral loss, l_L1 denotes the color image content loss, and α_1, α_2 and α_3 are constants.
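The total loss in claim 5 can be read as a weighted sum of an L2 spatial-content term, a spectral term and an L1 color-image term. Below is a minimal sketch assuming PyTorch; interpreting l_sam as a spectral-angle (SAM) term, the weight values and the source of the predicted color image are assumptions.

```python
# Illustrative sketch of the total loss in claim 5, assuming PyTorch. The weights and
# the choice of reference images for each term are assumptions.
import torch
import torch.nn.functional as F

def sam_loss(pred, target, eps=1e-8):
    """Mean spectral angle between predicted and reference per-pixel spectra."""
    cos = (pred * target).sum(dim=1) / (pred.norm(dim=1) * target.norm(dim=1) + eps)
    return torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7)).mean()

def total_loss(hsi_pred, hsi_gt, rgb_pred, rgb_gt, a1=1.0, a2=0.1, a3=0.1):
    l_l2 = F.mse_loss(hsi_pred, hsi_gt)          # spatial content loss of the hyperspectral image
    l_sam = sam_loss(hsi_pred, hsi_gt)           # spectral loss
    l_l1 = F.l1_loss(rgb_pred, rgb_gt)           # color image content loss
    return a1 * l_l2 + a2 * l_sam + a3 * l_l1
```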
6. A hyperspectral image super-resolution reconstruction device, characterized by comprising:
a spectral information feature extraction module, configured to perform feature extraction on input hyperspectral image features by using a pre-constructed spectrum retention network to obtain spectral information features, wherein the spectrum retention network comprises a plurality of cascaded spectrum retention modules, and each spectrum retention module integrates a first residual block and a channel attention mechanism;
a spatial information feature extraction module, configured to perform feature extraction on input color image features corresponding to the hyperspectral image features by using a pre-constructed color image guidance network to obtain spatial information features, wherein the color image guidance network comprises a plurality of cascaded color image guidance modules, and each color image guidance module integrates a second residual block and a spatial attention mechanism; and
a feature fusion module, configured to input the spectral information features and the spatial information features into a pre-constructed spatial-spectral recovery network for feature fusion to complete super-resolution reconstruction of the hyperspectral image, wherein the spatial-spectral recovery network integrates a channel attention mechanism and a spatial attention mechanism.
7. The apparatus of claim 6, further comprising:
a sampling module, configured to up-sample the acquired hyperspectral image so that the spatial resolution of the hyperspectral image reaches a preset resolution; and
a shallow spectral feature extraction module, configured to extract shallow spectral features from the up-sampled hyperspectral image by using a 3D convolutional layer and to input the shallow spectral features into the spectrum retention network as the hyperspectral image features.
8. The apparatus of claim 6, further comprising:
a first determining module, configured to determine whether a high-resolution color image of the same scene as the acquired hyperspectral image exists;
a second determining module, configured to, if the high-resolution color image exists, take image features of the high-resolution color image as the color image features corresponding to the hyperspectral image features; and
a third determining module, configured to, if no such high-resolution color image exists, extract the features of the channels in which the red, green and blue bands are located from the acquired hyperspectral image as the color image features corresponding to the hyperspectral image features.
9. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the steps of the hyperspectral image super-resolution reconstruction method according to any one of claims 1 to 5.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the hyperspectral image super-resolution reconstruction method according to any one of claims 1 to 5.
CN202110445524.1A 2021-04-23 2021-04-23 Hyperspectral image super-resolution reconstruction method and device and electronic equipment Pending CN113139902A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110445524.1A CN113139902A (en) 2021-04-23 2021-04-23 Hyperspectral image super-resolution reconstruction method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110445524.1A CN113139902A (en) 2021-04-23 2021-04-23 Hyperspectral image super-resolution reconstruction method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113139902A true CN113139902A (en) 2021-07-20

Family

ID=76811832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110445524.1A Pending CN113139902A (en) 2021-04-23 2021-04-23 Hyperspectral image super-resolution reconstruction method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113139902A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109360147A (en) * 2018-09-03 2019-02-19 浙江大学 Multispectral image super resolution ratio reconstruction method based on Color Image Fusion
WO2020160485A1 (en) * 2019-01-31 2020-08-06 Alfred E. Mann Institute For Biomedical Engineering At The University Of Southern California A hyperspectral imaging system
CN110675406A (en) * 2019-09-16 2020-01-10 南京信息工程大学 CT image kidney segmentation algorithm based on residual double-attention depth network
CN110930315A (en) * 2019-10-23 2020-03-27 西北工业大学 Multispectral image panchromatic sharpening method based on dual-channel convolution network and hierarchical CLSTM
CN111274869A (en) * 2020-01-07 2020-06-12 中国地质大学(武汉) Method for classifying hyperspectral images based on parallel attention mechanism residual error network
CN111445390A (en) * 2020-02-28 2020-07-24 天津大学 Wide residual attention-based three-dimensional medical image super-resolution reconstruction method
CN112184553A (en) * 2020-09-25 2021-01-05 西北工业大学 Hyperspectral image super-resolution method based on depth prior information
CN112488924A (en) * 2020-12-21 2021-03-12 深圳大学 Image super-resolution model training method, reconstruction method and device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JIANG HE ET AL.: "Spectral Response Function-Guided Deep Optimization-Driven Network for Spectral Super-Resolution", IEEE Transactions on Neural Networks and Learning Systems *
SANGHYUN WOO ET AL.: "CBAM: Convolutional Block Attention Module", European Conference on Computer Vision *
YANSHAN LI ET AL.: "Extreme-constrained spatial-spectral corner detector for image-level hyperspectral image classification", Elsevier: Pattern Recognition Letters *
YANG YONG ET AL.: "Super-resolution reconstruction algorithm based on progressive feature enhancement network", Journal of Signal Processing *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113888491A (en) * 2021-09-27 2022-01-04 长沙理工大学 Multilevel hyperspectral image progressive and hyper-resolution method and system based on non-local features
CN113744136A (en) * 2021-09-30 2021-12-03 华中科技大学 Image super-resolution reconstruction method and system based on channel constraint multi-feature fusion
CN113947643A (en) * 2021-10-22 2022-01-18 长安大学 Method, device and equipment for reconstructing RGB image into hyperspectral image and storage medium
CN115700727A (en) * 2023-01-03 2023-02-07 湖南大学 Spectral super-resolution reconstruction method and system based on self-attention mechanism
CN116452420A (en) * 2023-04-11 2023-07-18 南京审计大学 Hyper-spectral image super-resolution method based on fusion of Transformer and CNN (CNN) group
CN116452420B (en) * 2023-04-11 2024-02-02 南京审计大学 Hyper-spectral image super-resolution method based on fusion of Transformer and CNN (CNN) group
CN117132473A (en) * 2023-10-20 2023-11-28 中国海洋大学 Underwater rare earth spectrum detection method and spectrum super-resolution reconstruction model building method thereof
CN117132473B (en) * 2023-10-20 2024-01-23 中国海洋大学 Underwater rare earth spectrum detection method and spectrum super-resolution reconstruction model building method thereof
CN117421671A (en) * 2023-12-18 2024-01-19 南开大学 Frequency self-adaptive static heterogeneous graph node classification method for quote network
CN117421671B (en) * 2023-12-18 2024-03-05 南开大学 Frequency self-adaptive static heterogeneous graph node classification method for quote network

Similar Documents

Publication Publication Date Title
CN113139902A (en) Hyperspectral image super-resolution reconstruction method and device and electronic equipment
WO2021184891A1 (en) Remotely-sensed image-based terrain classification method, and system
Shao et al. Remote sensing image fusion with deep convolutional neural network
Zhou et al. Pyramid fully convolutional network for hyperspectral and multispectral image fusion
Zhang et al. Pan-sharpening using an efficient bidirectional pyramid network
EP3948764B1 (en) Method and apparatus for training neural network model for enhancing image detail
CN109886870B (en) Remote sensing image fusion method based on dual-channel neural network
Umer et al. Deep generative adversarial residual convolutional networks for real-world super-resolution
CN111127374B (en) Pan-sharing method based on multi-scale dense network
CN111369440B (en) Model training and image super-resolution processing method, device, terminal and storage medium
CN108960345A (en) A kind of fusion method of remote sensing images, system and associated component
CN110415199B (en) Multispectral remote sensing image fusion method and device based on residual learning
CN112819737B (en) Remote sensing image fusion method of multi-scale attention depth convolution network based on 3D convolution
Li et al. Deep learning methods in real-time image super-resolution: a survey
Hu et al. Pan-sharpening via multiscale dynamic convolutional neural network
CN111179177A (en) Image reconstruction model training method, image reconstruction method, device and medium
An et al. TR-MISR: Multiimage super-resolution based on feature fusion with transformers
Cheng et al. Zero-shot image super-resolution with depth guided internal degradation learning
CN108780570A (en) Use the system and method for the image super-resolution of iteration collaboration filtering
CN105447840B (en) The image super-resolution method returned based on active sampling with Gaussian process
KR20200140713A (en) Method and apparatus for training neural network model for enhancing image detail
CN114581347B (en) Optical remote sensing spatial spectrum fusion method, device, equipment and medium without reference image
CN110059728A (en) RGB-D image vision conspicuousness detection method based on attention model
Beaulieu et al. Deep image-to-image transfer applied to resolution enhancement of sentinel-2 images
Guan et al. Srdgan: learning the noise prior for super resolution with dual generative adversarial networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210720)