CN113888491A - Multilevel hyperspectral image progressive super-resolution method and system based on non-local features - Google Patents

Multilevel hyperspectral image progressive super-resolution method and system based on non-local features

Info

Publication number
CN113888491A
Authority
CN
China
Prior art keywords
local
feature
image
module
feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111136667.0A
Other languages
Chinese (zh)
Other versions
CN113888491B (en)
Inventor
胡建文 (Hu Jianwen)
刘耀庭 (Liu Yaoting)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha University of Science and Technology
Original Assignee
Changsha University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha University of Science and Technology filed Critical Changsha University of Science and Technology
Priority to CN202111136667.0A priority Critical patent/CN113888491B/en
Publication of CN113888491A publication Critical patent/CN113888491A/en
Application granted granted Critical
Publication of CN113888491B publication Critical patent/CN113888491B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/92Dynamic range modification of images or parts thereof based on global image properties
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10036Multispectral image; Hyperspectral image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multilevel hyperspectral image progressive super-resolution method and system based on non-local features. A low-resolution image is input into a pre-trained multilevel progressive network comprising a preliminary feature extraction module, a final feature extraction module, and a plurality of feature extraction stage modules based on local and non-local features arranged between them. In each feature extraction stage module, the input feature map, after feature extraction at that stage, is added to the feature map output by the preliminary convolution; a super-resolved reconstructed image is then obtained by transposed-convolution upsampling, and the result is output to the next feature extraction stage module through strided-convolution downsampling. The method jointly extracts local and global features at every stage, can effectively improve the spatial resolution of the reconstructed hyperspectral image, and achieves higher reconstruction quality.

Description

Multilevel hyperspectral image progressive super-resolution method and system based on non-local features
Technical Field
The invention relates to hyperspectral image processing technology, and in particular to a multilevel hyperspectral image progressive super-resolution method and system based on non-local features.
Background
Hyperspectral images contain rich spectral information and are widely applied in fields such as internal and external quality inspection of agricultural products, mineral exploration, and vegetation monitoring. However, owing to hardware limitations of hyperspectral imaging sensors, the spatial resolution of hyperspectral images is typically low, which seriously hampers practical applications such as hyperspectral image classification and target detection. Reconstructing a high-resolution hyperspectral image from a low-resolution one is therefore important. Compared with RGB images, hyperspectral images contain many more bands, and strong correlations exist among the bands, so hyperspectral image super-resolution is a more difficult task than super-resolution of natural (RGB) images. Many super-resolution methods fuse a high-resolution auxiliary image (such as a panchromatic, RGB, or multispectral image) with the low-resolution hyperspectral image to obtain a high-resolution hyperspectral image, and such fusion can achieve good performance. In practice, however, these fusion methods are limited by the frequent absence of an auxiliary image. Super-resolution from a single hyperspectral image is therefore more widely and flexibly applicable in practice.
Single hyperspectral image super-resolution reconstructs a high-resolution image from a single low-resolution image. Traditional single-image super-resolution methods fall into two main categories. The first is interpolation: interpolation-based methods are simple and fast, but the reconstructed images are generally blurred. The second uses effective spatial or spectral statistical distributions as priors, such as total variation or sparse regularization, to describe edges and textures; however, these methods need a good prior to perform well, and an effective prior is often difficult to obtain. In recent years, deep convolutional neural networks (CNNs) have been widely applied across image processing and have achieved good results, and several CNN-based hyperspectral super-resolution methods, such as 3D-FCNN and MCNet, have been proposed; these demonstrate the advantage of end-to-end CNN mapping over traditional methods in image super-resolution. Most CNN-based hyperspectral super-resolution methods, however, are poor at extracting non-local features. How to jointly extract local and global features of an image and reconstruct better spatial detail in a hyperspectral image thus remains an open problem.
Disclosure of Invention
The technical problems to be solved by the invention are as follows: aiming at the problems in the prior art, the invention provides a multilevel hyperspectral image progressive super-resolution method and a multilevel hyperspectral image progressive super-resolution system based on non-local features.
In order to solve the technical problems, the invention adopts the technical scheme that:
a multilevel hyperspectral image progressive hyper-segmentation method based on non-local features comprises the steps that a low-resolution image is input into a pre-trained multilevel progressive network to obtain a reconstructed high-resolution image, the multilevel progressive network comprises a preliminary feature extraction module, a final feature extraction module and a plurality of feature extraction level modules based on local and non-local features, the feature extraction level modules are sequentially arranged between the preliminary feature extraction module and the final feature extraction module, a feature graph input by each feature extraction level module after the feature extraction at the current level is added with a feature graph output by preliminary convolution, a super-segmented reconstructed image is obtained by means of transposition convolution and up-sampling, and then the super-segmented reconstructed image is output to a next feature extraction level module through step-by-step convolution and down-sampling.
Optionally, the feature extraction stage module comprises a plurality of local and non-local feature extraction modules and two groups of asymmetric convolutions. The local and non-local feature extraction modules are densely connected, so that the output of each local and non-local feature extraction module is concatenated with the original input of the feature extraction stage module and the outputs of all preceding local and non-local feature extraction modules; the concatenated feature map, after dimension reduction by a 1 × 1 × 1 convolution, serves as the input of the next local and non-local feature extraction module or of the two groups of asymmetric convolutions. The two groups of asymmetric convolutions are cascaded in sequence, and each group comprises a 1 × 1 × 3 convolution layer and a 3 × 3 × 1 convolution layer, each followed by a ReLU activation function layer.
Optionally, the local and non-local feature extraction module comprises a Ghost module, a non-local channel attention module, and another Ghost module connected in sequence, wherein the Ghost modules are configured to extract local features, the non-local channel attention module is configured to extract non-local features, and the output of the output-side Ghost module is connected to the original input of the input-side Ghost module through a residual connection.
Optionally, the Ghost module comprises a 1 × 1 × 1 three-dimensional convolution and two 3 × 3 × 3 convolutions; the input feature maps are reduced in the channel dimension by the 1 × 1 × 1 three-dimensional convolution to obtain half as many feature maps as were originally input; the reduced feature maps undergo depth-wise convolution, in which they are grouped along the channel dimension and features are extracted from each group by a 3 × 3 × 3 convolution; and the resulting feature maps are concatenated with the reduced feature maps in the channel dimension to obtain a final output feature map equal to the input feature map in every dimension.
Optionally, the non-local channel attention module comprises:
a spatial-spectral gradient calculation module for computing a gradient feature map I_G along the width, height, and spectral dimensions;
three 1 × 1 × 1 three-dimensional convolutions, which apply linear operations to I_G to obtain a feature map Q, a feature map K, and a feature map V respectively, wherein the feature maps K and V are reshaped into BHW × C matrices and the feature map Q is reshaped and transposed into a C × BHW matrix, H being the height of the reconstructed high-resolution image, W its width, B its number of spectral bands, and C its number of channels;
an attention module, which applies a softmax operation to the feature map K along its columns to obtain a matrix KS, multiplies the feature map Q by the matrix KS to obtain a C × C matrix, applies a softmax operation to the C × C matrix along its rows to obtain an attention map, and matrix-multiplies the feature map V by the attention map to obtain the global edge features of the feature map; the global edge features are added to the gradient feature map I_G through a residual connection, further enhancing the edge information of the feature map.
Optionally, the preliminary feature extraction module and the final feature extraction module each comprise a 3 × 3 × 3 convolution layer and a ReLU activation function layer connected to each other.
Optionally, the method further includes the step of training the multi-stage progressive network in advance:
S1) selecting part of the image regions from hyperspectral images, rotating them and cutting them into blocks to obtain an initial training set, each hyperspectral image block in the initial training set having size H × W × B; applying Gaussian blur and downsampling to the high-resolution image blocks in the initial training set to obtain low-resolution images at different downsampling levels, and forming training and test sets for different super-resolution factors from the initial training set and the downsampled low-resolution hyperspectral images;
s2) constructing the multistage progressive network;
S3) inputting the training set into the multilevel progressive network for training, optimizing the network with the MSE loss and the spectral gradient loss simultaneously, iteratively updating the weights of all layers in the multilevel progressive network with an Adam optimizer, and saving the weights of each layer of the trained multilevel progressive network;
S4) testing the multilevel progressive network with the test set, taking the low-resolution hyperspectral images in the test set as network inputs to obtain hyperspectral super-resolution images, and evaluating the visual quality and full-reference indices of the output hyperspectral super-resolution images; if the evaluation meets the requirements, the multilevel progressive network is judged to be fully trained; otherwise, execution jumps back to step S3) to continue training the multilevel progressive network.
Optionally, when the network is optimized with the MSE loss and the spectral gradient loss in step S3), the loss function is expressed as:
Loss_total = Σ_{n=1}^{N} [ L_MSE(I_n, I_HR) + L_G(I_n, I_HR) ]
L_MSE(I_n, I_HR) = (1/(HWB)) Σ_{i,j,k} [I_n(i,j,k) − I_HR(i,j,k)]², L_G(I_n, I_HR) = (1/(HWB)) Σ_{i,j,k} [I_G^n(i,j,k) − I_G^HR(i,j,k)]²
In the above formulas, Loss_total denotes the loss function; N is the number of feature extraction stage modules in the multilevel progressive network; L_MSE is the MSE loss and L_G the spectral gradient loss; I_HR denotes the reference image and I_n the reconstructed image output by the nth feature extraction stage module; I_HR(i,j,k) and I_n(i,j,k) are the pixels with coordinates (i,j,k) in I_HR and I_n, respectively; I_G^HR is the gradient of the reference image in the spectral dimension, and I_G^n the gradient of the nth-stage reconstructed image in the spectral dimension.
In addition, the invention provides a multilevel hyperspectral image progressive super-resolution system based on local and non-local features, comprising a microprocessor and a memory connected with each other, wherein the microprocessor is programmed or configured to execute the steps of the multilevel hyperspectral image progressive super-resolution method based on non-local features.
Furthermore, the invention provides a computer-readable storage medium storing a computer program programmed or configured to execute the multilevel hyperspectral image progressive super-resolution method based on non-local features.
Compared with the prior art, the invention has the following advantages: a low-resolution image is input into a pre-trained multilevel progressive network to obtain a reconstructed high-resolution image, the network comprising a preliminary feature extraction module, a final feature extraction module, and a plurality of feature extraction stage modules based on local and non-local features arranged in sequence between them; in each feature extraction stage module, the input feature map, after feature extraction at that stage, is added to the feature map output by the preliminary convolution, a super-resolved reconstructed image is obtained by transposed-convolution upsampling, and the result is output to the next feature extraction stage module through strided-convolution downsampling. The method jointly extracts local and global features at every stage, effectively improves the spatial resolution of the reconstructed hyperspectral image, and achieves higher reconstruction quality.
Drawings
Fig. 1 is a schematic structural diagram of a multi-stage progressive network according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a feature extraction stage module in an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a Ghost module in the embodiment of the present invention.
FIG. 4 is a schematic structural diagram of a non-local channel attention module according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a training process of a multi-stage progressive network according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of the loss function principle of the multi-stage progressive network in the embodiment of the present invention.
FIG. 7 shows hyperspectral image super-resolution simulation results of each method on the Chikusei data set at a 4-fold sampling level in the embodiment of the invention.
FIG. 8 shows 4-fold super-resolution results on a real hyperspectral image from the Chikusei data set in the embodiment of the invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings and experiments.
As shown in fig. 1, the multilevel hyperspectral image progressive super-resolution method based on non-local features of this embodiment comprises inputting a low-resolution image into a pre-trained multilevel progressive network to obtain a reconstructed high-resolution image. The multilevel progressive network comprises a preliminary feature extraction module, a final feature extraction module, and a plurality of feature extraction stage modules based on local and non-local features arranged in sequence between them. In each feature extraction stage module, the input feature map, after feature extraction at that stage, is added to the feature map output by the preliminary convolution; a super-resolved reconstruction is obtained by transposed-convolution upsampling, and the result is then passed to the next feature extraction stage module through strided-convolution downsampling.
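To make the topology of fig. 1 concrete, the following PyTorch sketch mirrors the data flow just described. It is an illustration only: the stage body is a placeholder for the module of fig. 2, the reconstruction head is shared across stages for brevity, and the channel count, stage count, and scale factor are assumed values not fixed by this description; the tensor layout (N, C, B, H, W) treats the spectral bands as the depth axis of the 3-D convolutions.

```python
import torch
import torch.nn as nn

class StageBody(nn.Module):
    """Placeholder for one local/non-local feature extraction stage (fig. 2)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.body(x)

class MultilevelProgressiveNet(nn.Module):
    def __init__(self, in_ch=1, channels=32, n_stages=3, scale=4):
        super().__init__()
        # preliminary and final feature extraction: 3x3x3 convolution + ReLU
        self.head = nn.Sequential(
            nn.Conv3d(in_ch, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.stages = nn.ModuleList([StageBody(channels) for _ in range(n_stages)])
        # transposed convolution upsamples the two spatial axes only
        self.up = nn.ConvTranspose3d(channels, channels,
                                     kernel_size=(1, scale, scale),
                                     stride=(1, scale, scale))
        # strided convolution downsamples back for the next stage
        self.down = nn.Conv3d(channels, channels,
                              kernel_size=(1, scale, scale),
                              stride=(1, scale, scale))
        self.tail = nn.Sequential(
            nn.Conv3d(channels, in_ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):                       # x: (N, 1, B, h, w) low-resolution input
        shallow = self.head(x)                  # preliminary convolution output
        feat, outputs = shallow, []
        for stage in self.stages:
            feat = stage(feat) + shallow        # add the preliminary feature map
            hr_feat = self.up(feat)             # transposed-convolution upsampling
            outputs.append(self.tail(hr_feat))  # per-stage supervised reconstruction
            feat = self.down(hr_feat)           # strided-convolution downsampling
        return outputs                          # outputs[-1] is the final reconstruction
```

Each element of `outputs` corresponds to one reconstructed image I_n in the loss formulas given later in this description.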
As shown in fig. 2, the feature extraction stage module of this embodiment comprises a plurality of local and non-local feature extraction modules and two groups of asymmetric convolutions. The local and non-local feature extraction modules are densely connected (so that feature information between modules can be fully utilized, which helps eliminate gradient vanishing and strengthens forward propagation): the output of each local and non-local feature extraction module is concatenated with the original input of the feature extraction stage module and the outputs of all preceding local and non-local feature extraction modules, and the concatenated feature map is reduced in dimension by a 1 × 1 × 1 convolution (keeping the channel dimension consistent within the module) before serving as the input of the next local and non-local feature extraction module or of the two groups of asymmetric convolutions. The two groups of asymmetric convolutions are cascaded in sequence; each group comprises a 1 × 1 × 3 convolution layer and a 3 × 3 × 1 convolution layer, each followed by a ReLU activation function layer. In this embodiment, the two groups of asymmetric convolutions replace an ordinary three-dimensional convolution, which effectively reduces the parameter count without sacrificing feature extraction performance. The ReLU activation function after each convolution in the two groups of asymmetric convolutions accelerates convergence and prevents gradient vanishing and explosion.
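A sketch of these stage-module internals follows, assuming (consistently with the 1 × 1 × 1 dimension-reduction convolution) that dense connection means channel-wise concatenation. With tensors laid out (N, C, B, H, W), the 1 × 1 × 3 kernel of the text (width × height × spectrum) appears here as (3, 1, 1) and the 3 × 3 × 1 kernel as (1, 3, 3); `block_cls` stands in for the local and non-local feature extraction module described next.

```python
import torch
import torch.nn as nn

class AsymmetricConvGroup(nn.Module):
    """One group of asymmetric convolutions standing in for a full 3x3x3:
    a spectral 1x1x3 convolution and a spatial 3x3x1 convolution, each
    followed by a ReLU activation."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=(3, 1, 1), padding=(1, 0, 0)),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
            nn.ReLU(inplace=True))

    def forward(self, x):
        return self.body(x)

class FeatureExtractionStage(nn.Module):
    """Densely connected local/non-local blocks with 1x1x1 channel
    reduction, followed by two cascaded asymmetric convolution groups."""
    def __init__(self, channels, n_blocks, block_cls):
        super().__init__()
        self.blocks = nn.ModuleList([block_cls(channels) for _ in range(n_blocks)])
        # one 1x1x1 reduction after every dense concatenation
        self.reduces = nn.ModuleList([
            nn.Conv3d(channels * (i + 2), channels, kernel_size=1)
            for i in range(n_blocks)])
        self.tail = nn.Sequential(AsymmetricConvGroup(channels),
                                  AsymmetricConvGroup(channels))

    def forward(self, x):
        feats, cur = [x], x
        for block, reduce in zip(self.blocks, self.reduces):
            feats.append(block(cur))
            cur = reduce(torch.cat(feats, dim=1))  # dense concat + 1x1x1 reduction
        return self.tail(cur)
```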
As shown in fig. 2, the local and non-local feature extraction module of this embodiment comprises a Ghost module, a non-local channel attention module, and another Ghost module connected in sequence. The Ghost modules extract local features, the non-local channel attention module extracts non-local features, and the output of the output-side Ghost module is connected to the original input of the input-side Ghost module through a residual connection. Connecting the two Ghost modules and the non-local channel attention module through a residual connection yields a local and non-local feature extraction module that effectively extracts both local and global feature information of the feature map.
As shown in fig. 3, the Ghost module in this embodiment comprises a 1 × 1 × 1 three-dimensional convolution and two 3 × 3 × 3 convolutions. The input feature maps are reduced in the channel dimension by the 1 × 1 × 1 three-dimensional convolution, yielding half as many feature maps as were originally input. The reduced feature maps then undergo depth-wise convolution: they are grouped along the channel dimension and features are extracted from each group by a 3 × 3 × 3 convolution. The resulting feature maps are concatenated with the reduced feature maps in the channel dimension to give the final output feature map, which equals the input feature map in every dimension. This structure effectively exploits the redundancy of feature maps in the convolution process, reduces the large parameter count of ordinary convolution, and makes full use of the feature map information.
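A minimal sketch of this Ghost module, assuming an even channel count and taking the grouping to be fully depth-wise (one 3 × 3 × 3 filter per channel group), might read:

```python
import torch
import torch.nn as nn

class GhostModule3D(nn.Module):
    """Ghost module of fig. 3: a 1x1x1 convolution halves the channels, a
    grouped (depth-wise) 3x3x3 convolution generates further features from
    the reduced maps, and channel-wise concatenation restores the input
    shape."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.primary = nn.Conv3d(channels, half, kernel_size=1)  # channel reduction
        self.cheap = nn.Conv3d(half, half, kernel_size=3, padding=1,
                               groups=half)                      # depth-wise 3x3x3

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)  # same shape as the input x
```

In the local and non-local feature extraction module, two such Ghost modules are wrapped around the non-local channel attention module sketched further below, with an outer residual connection.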
As shown in fig. 4, the non-local channel attention module of the present embodiment includes:
a spatial-spectral gradient calculation module for computing a gradient feature map I_G along the width, height, and spectral dimensions;
three 1 × 1 × 1 three-dimensional convolutions, which apply linear operations to I_G to obtain a feature map Q, a feature map K, and a feature map V respectively, wherein the feature maps K and V are reshaped into BHW × C matrices and the feature map Q is reshaped and transposed into a C × BHW matrix, H being the height of the reconstructed high-resolution image, W its width, B its number of spectral bands, and C its number of channels;
an attention module, which applies a softmax operation to the feature map K along its columns to obtain a matrix KS, multiplies the feature map Q by the matrix KS to obtain a C × C matrix, applies a softmax operation to the C × C matrix along its rows to obtain an attention map, and matrix-multiplies the feature map V by the attention map to obtain the global edge features of the feature map; the global edge features are added to the gradient feature map I_G through a residual connection, further enhancing the edge information of the feature map.
In this embodiment, the spatial-spectral gradient calculation module computes the gradient feature map I_G along the width, height, and spectral dimensions of the feature map. Specifically, differences between adjacent values along each of the three dimensions are taken, their absolute values are computed, and the three are averaged to give the gradient feature map I_G, whose functional expression is:
I_G(i,j,k) = [ |I(i,j,k) − I(i−1,j,k)| + |I(i,j,k) − I(i,j−1,k)| + |I(i,j,k) − I(i,j,k−1)| ] / 3
in the above formula, IG(I, j, k) is the gradient at I (I, j, k), I (I, j, k) is the coordinate of element (I, j, k) in feature map I, I (I-1, j, k) is the coordinate of element (I-1, j, k) in feature map I, I (I, j-1, k) is the coordinate of element (I, j-1, k) in feature map I, I (I, j, k-1) is the coordinate of element (I, j, k-1) in feature map I, all I, j, k-1 are the coordinates of element (I, j, k-1) in feature map I, all I, j, k-1G(I, j, k) form a gradient profile IG
In this embodiment, the softmax operation applied along the rows of the C × C matrix yields the attention map according to:
M_ij = exp(Q_i · KS_j) / Σ_{j′=1}^{C} exp(Q_i · KS_j′)
In the above formula, M_ij is the element with coordinates (i,j) in the attention map M, Q_i is the ith row of the feature map Q, and KS_j is the jth column of the matrix KS.
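Putting the pieces together, a minimal sketch of the non-local channel attention module might read as follows (layout (N, C, B, H, W), batching handled with torch.bmm); it reuses the spatial_spectral_gradient sketch above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalChannelAttention(nn.Module):
    """Non-local channel attention of fig. 4: Q, K, V come from three
    1x1x1 convolutions on the gradient feature map I_G; a C x C channel
    attention map re-weights V; a residual connection adds I_G back to
    enhance the edge information."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv3d(channels, channels, kernel_size=1)
        self.k = nn.Conv3d(channels, channels, kernel_size=1)
        self.v = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, x):
        ig = spatial_spectral_gradient(x)                  # gradient feature map I_G
        n, c, b, h, w = ig.shape
        q = self.q(ig).reshape(n, c, -1)                   # (N, C, BHW)
        k = self.k(ig).reshape(n, c, -1).transpose(1, 2)   # (N, BHW, C)
        v = self.v(ig).reshape(n, c, -1).transpose(1, 2)   # (N, BHW, C)
        ks = F.softmax(k, dim=1)                           # softmax along columns of K
        m = F.softmax(torch.bmm(q, ks), dim=2)             # (N, C, C) attention map M
        out = torch.bmm(v, m)                              # (N, BHW, C) global edge features
        out = out.transpose(1, 2).reshape(n, c, b, h, w)
        return out + ig                                    # residual addition of I_G
```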
As shown in fig. 1, the preliminary feature extraction module and the final feature extraction module of the present embodiment each include a 3 × 3 × 3 convolution layer and a ReLU activation function layer connected to each other.
As shown in fig. 5, the embodiment further includes a step of training the multi-stage progressive network in advance:
S1) selecting part of the image regions from hyperspectral images, rotating them and cutting them into blocks to obtain an initial training set, each hyperspectral image block in the initial training set having size H × W × B; applying Gaussian blur and downsampling to the high-resolution image blocks in the initial training set to obtain low-resolution images at different downsampling levels, and forming training and test sets for different super-resolution factors from the initial training set and the downsampled low-resolution hyperspectral images;
s2) constructing the multistage progressive network;
S3) inputting the training set into the multilevel progressive network for training, optimizing the network with the MSE loss and the spectral gradient loss simultaneously, iteratively updating the weights of all layers in the multilevel progressive network with an Adam optimizer (a minimal sketch follows step S4), and saving the weights of each layer of the trained multilevel progressive network;
S4) testing the multilevel progressive network with the test set, taking the low-resolution hyperspectral images in the test set as network inputs to obtain hyperspectral super-resolution images, and evaluating the visual quality and full-reference indices of the output hyperspectral super-resolution images; if the evaluation meets the requirements, the multilevel progressive network is judged to be fully trained; otherwise, execution jumps back to step S3) to continue training the multilevel progressive network.
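The following sketch illustrates steps S1) and S3): generating low-resolution inputs by Gaussian blur and decimation, then one Adam training iteration. The kernel size, standard deviation, learning rate, and the loader of high-resolution patches are illustrative assumptions, and MultilevelProgressiveNet and total_loss refer to the sketches in this description (total_loss follows the loss formulas below).

```python
import torch
import torch.nn.functional as F

def make_lr(hr, scale=4, sigma=1.0, ksize=7):
    """Step S1 sketch: Gaussian-blur HR patches band by band, then decimate
    by the super-resolution factor. Assumes H and W divisible by scale and
    hr shaped (N, C, B, H, W)."""
    coords = torch.arange(ksize, dtype=torch.float32) - ksize // 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    kernel = g.unsqueeze(1) * g.unsqueeze(0)
    kernel = (kernel / kernel.sum()).view(1, 1, ksize, ksize)
    n, c, b, h, w = hr.shape
    x = hr.reshape(n * c * b, 1, h, w)
    x = F.conv2d(x, kernel, padding=ksize // 2)   # Gaussian blur per band
    x = x[:, :, ::scale, ::scale]                 # decimation
    return x.reshape(n, c, b, h // scale, w // scale)

# Step S3 sketch: Adam optimisation over the whole multilevel network.
model = MultilevelProgressiveNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
for hr in loader:                    # loader: iterable of HR patches (assumed)
    outputs = model(make_lr(hr))     # one reconstruction per stage
    loss = total_loss(outputs, hr)   # MSE + spectral gradient loss, see below
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```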
In this embodiment, the reconstructed image of each stage is supervised in the network model, with supervision and optimization performed at every stage, which makes the modeling of the multilevel network more reasonable. When the network is optimized with the MSE loss and the spectral gradient loss in step S3), the loss function is expressed as follows:
Loss_total = Σ_{n=1}^{N} [ L_MSE(I_n, I_HR) + L_G(I_n, I_HR) ]
L_MSE(I_n, I_HR) = (1/(HWB)) Σ_{i,j,k} [I_n(i,j,k) − I_HR(i,j,k)]², L_G(I_n, I_HR) = (1/(HWB)) Σ_{i,j,k} [I_G^n(i,j,k) − I_G^HR(i,j,k)]²
In the above formulas, Loss_total denotes the loss function; N is the number of feature extraction stage modules in the multilevel progressive network; L_MSE is the MSE loss and L_G the spectral gradient loss; I_HR denotes the reference image and I_n the reconstructed image output by the nth feature extraction stage module; I_HR(i,j,k) and I_n(i,j,k) are the pixels with coordinates (i,j,k) in I_HR and I_n, respectively; I_G^HR is the gradient of the reference image in the spectral dimension, and I_G^n the gradient of the nth-stage reconstructed image in the spectral dimension. The loss function Loss_total is the objective function optimized by the final neural network and evaluates the difference between the reference image and the reconstructed images; as shown in FIG. 6, Loss_total is the sum of the losses between the reconstructed image and the reference image at all stages. Because the reconstructed image output by each feature extraction stage module is supervised with the reference image, the gradient vanishing problem of a multilevel deep network is alleviated, the network parameters are optimized to the greatest extent, and the best reconstruction results are obtained.
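As a concrete reading of these formulas, the sketch below sums the MSE loss and a spectral gradient loss over the N stage outputs; taking L_G as the mean squared difference of adjacent-band gradients is an assumption consistent with, but not fixed by, the expressions above.

```python
import torch

def total_loss(outputs, hr):
    """Loss_total: for each stage reconstruction I_n, the MSE loss against
    the reference I_HR plus the spectral gradient loss, summed over all N
    stages. Layout (N, C, B, H, W); the spectral axis is dim 2."""
    def spectral_grad(t):
        return torch.abs(t[:, :, 1:] - t[:, :, :-1])
    loss = 0.0
    for rec in outputs:                                   # one term per stage
        loss = loss + torch.mean((rec - hr) ** 2)                                # L_MSE
        loss = loss + torch.mean((spectral_grad(rec) - spectral_grad(hr)) ** 2)  # L_G
    return loss
```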
This example is further illustrated below in connection with simulation experiments:
the hardware platform of the simulation experiment of the embodiment: the processor is an Inter (R) core (TM) i7-7800X CPU, the main frequency is 3.50GHz, and the display card is an NVIDIA GTX 1080Ti GPU.
The software platform of the simulation experiments of this embodiment: Python 3.8 and PyTorch 1.8.1.
The simulation experiments of this embodiment use images from the CAVE and Chikusei data sets to demonstrate the practicability and effectiveness of the proposed multilevel progressive network. The hyperspectral images in the CAVE data set have 31 spectral bands; those in the Chikusei data set have 128. The experiments include a simulated image super-resolution example and a real image super-resolution example: in the simulated example the low-resolution image is obtained by Gaussian blurring and downsampling the high-resolution image, while in the real image example the original hyperspectral image is fed into the model trained on the simulated data to obtain the super-resolved image. The method is compared with bicubic interpolation (Bicubic), the natural-image super-resolution methods SRCNN and VDSR, and the hyperspectral image super-resolution methods 3D-FCNN and MCNet. To demonstrate the superiority of this embodiment in both spectral and spatial quality, three common image evaluation indices are adopted: the mean peak signal-to-noise ratio (MPSNR) and mean structural similarity (MSSIM) evaluate spatial quality, while the spectral angle mapper (SAM) evaluates spectral quality. MPSNR evaluates pixel errors, MSSIM describes structural similarity, and SAM measures spectral similarity through the angle between the spectral vector of the reconstructed hyperspectral image and the ground-truth spectral vector. Higher MPSNR and MSSIM values mean better quality; lower SAM values mean less spectral distortion. The objective evaluation indices of the super-resolution results of the different methods are shown in Tables 1 and 2.
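For reference, minimal implementations of two of the three indices might read as follows (MSSIM is usually taken from an SSIM library and is omitted); the (N, C, B, H, W) layout and a [0, 1] dynamic range are assumptions.

```python
import torch

def mpsnr(rec, ref, max_val=1.0):
    """Mean PSNR over all spectral bands (higher is better)."""
    mse = torch.mean((rec - ref) ** 2, dim=(-2, -1))   # per-band MSE
    return (10.0 * torch.log10(max_val ** 2 / mse)).mean()

def sam(rec, ref, eps=1e-8):
    """Spectral Angle Mapper: mean angle (radians) between reconstructed
    and reference spectral vectors at each pixel (lower is better)."""
    dot = (rec * ref).sum(dim=2)                       # dim 2 is the spectral axis
    norm = rec.norm(dim=2) * ref.norm(dim=2) + eps
    return torch.acos((dot / norm).clamp(-1.0, 1.0)).mean()
```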
Table 1: evaluation indices of the average super-resolution results on the CAVE test set.
Table 2: evaluation indices of the average super-resolution results on the Chikusei test set.
Analysis of the evaluation indices of the reconstructed images at different sampling levels in Tables 1 and 2 shows that the proposed method leads on all indices on both the CAVE and Chikusei data sets.
Simulated hyperspectral image super-resolution analysis: in addition to the objective evaluation indices, this embodiment also evaluates the visual quality of the super-resolved output images. FIG. 7 shows the reference image and the output images of the respective methods for a low-resolution image from the Chikusei data set, where (a) is the reference image and (b) to (g) are the reconstructions by Bicubic, SRCNN, 3D-FCNN, VDSR, MCNet, and the method of this embodiment, respectively. Compared with the reference image, the reconstructions in (b) to (d) lose edge details severely, and those in (e) and (f) are blurred and lack good visual effect. The reconstruction in (g) shows the advantage of this embodiment in restoring edge detail; its overall visual effect is closest to the reference image.
Real hyperspectral image super-resolution analysis: FIG. 8 shows the super-resolution results on an original hyperspectral image from the Chikusei data set, where (a) is the input original image and (b) to (g) are the reconstructions by Bicubic, SRCNN, 3D-FCNN, VDSR, MCNet, and the method of this embodiment, respectively; the weight parameters are the same as in the simulation experiment. The reconstruction in (g) shows the advantage of this embodiment in boundary restoration, with the clearest reconstructed boundaries, whereas the other methods exhibit blurring, distortion, and correspondingly poor detail recovery.
In addition, this embodiment provides a multilevel hyperspectral image progressive super-resolution system based on non-local features, comprising a microprocessor and a memory connected with each other, wherein the microprocessor is programmed or configured to execute the steps of the multilevel hyperspectral image progressive super-resolution method based on non-local features.
Furthermore, this embodiment provides a computer-readable storage medium storing a computer program programmed or configured to execute the aforementioned multilevel hyperspectral image progressive super-resolution method based on non-local features.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.

The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions.

These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed via the processor of the computer or other programmable data processing apparatus create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions so specified.

These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed thereon to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions so specified.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may occur to those skilled in the art without departing from the principle of the invention, and are considered to be within the scope of the invention.

Claims (10)

1. A multilevel hyperspectral image progressive super-resolution method based on non-local features, characterized by comprising inputting a low-resolution image into a pre-trained multilevel progressive network to obtain a reconstructed high-resolution image, wherein the multilevel progressive network comprises a preliminary feature extraction module, a final feature extraction module, and a plurality of feature extraction stage modules based on local and non-local features arranged in sequence between them; in each feature extraction stage module, the input feature map, after feature extraction at that stage, is added to the feature map output by the preliminary convolution, a super-resolved reconstructed image is then obtained by transposed-convolution upsampling, and the result is output to the next feature extraction stage module through strided-convolution downsampling.
2. The multilevel hyperspectral image progressive super-resolution method based on non-local features according to claim 1, wherein the feature extraction stage module comprises a plurality of local and non-local feature extraction modules and two groups of asymmetric convolutions; the local and non-local feature extraction modules are densely connected, so that the output of each local and non-local feature extraction module is concatenated with the original input of the feature extraction stage module and the outputs of all preceding local and non-local feature extraction modules, and the concatenated feature map, after dimension reduction by a 1 × 1 × 1 convolution, serves as the input of the next local and non-local feature extraction module or of the two groups of asymmetric convolutions; the two groups of asymmetric convolutions are cascaded in sequence, and each group comprises a 1 × 1 × 3 convolution layer and a 3 × 3 × 1 convolution layer, each followed by a ReLU activation function layer.
3. The multilevel hyperspectral image progressive super-resolution method based on non-local features according to claim 2, wherein the local and non-local feature extraction module comprises a Ghost module, a non-local channel attention module, and another Ghost module connected in sequence, the Ghost modules are used to extract local features, the non-local channel attention module is used to extract non-local features, and the output of the output-side Ghost module is connected to the original input of the input-side Ghost module through a residual connection.
4. The multilevel hyperspectral image progressive super-resolution method based on non-local features according to claim 3, wherein the Ghost module comprises a 1 × 1 × 1 three-dimensional convolution and two 3 × 3 × 3 convolutions; the input feature maps are reduced in the channel dimension by the 1 × 1 × 1 three-dimensional convolution to obtain half as many feature maps as were originally input; the reduced feature maps undergo depth-wise convolution, in which they are grouped along the channel dimension and features are extracted from each group by a 3 × 3 × 3 convolution; and the resulting feature maps are concatenated with the reduced feature maps in the channel dimension to obtain a final output feature map equal to the input feature map in every dimension.
5. The multilevel hyperspectral image progressive super-resolution method based on non-local features according to claim 3, wherein the non-local channel attention module comprises:
a spatial-spectral gradient calculation module for computing a gradient feature map I_G along the width, height, and spectral dimensions;
three 1 × 1 × 1 three-dimensional convolutions, which apply linear operations to I_G to obtain a feature map Q, a feature map K, and a feature map V respectively, wherein the feature maps K and V are reshaped into BHW × C matrices and the feature map Q is reshaped and transposed into a C × BHW matrix, H being the height of the reconstructed high-resolution image, W its width, B its number of spectral bands, and C its number of channels;
an attention module, which applies a softmax operation to the feature map K along its columns to obtain a matrix KS, multiplies the feature map Q by the matrix KS to obtain a C × C matrix, applies a softmax operation to the C × C matrix along its rows to obtain an attention map, and matrix-multiplies the feature map V by the attention map to obtain the global edge features of the feature map; the global edge features are added to the gradient feature map I_G through a residual connection, further enhancing the edge information of the feature map.
6. The multilevel hyperspectral image progressive super-resolution method based on non-local features according to claim 1, wherein the preliminary feature extraction module and the final feature extraction module each comprise a 3 × 3 × 3 convolution layer followed by a ReLU activation function layer.
7. The multilevel hyperspectral image progressive super-resolution method based on non-local features according to any one of claims 1 to 6, characterized by further comprising the step of pre-training the multilevel progressive network:
S1) selecting part of the image regions from hyperspectral images, rotating them and cutting them into blocks to obtain an initial training set, each hyperspectral image block in the initial training set having size H × W × B; applying Gaussian blur and downsampling to the high-resolution image blocks in the initial training set to obtain low-resolution images at different downsampling levels, and forming training and test sets for different super-resolution factors from the initial training set and the downsampled low-resolution hyperspectral images;
s2) constructing the multistage progressive network;
S3) inputting the training set into the multilevel progressive network for training, optimizing the network with the MSE loss and the spectral gradient loss simultaneously, iteratively updating the weights of all layers in the multilevel progressive network with an Adam optimizer, and saving the weights of each layer of the trained multilevel progressive network;
S4) testing the multilevel progressive network with the test set, taking the low-resolution hyperspectral images in the test set as network inputs to obtain hyperspectral super-resolution images, and evaluating the visual quality and full-reference indices of the output hyperspectral super-resolution images; if the evaluation meets the requirements, the multilevel progressive network is judged to be fully trained; otherwise, execution jumps back to step S3) to continue training the multilevel progressive network.
8. The multilevel hyperspectral image progressive super-resolution method based on non-local features according to claim 7, wherein, when the network is optimized with the MSE loss and the spectral gradient loss in step S3), the loss function is expressed as:
Loss_total = Σ_{n=1}^{N} [ L_MSE(I_n, I_HR) + L_G(I_n, I_HR) ]
wherein:
L_MSE(I_n, I_HR) = (1/(HWB)) Σ_{i,j,k} [I_n(i,j,k) − I_HR(i,j,k)]², L_G(I_n, I_HR) = (1/(HWB)) Σ_{i,j,k} [I_G^n(i,j,k) − I_G^HR(i,j,k)]²
in the above formulas, Loss_total denotes the loss function; N is the number of feature extraction stage modules in the multilevel progressive network; L_MSE is the MSE loss and L_G the spectral gradient loss; I_HR denotes the reference image and I_n the reconstructed image output by the nth feature extraction stage module; I_HR(i,j,k) and I_n(i,j,k) are the pixels with coordinates (i,j,k) in I_HR and I_n, respectively; I_G^HR is the gradient of the reference image in the spectral dimension, and I_G^n the gradient of the nth-stage reconstructed image in the spectral dimension.
9. A multilevel hyperspectral image progressive super-resolution system based on non-local features, comprising a microprocessor and a memory connected with each other, wherein the microprocessor is programmed or configured to execute the steps of the multilevel hyperspectral image progressive super-resolution method based on non-local features according to any one of claims 1 to 8.
10. A computer-readable storage medium having stored thereon a computer program programmed or configured to perform the multilevel hyperspectral image progressive super-resolution method based on non-local features according to any one of claims 1 to 8.
CN202111136667.0A 2021-09-27 2021-09-27 Multilevel hyperspectral image progressive super-resolution method and system based on non-local features Active CN113888491B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111136667.0A CN113888491B (en) Multilevel hyperspectral image progressive super-resolution method and system based on non-local features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111136667.0A CN113888491B (en) Multilevel hyperspectral image progressive super-resolution method and system based on non-local features

Publications (2)

Publication Number Publication Date
CN113888491A true CN113888491A (en) 2022-01-04
CN113888491B CN113888491B (en) 2024-06-14

Family

ID=79007090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111136667.0A Active CN113888491B (en) Multilevel hyperspectral image progressive super-resolution method and system based on non-local features

Country Status (1)

Country Link
CN (1) CN113888491B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114387470A (en) * 2022-01-11 2022-04-22 中船(浙江)海洋科技有限公司 Ship classification method and system based on lightweight convolutional neural network
CN114612479A (en) * 2022-02-09 2022-06-10 苏州大学 Medical image segmentation method based on global and local feature reconstruction network
CN114757831A (en) * 2022-06-13 2022-07-15 湖南大学 High-resolution video hyperspectral imaging method, device and medium based on intelligent space-spectrum fusion

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903255A (en) * 2019-03-04 2019-06-18 Beijing University of Technology Hyperspectral image super-resolution method based on 3D convolutional neural networks
CN111192200A (en) * 2020-01-02 2020-05-22 Nanjing University of Posts and Telecommunications Image super-resolution reconstruction method based on a fused attention mechanism residual network
CN111429349A (en) * 2020-03-23 2020-07-17 Xidian University Hyperspectral image super-resolution method based on a spectrum-constrained adversarial network
CN113139902A (en) * 2021-04-23 2021-07-20 Shenzhen University Hyperspectral image super-resolution reconstruction method and device and electronic equipment
CN113222823A (en) * 2021-06-02 2021-08-06 State Grid Hunan Electric Power Co., Ltd. Hyperspectral image super-resolution method based on mixed attention network fusion
CN113222822A (en) * 2021-06-02 2021-08-06 Xidian University Hyperspectral image super-resolution reconstruction method based on multi-scale transformation
CN113362223A (en) * 2021-05-25 2021-09-07 Chongqing University of Posts and Telecommunications Image super-resolution reconstruction method based on an attention mechanism and a two-channel network
CN113409191A (en) * 2021-06-02 2021-09-17 Guangdong University of Technology Lightweight image super-resolution method and system based on an attention feedback mechanism

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903255A (en) * 2019-03-04 2019-06-18 Beijing University of Technology Hyperspectral image super-resolution method based on 3D convolutional neural networks
CN111192200A (en) * 2020-01-02 2020-05-22 Nanjing University of Posts and Telecommunications Image super-resolution reconstruction method based on a fused attention mechanism residual network
CN111429349A (en) * 2020-03-23 2020-07-17 Xidian University Hyperspectral image super-resolution method based on a spectrum-constrained adversarial network
CN113139902A (en) * 2021-04-23 2021-07-20 Shenzhen University Hyperspectral image super-resolution reconstruction method and device and electronic equipment
CN113362223A (en) * 2021-05-25 2021-09-07 Chongqing University of Posts and Telecommunications Image super-resolution reconstruction method based on an attention mechanism and a two-channel network
CN113222823A (en) * 2021-06-02 2021-08-06 State Grid Hunan Electric Power Co., Ltd. Hyperspectral image super-resolution method based on mixed attention network fusion
CN113222822A (en) * 2021-06-02 2021-08-06 Xidian University Hyperspectral image super-resolution reconstruction method based on multi-scale transformation
CN113409191A (en) * 2021-06-02 2021-09-17 Guangdong University of Technology Lightweight image super-resolution method and system based on an attention feedback mechanism

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAI Sen; REN Chao; XIONG Shuhua; ZHAN Wenshu: "Single image super-resolution reconstruction based on deep-learning local and non-local information", Modern Computer, no. 33, 25 November 2019 (2019-11-25) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114387470A (en) * 2022-01-11 2022-04-22 中船(浙江)海洋科技有限公司 Ship classification method and system based on lightweight convolutional neural network
CN114387470B (en) * 2022-01-11 2024-05-31 中船(浙江)海洋科技有限公司 Ship classification method and system based on lightweight convolutional neural network
CN114612479A (en) * 2022-02-09 2022-06-10 苏州大学 Medical image segmentation method based on global and local feature reconstruction network
CN114612479B (en) * 2022-02-09 2023-03-24 苏州大学 Medical image segmentation method and device based on global and local feature reconstruction network
CN114757831A (en) * 2022-06-13 2022-07-15 湖南大学 High-resolution video hyperspectral imaging method, device and medium based on intelligent space-spectrum fusion
CN114757831B (en) * 2022-06-13 2022-09-06 湖南大学 High-resolution video hyperspectral imaging method, device and medium based on intelligent space-spectrum fusion

Also Published As

Publication number Publication date
CN113888491B (en) 2024-06-14

Similar Documents

Publication Publication Date Title
CN110119780B (en) Hyperspectral image super-resolution reconstruction method based on a generative adversarial network
Fan et al. Balanced two-stage residual networks for image super-resolution
CN113888491B (en) Multistage hyperspectral image progressive superdivision method and system based on non-local features
CN111429349B (en) Hyperspectral image super-resolution method based on a spectrum-constrained adversarial network
Behjati et al. Overnet: Lightweight multi-scale super-resolution with overscaling network
CN110136062B (en) Super-resolution reconstruction method combining semantic segmentation
CN105488776B (en) Super-resolution image reconstruction method and device
CN110929736A (en) Multi-feature cascade RGB-D significance target detection method
CN111369487A (en) Hyperspectral and multispectral image fusion method, system and medium
Wu et al. Remote sensing image super-resolution via saliency-guided feedback GANs
US20230153946A1 (en) System and Method for Image Super-Resolution
CN110111276B (en) Hyperspectral remote sensing image super-resolution method based on space-spectrum information deep utilization
CN113744136A (en) Image super-resolution reconstruction method and system based on channel constraint multi-feature fusion
Lv et al. A novel image super-resolution algorithm based on multi-scale dense recursive fusion network
Hu et al. Hyperspectral image super resolution based on multiscale feature fusion and aggregation network with 3-D convolution
CN115147426B (en) Model training and image segmentation method and system based on semi-supervised learning
CN115331104A (en) Crop planting information extraction method based on convolutional neural network
Qin et al. Lightweight single image super-resolution with attentive residual refinement network
Xia et al. Meta-learning-based degradation representation for blind super-resolution
CN116797456A (en) Image super-resolution reconstruction method, system, device and storage medium
Yang et al. MRDN: A lightweight Multi-stage residual distillation network for image Super-Resolution
Li Image super-resolution using attention based densenet with residual deconvolution
Yang et al. Multilevel and multiscale network for single-image super-resolution
CN115937693A (en) Road identification method and system based on remote sensing image
Wang et al. Underwater image super-resolution using multi-stage information distillation networks

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant