CN114266957B - Hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation - Google Patents


Info

Publication number: CN114266957B (granted); application number: CN202111342185.0A
Authority: CN (China)
Prior art keywords: hyperspectral, data, resolution, super, image
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN114266957A
Inventors: 王素玉, 车其晓, 张磊
Assignee (original and current): Beijing University of Technology
Application filed by Beijing University of Technology; priority to CN202111342185.0A
Landscapes: Image Processing (AREA)
Abstract

The invention discloses a hyperspectral image super-resolution restoration method based on multi-degradation-mode data augmentation, belonging to the technical field of image processing. The method comprises: acquiring hyperspectral images and preprocessing the resulting hyperspectral dataset, cropping the images according to a fixed rule to generate non-overlapping sub-images and overlapping patches, which serve as test data and training data respectively; augmenting the test data and the training data and adding Gaussian noise to obtain noisy low-resolution test and training data; constructing a hyperspectral image super-resolution model; training the model with the training data to obtain a trained hyperspectral image super-resolution model; and inputting the test data into the trained model to obtain super-resolved hyperspectral images. By augmenting the data, the invention improves the performance of the hyperspectral image super-resolution model.

Description

Hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a hyperspectral image super-resolution restoration method based on multi-degradation data augmentation.
Background
A hyperspectral remote sensing image is a high-dimensional image; it can identify weak information and supports quantitative detection, giving it high application value in both military and civilian fields. Although traditional fusion-based hyperspectral super-resolution methods achieve a certain effect, they require that the input low-resolution hyperspectral image and a high-resolution auxiliary image be well registered, and in practical application scenarios a well-registered auxiliary image is very difficult to acquire. To avoid the need for auxiliary information, methods based on sparse dictionary learning or low-rank approximation have been adopted; however, such hand-crafted prior information reflects only part of the characteristics of the data. With the development of deep learning in recent years, convolutional neural networks (CNNs) have shown strong advantages over conventional methods in modeling the mapping between low-resolution and high-resolution images.
At present, CNN-based hyperspectral super-resolution restoration algorithms have progressed greatly: for example, the GDRNN method, which divides adjacent bands into several groups and applies intra-group and inter-group fusion mechanisms; the FastHyDe method, which uses sparse representations of images together with their low-rank and self-similarity properties; and the Deep Hyperspectral Prior method, which extends the deep image prior to the hyperspectral field with a three-dimensional convolutional network. Although these deep learning methods improve greatly on traditional ones, most suffer from few training samples, a single sampling method, and weak spatial and spectral correlation. Hyperspectral images are easily affected by various degradation factors during acquisition, the hyperspectral imager, the imaging environment and the imaged object being key factors affecting imaging quality, and different degradation factors change the acquired information; as a result, most current methods show poor robustness to data under different degradations.
Disclosure of Invention
In order to solve the above problems, the present invention provides a hyperspectral image super-resolution restoration method based on multi-degradation data augmentation, which includes:
Acquiring hyperspectral images to obtain a hyperspectral dataset;
Preprocessing the hyperspectral dataset, cropping its images according to a fixed rule to generate non-overlapping sub-images and overlapping patches, which serve as test data and training data respectively;
Augmenting the test data and the training data by multiple degradation modes to obtain low-resolution test data and training data;
Constructing a hyperspectral image super-resolution model;
Training the hyperspectral image super-resolution model with the training data to obtain a trained hyperspectral image super-resolution model;
Inputting the test data into the trained hyperspectral image super-resolution model to obtain super-resolved hyperspectral images;
wherein constructing the hyperspectral image super-resolution model comprises:
Inputting the training data into a branch-global spatial-spectral prior network to obtain super-resolved hyperspectral images, and downsampling the super-resolved images with a Lanczos resampling filter to obtain Lanczos-downsampled images;
Forming a loss function from the loss between the hyperspectral images in the hyperspectral dataset and the super-resolved hyperspectral images, plus the loss between the Lanczos-downsampled images and the augmented low-resolution images in the test and training data;
Determining the gradients of the convolution layers in the branch-global spatial-spectral prior network from the loss function by gradient descent;
Iteratively training with an Adam optimizer according to the gradients of the convolution layers in the branch-global spatial-spectral prior network until the PSNR, SSIM and SAM indices of the network no longer improve, at which point training is complete.
Preferably, the branch-global spatial-spectral prior network comprises a plurality of branch networks and a global network. Each branch network comprises, in order, a first 3×3 convolution layer, a first spatial-spectral deep feature extraction module, a first upsampling module and a first 1×1 convolution layer; the global network comprises a second 3×3 convolution layer, a second spatial-spectral deep feature extraction module, a second upsampling module and a second 1×1 convolution layer;
Both the first and second spatial-spectral deep feature extraction modules comprise a spatial residual module and a spectral attention residual module, and the images processed by the parallel branch networks are fed into the global network to obtain the final super-resolution hyperspectral image.
Preferably, augmenting the test data and the training data by degradation to obtain low-resolution test data and training data comprises:
Downsampling the test data and the training data with bicubic, nearest-neighbor and bilinear interpolation respectively to obtain low-resolution images;
Then adding Gaussian noise to the low-resolution images obtained by bicubic interpolation.
Preferably, the loss function is:
L_total = L(δ, γ) + L(μ, ε);
wherein L(δ, γ) is the difference between the hyperspectral images in the hyperspectral dataset and the corresponding super-resolved images output by the model; L(μ, ε) is the difference between the low-resolution images in the test and training data and the corresponding Lanczos-downsampled images; δ and γ denote the hyperspectral images in the dataset and the super-resolved hyperspectral images respectively, μ denotes the low-resolution images in the test and training data, and ε denotes γ after Lanczos downsampling.
Preferably, L(x, y) is given by:
L(x, y) = L_1 + α·L_SSTV;
wherein α balances the two loss terms and is set to 0.001; x denotes a picture in the hyperspectral dataset, the test data or the training data; y denotes the corresponding super-resolved hyperspectral image or its Lanczos-downsampled low-resolution version; L_1 is the mean absolute error; and L_SSTV is the spatial-spectral total variation.
Preferably, the spatial-spectral total variation L_SSTV and the mean absolute error L_1 are, respectively:
L_SSTV = (1/N) Σ_{i=1}^{N} ( ‖Δ_h y_i‖_1 + ‖Δ_w y_i‖_1 + ‖Δ_c y_i‖_1 );
L_1 = (1/N) Σ_{i=1}^{N} ‖x_i − y_i‖_1;
wherein x denotes a picture in the hyperspectral dataset, the test data or the training data; y denotes the corresponding super-resolved image or its Lanczos-downsampled version; Δ_h is the horizontal gradient of y; Δ_w is the vertical gradient of y; Δ_c is the spectral gradient of y; and N is the number of images.
Compared with the prior art, the invention has the beneficial effects that:
By augmenting the data under multiple degradation modes, the invention simulates, as far as possible, the low-quality images produced by imaging equipment, weather and other causes. The loss function combines the loss between the hyperspectral images in the dataset and the super-resolved images with the loss between the Lanczos-downsampled images and the augmented low-resolution test and training images, thereby improving the performance of the hyperspectral image super-resolution model.
Drawings
FIG. 1 is a schematic flow chart of the hyperspectral image super-resolution restoration method based on multi-degradation-mode data augmentation;
FIG. 2 is a schematic diagram of the branch-global spatial-spectral prior network of the present invention;
FIG. 3 compares the restoration effect of the present invention with the prior art under one degradation mode;
FIG. 4 compares the restoration effect of the present invention with the prior art under another degradation mode;
FIG. 5 compares the restoration effect of the present invention with the prior art on the Chikusei dataset;
FIG. 6 compares the restoration effect of the present invention with the prior art on the Pavia dataset.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to FIG. 1, the hyperspectral image super-resolution restoration method based on multi-degradation-mode data augmentation comprises:
Acquiring hyperspectral images to obtain a hyperspectral dataset;
Preprocessing the hyperspectral dataset, cropping its images according to a fixed rule to generate non-overlapping sub-images and overlapping patches, which serve as test data and training data respectively;
Specifically, the border of each image in the hyperspectral dataset is cropped away, leaving the central area; part of the central area is then cut into several non-overlapping sub-images used as test data, and overlapping patches are extracted from the remaining part as training data;
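The cropping rule above can be sketched as follows; the function name and all size parameters here are illustrative placeholders, not constants taken from the patent:

```python
import numpy as np

def split_dataset(img, test_rows, sub, patch, stride):
    """Split a hyperspectral cube (H, W, C) into non-overlapping test
    sub-images taken from the top rows and overlapping training patches
    taken from the remainder (stride < patch gives the overlap)."""
    test_region, train_region = img[:test_rows], img[test_rows:]
    # Non-overlapping sub-images for testing
    tests = [test_region[:, x:x + sub]
             for x in range(0, test_region.shape[1] - sub + 1, sub)]
    # Overlapping patches for training
    trains = [train_region[y:y + patch, x:x + patch]
              for y in range(0, train_region.shape[0] - patch + 1, stride)
              for x in range(0, train_region.shape[1] - patch + 1, stride)]
    return tests, trains
```

For example, a 256×256×8 cube with a 64-row test strip, 64-pixel sub-images and 64-pixel patches at stride 32 yields four test sub-images and a grid of overlapping training patches.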
Augmenting the test data and the training data by degradation to obtain low-resolution test data and training data;
Specifically, the test data and the training data are downsampled with bicubic, nearest-neighbor and bilinear interpolation respectively to obtain low-resolution images;
Gaussian noise is then added to the low-resolution images obtained by bicubic interpolation.
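A minimal sketch of the multi-degradation downsampling, using scipy's spline interpolation orders 0/1/3 as stand-ins for the nearest-neighbor/bilinear/bicubic interpolation named in the text (an approximation: order 3 is a cubic B-spline, not exactly bicubic), with optional Gaussian contamination of the cubic branch:

```python
import numpy as np
from scipy.ndimage import zoom

def degrade(hr, scale=4, noise_var=None, seed=0):
    """Produce low-resolution copies of an HR cube (H, W, C) under several
    degradation modes; noise_var, if given, is the Gaussian noise variance
    added to the cubic-interpolated copy."""
    f = (1.0 / scale, 1.0 / scale, 1.0)   # shrink spatial axes, keep bands
    lows = {name: zoom(hr, f, order=o)
            for name, o in [("nearest", 0), ("bilinear", 1), ("cubic", 3)]}
    if noise_var is not None:
        rng = np.random.default_rng(seed)
        noisy = lows["cubic"] + rng.normal(0.0, np.sqrt(noise_var),
                                           lows["cubic"].shape)
        lows["cubic_noisy"] = noisy.astype(hr.dtype)
    return lows
```

Each high-resolution sample thus yields several low-resolution variants, which is the source of the data enlargement described in this embodiment.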
Constructing a hyperspectral image super-resolution model;
Training the hyperspectral image super-resolution model with the training data to obtain a trained hyperspectral image super-resolution model;
Inputting the test data into the trained model to obtain super-resolved hyperspectral images;
Constructing the hyperspectral image super-resolution model comprises the following steps:
Inputting the training data into a branch-global spatial-spectral prior network to obtain super-resolved hyperspectral images, and downsampling them with a Lanczos resampling filter to obtain Lanczos-downsampled images;
Forming a loss function from the loss between the hyperspectral images in the dataset and the super-resolved images, plus the loss between the Lanczos-downsampled images and the augmented low-resolution images in the test and training data;
Determining the gradients of the convolution layers in the branch-global spatial-spectral prior network from the loss function by gradient descent;
Iteratively training with an Adam optimizer according to those gradients until the PSNR, SSIM and SAM indices of the branch-global spatial-spectral prior network no longer improve, at which point training is complete.
Referring to FIG. 2, the branch-global spatial-spectral prior network comprises a plurality of parallel branch networks and a global network. Each branch network comprises, in order, a first 3×3 convolution layer, a first spatial-spectral deep feature extraction module, a first upsampling module and a first 1×1 convolution layer; the global network comprises a second 3×3 convolution layer, a second spatial-spectral deep feature extraction module, a second upsampling module and a second 1×1 convolution layer;
Both the first and second spatial-spectral deep feature extraction modules comprise a spatial residual module and a spectral attention residual module, and the images processed by the parallel branch networks are fed into the global network to obtain the final super-resolution hyperspectral image.
Still further, the loss function is:
L_total = L(δ, γ) + L(μ, ε);
wherein L(δ, γ) is the difference between the hyperspectral images in the hyperspectral dataset and the corresponding super-resolved images output by the model; L(μ, ε) is the difference between the low-resolution images in the test and training data and the corresponding Lanczos-downsampled images; δ and γ denote the hyperspectral images in the dataset and the super-resolved images respectively, μ denotes the low-resolution images in the test and training data, and ε denotes γ after Lanczos downsampling.
Here, L(x, y) is given by:
L(x, y) = L_1 + α·L_SSTV;
wherein α balances the two loss terms and is set to 0.001; x denotes a picture in the hyperspectral dataset, the test data or the training data; y denotes the corresponding super-resolved hyperspectral image or its Lanczos-downsampled low-resolution version; L_1 is the mean absolute error; and L_SSTV is the spatial-spectral total variation.
Further, the spatial-spectral total variation L_SSTV and the mean absolute error L_1 are, respectively:
L_SSTV = (1/N) Σ_{i=1}^{N} ( ‖Δ_h y_i‖_1 + ‖Δ_w y_i‖_1 + ‖Δ_c y_i‖_1 );
L_1 = (1/N) Σ_{i=1}^{N} ‖x_i − y_i‖_1;
wherein x denotes a picture in the hyperspectral dataset, the test data or the training data; y denotes the corresponding super-resolved image or its Lanczos-downsampled version; Δ_h is the horizontal gradient of y; Δ_w is the vertical gradient of y; Δ_c is the spectral gradient of y; and N is the number of images.
Referring to FIG. 3, super-resolution restoration of a noisy low-resolution image is performed in three ways: traditional interpolation, a model trained without data augmentation, and a model trained with data augmentation. Compared with the first two, the model trained on augmented data effectively removes the noise and clearly improves the spatial resolution.
Referring to FIG. 4, for a low-resolution image obtained by bilinear interpolation, the model trained on augmented data recovers sharper boundary lines than the other two approaches; by contrast, the image restored by the model trained on the unaugmented dataset shows a hazy effect (more obvious in the color image).
In this embodiment, two public datasets, Chikusei and Pavia, are used.
The data preprocessing comprises the following steps:
The Chikusei dataset has 128 spectral bands from 363 nm to 1018 nm, with 2517×2335 pixels per band. Because the edge regions lack information, the image is cropped to its central area, giving a 2304×2048×128 sub-image. Training and test data are then obtained by further division, as follows: the top region of the sub-image is extracted to form the test data, comprising four non-overlapping hyperspectral images of 512×512×128 pixels; overlapping patches are extracted from the remaining area as training data (10% of which is held out as the validation set). When the upsampling factor is 4, the extracted overlapping patches are 64×64 pixels with a 32-pixel overlap allowed; when the upsampling factor is 8, the patches are 128×128 pixels with a 64-pixel overlap allowed.
The Pavia dataset has 102 spectral bands with 1096×1096 pixels per band, but some areas contain no useful information; after removing them, 1096×715 pixels remain per band, i.e. the central area is cropped to obtain a 1096×715×102 sub-image. The image is then divided into training and test data as follows: the left part is extracted to form the test data, comprising four non-overlapping hyperspectral images of 223×223 pixels; overlapping patches are extracted from the remaining region as training data (10% held out as the validation set), with patch sizes and pixel overlaps similar to Chikusei.
Data amplification includes:
Hyperspectral images are affected by various degradation factors during capture; the hyperspectral imager, the imaging environment and the characteristics of the target are non-negligible factors that may reduce spatial image quality. Moreover, because hyperspectral images are acquired by precision instruments, the number of training samples is limited compared with ordinary images, and a limited number of samples cannot train a model with good generalization ability.
To address the problems of a single type of training sample and limited data volume, the invention adopts several downsampling methods plus Gaussian noise to simulate the blur of degraded hyperspectral images under real conditions. First, training and test data are obtained from the Chikusei dataset by the preprocessing above: the training data is 1792×2048×128 pixels and the test data is four images of 512×512×128 pixels. Overlapping patches are then extracted from the training data, each 64×64×128 when the sampling factor is 4 and 128×128×128 when the sampling factor is 8.
All the training and test data obtained above are downsampled with bicubic, bilinear and nearest-neighbor interpolation, yielding low-resolution training patches of 16×16×128 pixels and four low-resolution test images of 128×128×128 pixels at the 4× factor. Then, among the low-resolution images produced by bicubic interpolation, one third of the data is selected for noise contamination, and that third is further split evenly in two to simulate different degrees of noise pollution: at the 4× sampling factor the two parts receive zero-mean Gaussian noise with variances 0.001 and 0.002 respectively, and at the 8× factor they receive zero-mean noise with variances 0.0001 and 0.0002 respectively.
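The tiered noise scheme for the bicubic branch can be sketched as below; the variance pair shown is the 4× setting from the text, and the partitioning by list position is our simplifying assumption:

```python
import numpy as np

def add_tiered_noise(lr_patches, variances=(0.001, 0.002), seed=0):
    """Contaminate one third of the bicubic LR patches with zero-mean
    Gaussian noise, splitting that third evenly between the two variance
    levels (0.0001/0.0002 would be used at the 8x factor)."""
    rng = np.random.default_rng(seed)
    out = [p.copy() for p in lr_patches]
    third = len(out) // 3          # fraction of patches to contaminate
    half = third // 2
    for i in range(third):
        var = variances[0] if i < half else variances[1]
        out[i] = out[i] + rng.normal(0.0, np.sqrt(var), out[i].shape)
    return out
```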
Compared with the original data, the data volume is multiplied several-fold; feeding the augmented data into the neural network yields a model with stronger generalization ability. In particular, on noisy data the PSNR rises from 28.16 to 38.57 and the SAM falls from 16.08 to 3.38.
The construction of the hyperspectral image super-resolution model comprises the following steps:
The branch-global spatial-spectral prior network comprises a plurality of parallel branch networks and a global network. Each branch network comprises, in order, a first 3×3 convolution layer, a first spatial-spectral deep feature extraction module, a first upsampling module and a first 1×1 convolution layer; the global network comprises a second 3×3 convolution layer, a second spatial-spectral deep feature extraction module, a second upsampling module and a second 1×1 convolution layer;
Both the first and second spatial-spectral deep feature extraction modules comprise a spatial residual module and a spectral attention residual module, and the images processed by the parallel branch networks are fed into the global network to obtain the final super-resolution hyperspectral image.
Specifically, the input low-resolution image is first divided into several overlapping groups, and each group is fed into a branch network for spatial-spectral feature extraction and magnified with a relatively small upsampling factor; the outputs of all branches are then concatenated and fed into the global network for global spatial feature extraction and upsampling. So that the spatial-spectral deep feature extraction modules of every branch network and the global network can share the same structure, a reconstruction layer is added after each branch's upsampling module, and a global residual structure is adopted to deepen the network.
Because a hyperspectral image has many bands, the low-resolution hyperspectral image (LR) is first divided into S overlapping groups, where the number of spectral bands p in each group is set to 8 and the overlap o between adjacent groups is set to 2. A "rollback" splitting strategy is adopted to handle the "edge" bands effectively: when the last group has fewer than p bands, the last p bands are selected as the last group.
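The overlapping-group split with the "rollback" rule can be sketched as a small helper (our own reading of the strategy; the function name is illustrative):

```python
def split_bands(num_bands, p=8, o=2):
    """Split band indices 0..num_bands-1 into overlapping groups of width p
    with overlap o between neighbors; when the tail would be shorter than p,
    'roll back' and take the last p bands as the final group."""
    step = p - o
    groups, start = [], 0
    while start + p < num_bands:
        groups.append(list(range(start, start + p)))
        start += step
    groups.append(list(range(num_bands - p, num_bands)))  # rollback group
    return groups
```

With the settings from the text (128 bands, p = 8, o = 2) this produces 21 groups, each of exactly 8 bands, with the last group covering bands 120-127.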
Once the overlapping groups are formed, the low-resolution groups are fed into their corresponding branch networks. For each group X_s, a convolution layer is applied to extract shallow features F_0^s, which are then passed to the spatial-spectral feature extraction network (SSPN) for deep feature extraction. The SSPN introduces a long skip connection to extract high-frequency information more effectively and concatenates 3 spatial-spectral blocks (SSBs); each SSB comprises a spatial residual module and a spectral attention residual module, the former extracting spatial information with 3×3 convolutions and the latter capturing spectral correlation with 1×1 convolutions. The SSB computation can be written as
F_r = H_r(F_{r−1}), r = 1, …, R;
where R is the total number of SSB modules, H_r is the r-th SSB function, F_{r−1} is the input of the r-th SSB, and F_R is the finally extracted feature.
An upsampling module is inserted in the middle of the network, that is, before the branch SSPN output enters the global SSPN, to obtain a magnified feature map; a convolution layer (the reconstruction layer) then reduces the number of output feature channels to the number of spectral bands of the input group. Each branch can thus be regarded as a super-resolution reconstruction sub-network.
Finally, the global network concatenates the features extracted by the branch networks, extracts shallow features with a convolution layer similar to that of the local branches, feeds them into the global SSPN, and then generates the final super-resolution hyperspectral image through an upsampling module and a reconstruction layer in turn.
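A minimal PyTorch sketch of one SSB and an SSPN with a long skip connection, following the description above. The internal design of the spectral attention (a squeeze-and-excitation-style channel attention) and all widths are our assumptions, not details quoted from the patent:

```python
import torch
import torch.nn as nn

class SSB(nn.Module):
    """Spatial-spectral block: a 3x3 spatial residual unit followed by a
    1x1 spectral-attention residual unit (attention design assumed)."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.spectral = nn.Sequential(
            nn.Conv2d(ch, ch, 1), nn.ReLU(inplace=True), nn.Conv2d(ch, ch, 1))
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())

    def forward(self, x):
        x = x + self.spatial(x)       # spatial residual module
        y = self.spectral(x)
        return x + y * self.att(y)    # spectral attention residual module

class SSPN(nn.Module):
    """R concatenated SSBs with a long skip connection: F_R = F_0 + H(F_0)."""
    def __init__(self, ch, r=3):
        super().__init__()
        self.blocks = nn.Sequential(*[SSB(ch) for _ in range(r)])

    def forward(self, f0):
        return f0 + self.blocks(f0)
```

In the full network, each band group would pass through a 3×3 convolution, such an SSPN, an upsampling module and a reconstruction layer before the global stage.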
Furthermore, to preserve the relevance and reliability of spectral features, the invention uses spatial-spectral total variation (SSTV), which extends the traditional total variation model to account for the correlation of space and spectrum. Because the L1 loss maintains good convergence during training and can balance the reconstruction accuracy of the network, the target loss function is a weighted sum of the SSTV loss and the L1 loss. Meanwhile, to fully exploit the information relationship between low-resolution images, a Lanczos resampling filter is introduced to downsample the super-resolved images; the loss between these and the original low-resolution images is computed and finally added to the loss against the high-resolution images.
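The patent names a Lanczos resampling filter without giving its parameters; below is a self-contained 1-D sketch of Lanczos downsampling (our own implementation, assuming the common a = 3 window and an area-centered sample alignment), which would be applied along each spatial axis of the super-resolved cube:

```python
import numpy as np

def lanczos_kernel(x, a=3):
    """Lanczos-a window: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)
    out[np.abs(x) >= a] = 0.0
    return out

def lanczos_downsample_1d(sig, scale, a=3):
    """Resample a 1-D signal by an integer factor with a Lanczos-a filter,
    widening the kernel by `scale` when shrinking and normalizing each
    (possibly edge-truncated) window so weights sum to 1."""
    n_out = sig.shape[0] // scale
    centers = (np.arange(n_out) + 0.5) * scale - 0.5   # output positions
    idx = np.arange(sig.shape[0])
    out = np.empty(n_out)
    for i, c in enumerate(centers):
        w = lanczos_kernel((idx - c) / scale, a)
        out[i] = np.dot(w, sig) / w.sum()
    return out
```

Normalizing each window guarantees that constant signals are preserved exactly, a basic sanity property of any resampling filter.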
The loss function is:
L_total = L(δ, γ) + L(μ, ε);
wherein L(δ, γ) is the difference between the hyperspectral images in the hyperspectral dataset and the corresponding super-resolved images output by the model; L(μ, ε) is the difference between the low-resolution images in the test and training data and the corresponding Lanczos-downsampled images; δ and γ denote the hyperspectral images in the dataset and the super-resolved images respectively, μ denotes the low-resolution images in the test and training data, and ε denotes γ after Lanczos downsampling.
Here, L(x, y) is given by:
L(x, y) = L_1 + α·L_SSTV;
wherein α balances the two loss terms and is set to 0.001; x denotes a picture in the hyperspectral dataset, the test data or the training data; y denotes the corresponding super-resolved hyperspectral image or its Lanczos-downsampled low-resolution version; L_1 is the mean absolute error; and L_SSTV is the spatial-spectral total variation.
Further, the spatial-spectral total variation L_SSTV and the mean absolute error L_1 are, respectively:
L_SSTV = (1/N) Σ_{i=1}^{N} ( ‖Δ_h y_i‖_1 + ‖Δ_w y_i‖_1 + ‖Δ_c y_i‖_1 );
L_1 = (1/N) Σ_{i=1}^{N} ‖x_i − y_i‖_1;
wherein x denotes a picture in the hyperspectral dataset, the test data or the training data; y denotes the corresponding super-resolved image or its Lanczos-downsampled version; Δ_h is the horizontal gradient of y; Δ_w is the vertical gradient of y; Δ_c is the spectral gradient of y; and N is the number of images.
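The L1 and SSTV terms above can be sketched numerically as follows, with finite differences standing in for the gradients Δ_h, Δ_w and Δ_c (mean-reduced norms, a common normalization assumption):

```python
import numpy as np

def l1_loss(x, y):
    """Mean absolute error over a batch of cubes shaped (N, H, W, C)."""
    return np.mean(np.abs(x - y))

def sstv_loss(y):
    """Spatial-spectral total variation: mean L1 norm of the horizontal,
    vertical and spectral finite differences of the prediction y."""
    dh = np.abs(np.diff(y, axis=1)).mean()   # horizontal gradient
    dw = np.abs(np.diff(y, axis=2)).mean()   # vertical gradient
    dc = np.abs(np.diff(y, axis=3)).mean()   # spectral gradient
    return dh + dw + dc

def total_loss(x, y, alpha=1e-3):
    """L(x, y) = L_1 + alpha * L_SSTV, with alpha = 0.001 as in the text."""
    return l1_loss(x, y) + alpha * sstv_loss(y)
```

Note that a spatially and spectrally constant prediction incurs zero SSTV penalty, so the term only discourages high-frequency artifacts, not smooth reconstructions.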
During training, an Adam optimizer is used with an initial learning rate of 0.0001, decayed tenfold at 30 epochs; performance stabilizes after about 40 epochs of iteration. The training batch size is 32 when the sampling factor is 4 and 16 when the sampling factor is 8.
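The optimizer schedule just described maps directly onto PyTorch primitives; the Conv2d below is a hypothetical stand-in for the branch-global network, which is not reproduced here, while the learning rate, decay milestone and epoch count follow the text:

```python
import torch

model = torch.nn.Conv2d(8, 8, 3, padding=1)   # placeholder for the real network
opt = torch.optim.Adam(model.parameters(), lr=1e-4)              # initial LR 0.0001
sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[30], gamma=0.1)
for epoch in range(40):                       # ~40 epochs to stabilize
    # ... forward/backward over batches of 32 (x4 factor) or 16 (x8) ...
    opt.step()                                # placeholder parameter update
    sched.step()                              # tenfold decay once epoch 30 is reached
```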
Model prediction
After training is complete, the pretrained model parameters are loaded and the preprocessed test data is fed in, yielding the super-resolution metric values and prediction data in npy format. To visualize the data, the npy file is converted to mat format, and finally MATLAB is used to select any 3 bands as the RGB three-channel image and output a visualized image.
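The band-to-RGB step can equally be sketched in numpy before the mat conversion; the band indices below are arbitrary placeholders, since the text says any 3 bands may be chosen:

```python
import numpy as np

def bands_to_rgb(cube, bands=(60, 30, 10)):
    """Map three chosen bands of an (H, W, C) prediction to an 8-bit RGB
    image using a per-image min-max stretch."""
    rgb = cube[:, :, list(bands)].astype(np.float64)
    lo, hi = rgb.min(), rgb.max()
    rgb = (rgb - lo) / (hi - lo + 1e-12)   # normalize to [0, 1]
    return (rgb * 255).round().astype(np.uint8)
```

The resulting array can then be written out directly, or saved to mat format with `scipy.io.savemat` for inspection in MATLAB.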
The model is evaluated with peak signal-to-noise ratio (PSNR), structural similarity (SSIM), root mean square error (RMSE), cross correlation (CC), spectral angle mapping (SAM) and the relative dimensionless global error (ERGAS). PSNR and SSIM mainly evaluate spatial quality, with larger values indicating better image quality, and a smaller RMSE likewise indicates better reconstruction; CC, SAM and ERGAS mainly evaluate spectral quality, with a larger CC and smaller SAM and ERGAS indicating smaller spectral error. The prediction performance of the algorithm was evaluated on the Chikusei and Pavia datasets, and the method of the invention achieves good reconstruction in both the spatial and spectral dimensions. The experimental results are shown in Tables 1, 2, 3 and 4.
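Two of these metrics, PSNR and SAM, have compact standard definitions and can be sketched as follows (standard formulations, not code from the patent):

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def sam(ref, est, eps=1e-12):
    """Mean spectral angle in degrees between the per-pixel spectra of two
    (H, W, C) cubes; eps guards against division by zero."""
    r = ref.reshape(-1, ref.shape[-1])
    e = est.reshape(-1, est.shape[-1])
    cos = np.sum(r * e, axis=1) / (
        np.linalg.norm(r, axis=1) * np.linalg.norm(e, axis=1) + eps)
    return np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```

For instance, a uniform error of 0.1 on a unit-range image gives a PSNR of exactly 20 dB, while identical cubes give a spectral angle of essentially zero.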
Table 1: prediction performance on different degraded pictures after dataset augmentation, using the original loss function:
Table 2: prediction performance of the proposed augmentation method on different degraded pictures, using the improved loss function:
As shown in Tables 1 and 2, the effect of the data augmentation is verified on the public Chikusei dataset. Table 1 uses the unmodified loss function after augmentation, i.e. the loss is computed directly against the high-resolution image, while Table 2 adds the loss between the low-resolution image and the downsampled low-resolution image. Regardless of the loss function, the model trained on the augmented dataset is more robust, with obvious gains on the image with Gaussian noise and on the images obtained by the three different downsampling methods. Because augmentation builds on bicubic interpolation, the proportion of bicubic training samples drops and its score decreases slightly, but the spatial and spectral indices of the other three images improve greatly; notably, PSNR improves by nearly 40% on the noisy image after dataset augmentation.
Table 3: comparison of the predictive performance of the proposed loss-function method:
As shown in Table 3, the loss function is verified on the Chikusei and Pavia datasets (without amplification). The improved loss function of the invention performs better on every index on both datasets: across the different sampling factors, PSNR improves by 0.02–0.2, SSIM improves by 0.02–0.1, and SAM is reduced by 0.02–0.3.
Table 4: comprehensive comparison of the predictive performance of the proposed method:
Table 4 verifies the effect of the proposed method on the comprehensive indexes. Changing the loss function on top of the amplified dataset yields a large improvement over the original dataset and loss function; the data demonstrate that the method is significant both for improving spatial-dimension quality and for reducing spectral-dimension loss.
The super-resolution restoration process of a hyperspectral image comprises the following steps. First, the data are preprocessed and amplified (amplification is applied only in the training stage); the amplification step uses the method of the invention to downsample with several methods such as nearest-neighbor interpolation, bilinear interpolation and bicubic interpolation, while Gaussian noise with different standard deviations is added, so as to amplify the data. The data are then divided into patches of different sizes according to the dataset type and the scale factor and fed into the neural network, during which the improved SSTV loss function (applied only in the training stage) is adopted. Finally, the predicted high-resolution image data and the indexes are obtained; the image data are post-processed, and three wavebands are randomly selected as RGB channels to obtain a visualized image.
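The multi-degradation amplification step above can be sketched as follows. This is a minimal illustration, not the patented implementation: spline orders 0/1/3 of `scipy.ndimage.zoom` stand in for nearest-neighbor, bilinear and bicubic resampling, and the noise standard deviations are illustrative values:

```python
import numpy as np
from scipy.ndimage import zoom

def degrade(hsi, scale, noise_stds=(5.0, 10.0), seed=0):
    # hsi: (H, W, C) hyperspectral cube; returns a list of low-resolution
    # variants.  Orders 0 / 1 / 3 approximate nearest / bilinear / bicubic.
    rng = np.random.default_rng(seed)
    lows = [zoom(hsi, (1.0 / scale, 1.0 / scale, 1.0), order=o)
            for o in (0, 1, 3)]
    bicubic_lr = lows[2]
    for std in noise_stds:  # Gaussian noise added on top of the bicubic result
        lows.append(bicubic_lr + rng.normal(0.0, std, bicubic_lr.shape))
    return lows
```

Each call thus turns one training cube into several degraded variants (three interpolation kernels plus noisy copies), which is the mechanism by which the training set is amplified.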
Referring to Figs. 5 and 6, which show the effect of the loss-function improvement on the Chikusei and Pavia datasets, the restored images of the invention have higher definition and recover high-frequency information better than the traditional interpolation methods.
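The improved SSTV-based loss used in training above can be sketched in a few lines. This is a hedged numpy illustration under assumed conventions (`(H, W, C)` cubes, mean-reduced L1 norms; the function names are not from the patent): the total loss sums an L1-plus-SSTV term on the high-resolution pair with the same term on the low-resolution pair obtained by downsampling the reconstruction:

```python
import numpy as np

def sstv(y):
    # spatial-spectral total variation of a (H, W, C) cube: mean absolute
    # horizontal, vertical and spectral finite differences
    dh = np.mean(np.abs(np.diff(y, axis=1)))
    dw = np.mean(np.abs(np.diff(y, axis=0)))
    dc = np.mean(np.abs(np.diff(y, axis=2)))
    return dh + dw + dc

def total_loss(hr, sr, lr, sr_down, alpha=1e-3):
    # L_total = L(hr, sr) + L(lr, sr_down), with L(x, y) = L1 + alpha * L_SSTV
    L = lambda x, y: np.mean(np.abs(x - y)) + alpha * sstv(y)
    return L(hr, sr) + L(lr, sr_down)
```

The weight `alpha = 0.001` matches the balancing factor stated in claim 1; a perfect, piecewise-constant reconstruction drives both terms to zero.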
The above is only a preferred embodiment of the present invention and is not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (2)

1. A hyperspectral image super-resolution restoration method based on multi-degradation-mode data augmentation, characterized by comprising the following steps:
Acquiring a hyperspectral image to obtain a hyperspectral data set;
Preprocessing the hyperspectral dataset: cutting the images in the hyperspectral dataset according to a certain rule to generate non-overlapping subgraphs and overlapping blocks, and taking the subgraphs and the overlapping blocks as test data and training data respectively;
Performing data amplification on the test data and the training data in multiple degradation modes to obtain low-resolution test data and training data, wherein bicubic interpolation, nearest-neighbor interpolation and bilinear interpolation are each used to downsample the test data and the training data to obtain low-resolution images, and Gaussian noise is further added to the low-resolution image obtained by bicubic interpolation;
constructing a hyperspectral image super-resolution model;
Training the hyperspectral image super-resolution model with the training data to obtain a trained hyperspectral image super-resolution model;
Inputting the test data into the trained hyperspectral image super-resolution model to obtain a hyperspectral image with super resolution;
wherein constructing the hyperspectral image super-resolution model comprises:
Inputting the training data into a branch–global spatial-spectral prior network to obtain a super-resolved hyperspectral image, and downsampling the super-resolved hyperspectral image through a Lanczos resampling filter to obtain a Lanczos-downsampled image;
Obtaining a loss function from the loss between the hyperspectral images in the hyperspectral dataset and the super-resolved hyperspectral images, together with the loss between the Lanczos-downsampled images and the amplified images in the test data and the training data, wherein the loss function is:
L_total = L(δ, γ) + L(μ, ε);
Wherein: l (δ, γ) is expressed as the difference between the hyperspectral image in the hyperspectral dataset and the hyperspectral image pair output by the corresponding model; l (μ, ε) is represented as the gap between the low resolution images in the test data and the training data and the low resolution image pair after Lanczos downsampling; delta and gamma represent hyperspectral images in the hyperspectral dataset and hyperspectral images after superdivision respectively, mu represents low resolution images in the test data and the training data, epsilon represents low resolution images after Lanczos downsampling by gamma;
The formula for L(x, y) is:
L(x, y) = L_1 + α·L_SSTV;
wherein: α balances the loss terms and is set to 0.001; x denotes a picture in the hyperspectral dataset, the test data or the training data; y denotes the super-resolved hyperspectral image or the low-resolution image obtained by Lanczos-downsampling it; L_1 is the mean absolute error function; and L_SSTV is the spatial-spectral total variation function;
The spatial-spectral total variation function L_SSTV and the mean absolute error function L_1 are respectively:
L_SSTV = (1/N) Σ_{n=1..N} ( ‖Δ_h y_n‖_1 + ‖Δ_w y_n‖_1 + ‖Δ_c y_n‖_1 );
L_1 = (1/N) Σ_{n=1..N} ‖x_n − y_n‖_1;
wherein: x denotes a picture in the hyperspectral dataset, the test data or the training data; y denotes the super-resolved hyperspectral image or the low-resolution image obtained by Lanczos-downsampling it; Δ_h is the horizontal gradient of y; Δ_w is the vertical gradient of y; Δ_c is the spectral gradient of y; and N is the number of images;
Determining the gradients of the convolution layers in the branch–global spatial-spectral prior network from the loss function by the gradient descent method;
And, according to the gradients of the convolution layers in the branch–global spatial-spectral prior network, performing iterative training with the Adam optimizer until the PSNR, SSIM and SAM indexes of the network no longer improve, at which point training is complete.
2. The hyperspectral image super-resolution restoration method based on multi-degradation-mode data augmentation as recited in claim 1, wherein the branch–global spatial-spectral prior network comprises a plurality of branch networks and a global network; each branch network sequentially comprises a first 3×3 convolution layer, a first spatial-spectral deep feature extraction module, a first up-sampling module and a first 1×1 convolution layer, and the global network comprises a second 3×3 convolution layer, a second spatial-spectral deep feature extraction module, a second up-sampling module and a second 1×1 convolution layer;
the first spatial-spectral deep feature extraction module and the second spatial-spectral deep feature extraction module each comprise a spatial residual module and a spectral attention residual module, and the images processed by the plurality of parallel branch networks are input into the global network to obtain the final super-resolution hyperspectral image.
CN202111342185.0A 2021-11-12 2021-11-12 Hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation Active CN114266957B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111342185.0A CN114266957B (en) 2021-11-12 2021-11-12 Hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation

Publications (2)

Publication Number Publication Date
CN114266957A CN114266957A (en) 2022-04-01
CN114266957B true CN114266957B (en) 2024-05-07

Family

ID=80825184

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111342185.0A Active CN114266957B (en) 2021-11-12 2021-11-12 Hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation

Country Status (1)

Country Link
CN (1) CN114266957B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114998109B (en) * 2022-08-03 2022-10-25 湖南大学 Hyperspectral imaging method, system and medium based on dual RGB image fusion
CN116310959B (en) * 2023-02-21 2023-12-08 南京智蓝芯联信息科技有限公司 Method and system for identifying low-quality camera picture in complex scene
CN117036162B (en) * 2023-06-19 2024-02-09 河北大学 Residual feature attention fusion method for super-resolution of lightweight chest CT image

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103793883A (en) * 2013-12-11 2014-05-14 北京工业大学 Principal component analysis-based imaging spectral image super resolution restoration method
CN106709875A (en) * 2016-12-30 2017-05-24 北京工业大学 Compressed low-resolution image restoration method based on combined deep network
CN111161141A (en) * 2019-11-26 2020-05-15 西安电子科技大学 Hyperspectral simple graph super-resolution method for counterstudy based on inter-band attention mechanism
CN111429349A (en) * 2020-03-23 2020-07-17 西安电子科技大学 Hyperspectral image super-resolution method based on spectrum constraint countermeasure network
CN111696043A (en) * 2020-06-10 2020-09-22 上海理工大学 Hyperspectral image super-resolution reconstruction algorithm of three-dimensional FSRCNN

Also Published As

Publication number Publication date
CN114266957A (en) 2022-04-01

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant