CN114266957A - Hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation - Google Patents


Info

Publication number: CN114266957A
Application number: CN202111342185.0A
Authority: CN (China)
Other versions: CN114266957B (granted publication)
Other languages: Chinese (zh)
Inventors: 王素玉, 车其晓, 张磊
Assignee (original and current): Beijing University of Technology
Legal status: Active (granted)
Prior art keywords: data, resolution, hyperspectral, image, hyperspectral image
Events: application filed by Beijing University of Technology; publication of CN114266957A; application granted; publication of CN114266957B.

Abstract

The invention discloses a hyperspectral image super-resolution restoration method based on multi-degradation-mode data augmentation, belonging to the technical field of image processing. The method comprises: acquiring hyperspectral images and preprocessing the hyperspectral data set, cropping the images in the data set according to a certain rule to generate non-overlapping sub-images and overlapping blocks, which serve as test data and training data respectively; performing data augmentation on the test data and the training data and adding Gaussian noise to obtain noisy low-resolution test data and training data; constructing a hyperspectral image super-resolution model; training the model with the training data to obtain a trained hyperspectral image super-resolution model; and inputting the test data into the trained model to obtain a super-resolved hyperspectral image. The method augments the data and improves the performance of the hyperspectral image super-resolution model.

Description

Hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation.
Background
In recent years, regional conflicts and natural disasters have occurred frequently in many countries, making advanced military reconnaissance, environmental monitoring and resource exploration technologies particularly important, so ground detection technology occupies a core position in every country. A hyperspectral remote sensing image is a high-dimensional image with the advantages of identifying weak information and enabling quantitative detection, and it has high application value in both the military and civil fields. Although traditional fusion-based hyperspectral image super-resolution methods achieve a certain effect, they require the input low-resolution hyperspectral image and a high-resolution auxiliary image to be well registered, and a well-registered auxiliary image is very difficult to obtain in practical application scenarios. To avoid the need for auxiliary information, sparse dictionary learning and low-rank approximation methods have been adopted; however, such hand-crafted prior information can only reflect partial characteristics of the data. With the development of deep learning in recent years, convolutional neural networks (CNNs) have shown a strong advantage over conventional methods in modeling the mapping between low-resolution and high-resolution images.
At present, CNN-based super-resolution restoration algorithms for hyperspectral images have developed greatly, for example: the GDRNN method, which divides adjacent bands into groups and applies intra-group and inter-group fusion mechanisms; the FastHyDe method, which uses sparse representations of images together with their low-rank and self-similarity characteristics; and the Deep Hyperspectral Prior method, which extends the Deep Prior approach to the hyperspectral field with a three-dimensional convolutional network. Although these deep learning methods improve greatly on traditional methods, most of them suffer from few training samples, a single sampling method, and weak spatial-spectral correlation. A hyperspectral image is easily affected by various degradation factors during acquisition: the hyperspectral imager, the imaging environment and the imaged object are all key factors influencing imaging quality, and different degradation factors lead to differences in the acquired information, so most existing methods exhibit poor robustness to data with different degradations.
Disclosure of Invention
In order to solve the above problems, the present invention provides a hyperspectral image super-resolution restoration method based on multi-degradation-mode data augmentation, comprising:
acquiring a hyperspectral image to obtain a hyperspectral data set;
preprocessing the hyperspectral data set: cropping the images in the data set according to a certain rule to generate non-overlapping sub-images and overlapping blocks, which are used as test data and training data respectively;
performing data augmentation on the test data and the training data in degradation modes to obtain low-resolution test data and training data;
constructing a hyperspectral image super-resolution model;
training the hyperspectral image super-resolution model with the training data to obtain a trained model;
inputting the test data into the trained hyperspectral image super-resolution model to obtain a super-resolved hyperspectral image.
Constructing the hyperspectral image super-resolution model comprises the following steps:
inputting the training data into a branch-global spatial-spectral prior network to obtain a super-resolved hyperspectral image, and down-sampling the super-resolved hyperspectral image with a Lanczos resampling filter to obtain a Lanczos down-sampled image;
obtaining a loss function from the loss between the hyperspectral images in the hyperspectral data set and the super-resolved hyperspectral images, and the loss between the Lanczos down-sampled images and the augmented images in the test data and training data;
determining the gradients of the convolutional layers in the branch-global spatial-spectral prior network according to the loss function and gradient descent;
and iteratively training with an Adam optimizer according to the gradients of the convolutional layers in the branch-global spatial-spectral prior network until the PSNR (peak signal-to-noise ratio), SSIM (structural similarity) and SAM (spectral angle mapper) indexes of the network no longer improve, at which point training is finished.
Preferably, the branch-global spatial-spectral prior network comprises several parallel branch networks and a global network. Each branch network comprises, in order, a first 3 × 3 convolutional layer, a first spatial-spectral deep feature extraction module, a first up-sampling module and a first 1 × 1 convolutional layer; the global network comprises a second 3 × 3 convolutional layer, a second spatial-spectral deep feature extraction module, a second up-sampling module and a second 1 × 1 convolutional layer.
The first and second spatial-spectral deep feature extraction modules each comprise a spatial residual module and a spectral attention residual module; the images processed by the parallel branch networks are input into the global network to obtain the final super-resolution hyperspectral image.
Preferably, performing data augmentation on the test data and the training data in degradation modes to obtain low-resolution test data and training data comprises:
down-sampling the test data and the training data with bicubic interpolation, nearest-neighbor interpolation and bilinear interpolation respectively to obtain low-resolution images;
wherein the low-resolution images obtained by bicubic interpolation are additionally processed with Gaussian noise.
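The three interpolation-based degradations plus optional noise can be sketched as follows. This is a minimal illustration using `scipy.ndimage.zoom` (spline order 0 = nearest, 1 = bilinear, 3 = bicubic); the function and variable names are ours, not from the patent:

```python
import numpy as np
from scipy.ndimage import zoom

def degrade(hsi, scale=4, noise_var=None, rng=None):
    """Down-sample a (H, W, C) hyperspectral cube by `scale` with three
    interpolation orders and optionally add Gaussian noise to the bicubic copy."""
    rng = rng or np.random.default_rng(0)
    factors = (1.0 / scale, 1.0 / scale, 1.0)   # spatial shrink, keep all bands
    lows = {name: zoom(hsi, factors, order=order)
            for name, order in [("nearest", 0), ("bilinear", 1), ("bicubic", 3)]}
    if noise_var is not None:                   # noise only on the bicubic result
        lows["bicubic_noisy"] = lows["bicubic"] + rng.normal(
            0.0, np.sqrt(noise_var), lows["bicubic"].shape)
    return lows

lr = degrade(np.random.rand(64, 64, 8), scale=4, noise_var=0.001)
```

A 64 × 64 spatial patch at scale 4 yields 16 × 16 low-resolution versions, matching the sizes reported later in the description.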
Preferably, the loss function is formulated as:
L_total = L(δ, γ) + L(μ, ε);
in the formula: L(δ, γ) represents the difference between a hyperspectral image in the hyperspectral data set and the corresponding super-resolved hyperspectral image output by the model; L(μ, ε) represents the difference between the low-resolution images in the test data and training data and the Lanczos down-sampled low-resolution images; δ and γ respectively denote a hyperspectral image in the data set and the super-resolved hyperspectral image, μ denotes a low-resolution image in the test data and training data, and ε denotes the low-resolution image obtained by Lanczos down-sampling γ.
Preferably, the formula for L(x, y) is:
L(x, y) = L_1 + αL_SSTV;
in the formula: α is used to balance the loss terms and is set to 0.001; x denotes a picture in the hyperspectral data set, the test data or the training data; y denotes the super-resolved hyperspectral image or the Lanczos down-sampled low-resolution image after super-resolution; L_1 is the mean absolute error function; L_SSTV is the spatial-spectral total variation function.
Preferably, the spatial-spectral total variation function L_SSTV and the mean absolute error L_1 are respectively:
L_SSTV = (1/N) Σ_{n=1}^{N} (‖Δ_h y_n‖_1 + ‖Δ_w y_n‖_1 + ‖Δ_c y_n‖_1);
L_1 = (1/N) Σ_{n=1}^{N} ‖x_n − y_n‖_1;
in the formula: x denotes a picture in the hyperspectral data set, the test data or the training data; y denotes the super-resolved hyperspectral image or the Lanczos down-sampled low-resolution image after super-resolution; Δ_h is the horizontal gradient of y; Δ_w is the vertical gradient of y; Δ_c is the spectral gradient of y; N is the number of images.
Compared with the prior art, the invention has the following beneficial effects:
the method augments the data in multiple degradation modes, which simulates as far as possible the low-quality images produced under real conditions by imaging equipment, weather and other factors; the loss function combines the loss between the hyperspectral images in the data set and the super-resolved hyperspectral images with the loss between the Lanczos down-sampled images and the augmented images in the test data and training data, improving the performance of the hyperspectral image super-resolution model.
Drawings
FIG. 1 is a flow chart of a hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation according to the invention;
FIG. 2 is a schematic diagram of the branch-global spatial spectrum prior network of the present invention;
FIG. 3 is a graph comparing restoration under one degradation mode by the present invention and the prior art;
FIG. 4 is a graph comparing restoration under another degradation mode by the present invention and the prior art;
FIG. 5 is a graph comparing restoration of the CHIKUSEI data set by the present invention and the prior art;
FIG. 6 is a graph comparing restoration of the Pavia data set by the present invention and the prior art.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
Referring to fig. 1, the hyperspectral image super-resolution restoration method based on multi-degradation-mode data augmentation includes:
acquiring a hyperspectral image to obtain a hyperspectral data set;
preprocessing the hyperspectral data set: cropping the images in the data set according to a certain rule to generate non-overlapping sub-images and overlapping blocks, used as test data and training data respectively;
specifically, the peripheral parts of the images in the hyperspectral data set are cropped away, leaving the central region; the central region is then cropped into several non-overlapping sub-images used as test data, and overlapping blocks are extracted from the remaining part as training data;
performing data augmentation on the test data and the training data in degradation modes to obtain low-resolution test data and training data;
specifically, the test data and the training data are down-sampled with bicubic interpolation, nearest-neighbor interpolation and bilinear interpolation respectively to obtain low-resolution images;
wherein the low-resolution images obtained by bicubic interpolation are additionally processed with Gaussian noise.
Constructing a hyperspectral image super-resolution model;
training the hyperspectral image super-resolution model with the training data to obtain a trained model;
inputting the test data into the trained hyperspectral image super-resolution model to obtain a super-resolved hyperspectral image.
Constructing the hyperspectral image super-resolution model comprises the following steps:
inputting the training data into a branch-global spatial-spectral prior network to obtain a super-resolved hyperspectral image, and down-sampling it with a Lanczos resampling filter to obtain a Lanczos down-sampled image;
obtaining a loss function from the loss between the hyperspectral images in the data set and the super-resolved hyperspectral images, and the loss between the Lanczos down-sampled images and the augmented images in the test data and training data;
determining the gradients of the convolutional layers in the branch-global spatial-spectral prior network according to the loss function and gradient descent;
and iteratively training with an Adam optimizer according to these gradients until the PSNR (peak signal-to-noise ratio), SSIM (structural similarity) and SAM (spectral angle mapper) indexes of the network no longer improve, at which point training is finished.
Referring to fig. 2, the branch-global spatial-spectral prior network includes several parallel branch networks and a global network. Each branch network comprises, in order, a first 3 × 3 convolutional layer, a first spatial-spectral deep feature extraction module, a first up-sampling module and a first 1 × 1 convolutional layer; the global network comprises a second 3 × 3 convolutional layer, a second spatial-spectral deep feature extraction module, a second up-sampling module and a second 1 × 1 convolutional layer.
The first and second spatial-spectral deep feature extraction modules each comprise a spatial residual module and a spectral attention residual module; the images processed by the parallel branch networks are input into the global network to obtain the final super-resolution hyperspectral image.
Still further, the loss function is formulated as:
L_total = L(δ, γ) + L(μ, ε);
in the formula: L(δ, γ) represents the difference between a hyperspectral image in the hyperspectral data set and the corresponding super-resolved hyperspectral image output by the model; L(μ, ε) represents the difference between the low-resolution images in the test data and training data and the Lanczos down-sampled low-resolution images; δ and γ respectively denote a hyperspectral image in the data set and the super-resolved hyperspectral image, μ denotes a low-resolution image in the test data and training data, and ε denotes the low-resolution image obtained by Lanczos down-sampling γ.
Wherein the formula for L(x, y) is:
L(x, y) = L_1 + αL_SSTV;
in the formula: α is used to balance the loss terms and is set to 0.001; x denotes a picture in the hyperspectral data set, the test data or the training data; y denotes the super-resolved hyperspectral image or the Lanczos down-sampled low-resolution image after super-resolution; L_1 is the mean absolute error function; L_SSTV is the spatial-spectral total variation function.
Further, the spatial-spectral total variation function L_SSTV and the mean absolute error L_1 are respectively:
L_SSTV = (1/N) Σ_{n=1}^{N} (‖Δ_h y_n‖_1 + ‖Δ_w y_n‖_1 + ‖Δ_c y_n‖_1);
L_1 = (1/N) Σ_{n=1}^{N} ‖x_n − y_n‖_1;
in the formula: x denotes a picture in the hyperspectral data set, the test data or the training data; y denotes the super-resolved hyperspectral image or the Lanczos down-sampled low-resolution image after super-resolution; Δ_h is the horizontal gradient of y; Δ_w is the vertical gradient of y; Δ_c is the spectral gradient of y; N is the number of images.
Referring to fig. 3, a noisy low-resolution image is restored in three ways: by traditional interpolation, by a model trained without data augmentation, and by a model trained with data augmentation. Compared with the first two, the model trained on augmented data effectively removes the noise and clearly improves the spatial resolution.
Referring to fig. 4, the low-resolution image is obtained by bilinear interpolation; the boundary lines restored by the model trained on augmented data are clearer than those of the other two approaches, while the image restored by the model without data augmentation shows a hazy effect (more obvious in the color image).
In this example, two public datasets, CHIKUSEI and Pavia, were used.
The data preprocessing comprises the following steps:
the CHIKUSEI data set has 128 spectral bands ranging from 363nm to 1018nm, each spectrum having 2517 × 2335 pixels; because the edge information is lost, the central area of the image is cut to obtain sub-images of 2304 multiplied by 2048 multiplied by 128; then, further dividing the training data and the test data to obtain training data and test data, wherein the specific operation is as follows: extracting the top area of the sub-image to form test data, wherein the test data comprises four non-overlapped hyperspectral images with 512 multiplied by 128 pixels; the remaining area of the sub-image extracts the overlapped block as training data (10% of the data is taken as a verification set), the size of the extracted overlapped block (overlap patches) is 64 × 64 pixels when the upsampling factor is 4 times, allowing 32 pixels to overlap, and the size of the extracted patch is 128 × 128 pixels when the upsampling factor is 8 times, allowing 64 pixels to overlap.
The Pavia data set has 102 spectral bands with 1096 × 1096 pixels per band, but part of the region contains no useful information, so 1096 × 715 pixels per band are retained after removing it, i.e. the central region of the image is cropped to obtain a 1096 × 715 × 102 sub-image. The training data and test data are then divided as follows: the left part of the image is extracted to form the test data, comprising four non-overlapping hyperspectral images of 223 × 223 pixels; overlapping blocks are extracted from the remaining region of the sub-image as training data (10% of the data is used as a validation set). The pixel overlap and patch sizes are the same as for CHIKUSEI.
Data amplification includes:
the hyperspectral image can be affected by various degradation factors in the shooting process, and the characteristics of the hyperspectral imager, the imaging environment and the target are all non-negligible factors, which can cause the quality degradation of the spatial image. Meanwhile, the acquisition of the hyperspectral images is based on a precision instrument, so that compared with common images, the hyperspectral images are limited in training sample number, and the limited sample number cannot train a model with good generalization capability.
In order to solve the problems of single training sample and limited data quantity, the invention adopts various down-sampling methods and Gaussian noise processing to simulate the fuzzy effect of the hyperspectral degraded image in the actual situation. Firstly, training data and test data are obtained through data preprocessing on a CHIKUSEI data set, the pixel size of the training data is 1792 multiplied by 2048 multiplied by 128, the test data is four images with the pixel size of 512 multiplied by 128, then overlapped block extraction is carried out on the training data, when the sampling factor is 4, the size of each overlapped block is 64 multiplied by 128, and when the sampling factor is 8, the size of each overlapped block is 128 multiplied by 128.
Down-sampling all the training and test data obtained above using bicubic interpolation, bilinear interpolation and nearest neighbor interpolation to obtain a low resolution image of 16 × 16 × 128 training data and a low resolution image of 4 × 128 × 128 × 128 test data; then, in a low-resolution image obtained by down-sampling in bicubic interpolation, 1/3 data is taken to carry out noise pollution on the low-resolution image, in order to simulate different degrees of noise pollution, the 1/3 data is divided equally in two halves, when the sampling factor is 4 times, the two parts of data are respectively added with noise with the mean value of 0, the variance of 0.001, the mean value of 0.002 and the variance of 0.002, and for data with the sampling factor of 8 times, noise with the mean value of 0, the variance of 0.0001, the mean value of 0 and the variance of 0.0002 are respectively added.
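The two-level noise contamination of one third of the bicubic data can be sketched as follows (numpy, our own naming; the variances follow the ×4 setting in the text):

```python
import numpy as np

def contaminate(bicubic_lr, variances=(0.001, 0.002), seed=0):
    """Add zero-mean Gaussian noise to 1/3 of the bicubic low-resolution
    patches, split evenly between two variance levels; the rest stay clean."""
    rng = np.random.default_rng(seed)
    n = len(bicubic_lr)
    noisy = bicubic_lr.copy()
    third = n // 3
    half = third // 2
    for idx, var in zip((slice(0, half), slice(half, third)), variances):
        noisy[idx] += rng.normal(0.0, np.sqrt(var), noisy[idx].shape)
    return noisy

patches = np.zeros((12, 16, 16, 8), dtype=np.float64)   # toy patch batch
noisy = contaminate(patches)
```

With 12 patches, the first 2 receive variance-0.001 noise, the next 2 variance-0.002 noise, and the remaining 8 are left untouched.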
Compared with the original data, the data volume is doubled. The augmented data is input into the neural network for training, and the resulting model has stronger generalization ability; in particular, for noisy data, PSNR improves from 28.16 to 38.57 and SAM from 16.08 to 3.38.
The method for constructing the hyperspectral image super-resolution model comprises the following steps:
the branch-global spatial-spectral prior network comprises several parallel branch networks and a global network. Each branch network comprises, in order, a first 3 × 3 convolutional layer, a first spatial-spectral deep feature extraction module, a first up-sampling module and a first 1 × 1 convolutional layer; the global network comprises a second 3 × 3 convolutional layer, a second spatial-spectral deep feature extraction module, a second up-sampling module and a second 1 × 1 convolutional layer.
The first and second spatial-spectral deep feature extraction modules each comprise a spatial residual module and a spectral attention residual module; the images processed by the parallel branch networks are input into the global network to obtain the final super-resolution hyperspectral image.
Specifically, an input low-resolution image is first divided into several overlapping groups; each group of data is input into a branch network for spatial-spectral feature extraction and enlarged with a smaller up-sampling factor; the outputs of all branches are then concatenated and fed to the global network for global spatial-spectral feature extraction and up-sampling. So that the spatial-spectral deep feature extraction modules in each branch network and the global network can share the same structure, a reconstruction layer is added after the up-sampling module on each branch, and a global residual structure is adopted to deepen the network.
Since a hyperspectral image has many bands, the input low-resolution hyperspectral (LR) data is first divided into S overlapping groups,
S = ⌈(B − o) / (p − o)⌉,
where B is the total number of spectral bands, the number of bands per group p is set to 8, and the overlap o between adjacent groups is set to 2. A "back-off" segmentation strategy is employed for efficient processing of the "edge" bands: when the last group of spectral bands contains fewer than p bands, the last p bands are selected as the last group.
After the overlapping groups are divided, the low-resolution images are input into the corresponding branch networks. For each group X_s, a convolutional layer extracts shallow features F_s^0, which are then input into the spatial-spectral feature extraction network (SSPN) for deep feature extraction. The SSPN introduces long skip connections to extract high-frequency information more efficiently and cascades 3 spatial-spectral blocks (SSBs); each SSB comprises a spatial residual module and a spectral attention residual module, where the spatial residual module extracts spatial information with 3 × 3 convolutions and the spectral attention residual module extracts spectral correlation with 1 × 1 convolutions. The SSB computation is
F_s^r = H_r(F_s^{r−1}), r = 1, …, R,
where R denotes the total number of SSB modules, H_r is the function of the r-th SSB, F_s^{r−1} is the input to the r-th SSB, and F_s^R is the finally extracted feature.
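The overlapping-group division with the back-off rule can be sketched in plain Python (p = 8 and o = 2 as in the text; the helper name is ours):

```python
def split_bands(num_bands, p=8, o=2):
    """Return (start, end) index pairs dividing `num_bands` spectral bands
    into groups of p bands with overlap o; the last group "backs off" so it
    still holds exactly p bands."""
    stride = p - o
    groups = []
    start = 0
    while start + p < num_bands:
        groups.append((start, start + p))
        start += stride
    groups.append((num_bands - p, num_bands))   # back-off for the edge bands
    return groups

groups = split_bands(102)   # Pavia has 102 bands
```

For 102 bands this yields ⌈(102 − 2)/(8 − 2)⌉ = 17 groups, the last one covering bands 94–102 regardless of where the regular stride would have ended.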
In the middle of the network, i.e. before the branch SSPN outputs are passed to the global SSPN, an up-sampling module is added to obtain enlarged feature maps, followed by a convolutional layer (reconstruction layer) that reduces the number of output feature channels to the number of spectral bands of the input group. Each branch can thus be regarded as a super-resolution reconstruction sub-network.
The features extracted by the branch networks are finally concatenated and processed by the global network: similar to the local branches, a convolutional layer extracts shallow features, which are sent to the global SSPN and then pass through an up-sampling module and a reconstruction layer in turn to generate the final super-resolution hyperspectral image.
Still further, to ensure the relevance and reliability between spectral features, the invention uses the spatial-spectral total variation (SSTV), which extends the traditional total variation model and considers the correlation between space and spectrum; the L1 loss maintains good convergence in the training phase and balances the reconstruction accuracy of the network, so the target loss function is the weighted sum of the SSTV loss and the L1 loss. Meanwhile, to make full use of the information in the low-resolution images, a Lanczos resampling filter is introduced to down-sample the super-resolved images; the down-sampled images are compared against the original low-resolution images in a loss term that is finally added to the loss on the high-resolution images.
The loss function is formulated as:
L_total = L(δ, γ) + L(μ, ε);
in the formula: L(δ, γ) represents the difference between a hyperspectral image in the hyperspectral data set and the corresponding super-resolved hyperspectral image output by the model; L(μ, ε) represents the difference between the low-resolution images in the test data and training data and the Lanczos down-sampled low-resolution images; δ and γ respectively denote a hyperspectral image in the data set and the super-resolved hyperspectral image, μ denotes a low-resolution image in the test data and training data, and ε denotes the low-resolution image obtained by Lanczos down-sampling γ.
Wherein the formula for L(x, y) is:
L(x, y) = L_1 + αL_SSTV;
in the formula: α is used to balance the loss terms and is set to 0.001; x denotes a picture in the hyperspectral data set, the test data or the training data; y denotes the super-resolved hyperspectral image or the Lanczos down-sampled low-resolution image after super-resolution; L_1 is the mean absolute error function; L_SSTV is the spatial-spectral total variation function.
Further, the spatial-spectral total variation function L_SSTV and the mean absolute error L_1 are respectively:
L_SSTV = (1/N) Σ_{n=1}^{N} (‖Δ_h y_n‖_1 + ‖Δ_w y_n‖_1 + ‖Δ_c y_n‖_1);
L_1 = (1/N) Σ_{n=1}^{N} ‖x_n − y_n‖_1;
in the formula: x denotes a picture in the hyperspectral data set, the test data or the training data; y denotes the super-resolved hyperspectral image or the Lanczos down-sampled low-resolution image after super-resolution; Δ_h is the horizontal gradient of y; Δ_w is the vertical gradient of y; Δ_c is the spectral gradient of y; N is the number of images.
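The combined L_1 + αL_SSTV loss can be sketched in numpy as follows. This is our own illustration of the formulas above (finite differences stand in for the gradients, and means replace the explicit 1/N sums); the patent's network itself is trained in a deep learning framework, which we do not reproduce here:

```python
import numpy as np

def l1_loss(x, y):
    """Mean absolute error over a batch of images."""
    return np.mean(np.abs(x - y))

def sstv_loss(y):
    """Spatial-spectral total variation: mean L1 norm of the horizontal,
    vertical and spectral finite differences of the output batch y (N, H, W, C)."""
    dh = np.abs(np.diff(y, axis=2)).mean()   # horizontal gradient
    dw = np.abs(np.diff(y, axis=1)).mean()   # vertical gradient
    dc = np.abs(np.diff(y, axis=3)).mean()   # spectral gradient
    return dh + dw + dc

def total_loss(x, y, alpha=0.001):
    return l1_loss(x, y) + alpha * sstv_loss(y)

x = np.zeros((2, 8, 8, 4))
y = np.ones((2, 8, 8, 4))
loss = total_loss(x, y)   # constant y: L1 term 1, SSTV term 0
```

A spatially and spectrally constant output has zero total variation, so only the L1 term contributes in this toy case.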
An Adam optimizer is used during training, with the initial learning rate set to 0.0001 and decayed by a factor of ten at 30 epochs; the training stabilizes after about 40 epochs of iteration. The training batch size is 32 when the sampling factor is 4 and 16 when the sampling factor is 8.
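The step decay described above can be sketched as a plain schedule function (our own helper; the patent only states the initial rate and the tenfold decay at epoch 30):

```python
def learning_rate(epoch, base_lr=1e-4, decay_epoch=30, decay=10.0):
    """Step schedule: base_lr before `decay_epoch`, base_lr / decay afterwards."""
    return base_lr if epoch < decay_epoch else base_lr / decay

lrs = [learning_rate(e) for e in (0, 29, 30, 40)]
```

In a framework-based training loop the same schedule would typically be handed to the optimizer as a step learning-rate scheduler.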
Model prediction
After model training is completed, the pre-trained model parameters are loaded and the preprocessed test data is input to obtain the super-resolution result indexes and prediction data in npy format. To visualize the data, the npy format is converted to mat format, and finally any 3 bands are selected in MATLAB as the RGB channels to output a visualized image.
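The npy-to-mat conversion step can be sketched with scipy as follows (the file names and the `"sr"` variable key are ours, not specified by the patent):

```python
import os
import tempfile
import numpy as np
from scipy.io import savemat, loadmat

def npy_to_mat(npy_path, mat_path, key="sr"):
    """Load a predicted hyperspectral cube saved as .npy and re-save it
    as a MATLAB .mat file under the given variable name."""
    cube = np.load(npy_path)
    savemat(mat_path, {key: cube})
    return cube.shape

tmp = tempfile.mkdtemp()
npy_path = os.path.join(tmp, "pred.npy")
mat_path = os.path.join(tmp, "pred.mat")
np.save(npy_path, np.random.rand(32, 32, 8))   # stand-in prediction cube
shape = npy_to_mat(npy_path, mat_path)
```

The resulting .mat file can then be opened in MATLAB and any three bands composited into an RGB image for visualization.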
The evaluation indexes of the model are peak signal-to-noise ratio (PSNR), structural similarity (SSIM), root mean square error (RMSE), cross correlation (CC), spectral angle mapper (SAM), and the relative dimensionless global error in synthesis (ERGAS). The first three mainly evaluate spatial quality: the larger the value, the better the image quality. The last three evaluate spectral quality: the smaller the value, the smaller the spectral error. The prediction performance of the algorithm is evaluated on the CHIKUSEI and PAVIA data sets, and the method achieves good reconstruction in both the spatial and spectral dimensions. The experimental results are shown in Tables 1, 2, 3 and 4.
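Three of the indexes above can be sketched as follows. This is a generic NumPy formulation of PSNR, RMSE, and SAM, assuming images scaled to [0, 1]; it is not the patent's exact evaluation code.

```python
import numpy as np

def rmse(ref, est):
    """Root mean square error (spatial quality; lower is better)."""
    return float(np.sqrt(np.mean((ref - est) ** 2)))

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB (spatial quality; higher is better)."""
    return float(20 * np.log10(peak / rmse(ref, est)))

def sam(ref, est, eps=1e-12):
    """Spectral angle mapper, in radians, averaged over pixels.

    ref, est: (C, H, W) cubes; the angle is taken between the
    C-dimensional spectra at each pixel (lower is better).
    """
    dot = np.sum(ref * est, axis=0)
    norms = np.linalg.norm(ref, axis=0) * np.linalg.norm(est, axis=0)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))
```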
Table 1: comparison of prediction performance on different degraded pictures for the proposed augmented-data-set method with the original loss function:
[Table 1 appears as an image in the original document; its numerical values are not recoverable from the text.]
Table 2: comparison of prediction performance on different degraded pictures for the proposed augmented-data-set method with the improved loss function:
[Table 2 appears as an image in the original document; its numerical values are not recoverable from the text.]
As shown in Tables 1 and 2, the effect of the data augmentation of the present invention is verified on the public data set CHIKUSEI. Table 1 uses the unmodified loss function after data augmentation, i.e., the loss is computed directly against the high-resolution image; Table 2 additionally includes the loss between the low-resolution image and the down-sampled super-resolved image. It can be seen that, regardless of the loss function, the model trained on the augmented data is more robust, with clear gains on images with Gaussian noise and on images produced by the three different down-sampling methods. Because the augmentation is carried out on top of bicubic interpolation, which effectively reduces the proportion of bicubic-interpolated training samples, the result on bicubic images drops slightly; however, the spatial and spectral indexes of the other three image types improve substantially, and in particular PSNR improves by nearly 40% after adding the augmented images with Gaussian noise.
Table 3: comparison of prediction performance for the loss function method proposed by the present invention:
[Table 3 appears as an image in the original document; its numerical values are not recoverable from the text.]
As shown in Table 3, the loss function is verified on two data sets (CHIKUSEI and PAVIA). The improved loss function of the invention performs better on every index of both data sets: across different sampling factors, PSNR improves by 0.02-0.2, SSIM improves by 0.02-0.1, and SAM decreases by 0.02-0.3.
Table 4: comprehensive prediction performance comparison for the method proposed by the present invention:
[Table 4 appears as an image in the original document; its numerical values are not recoverable from the text.]
Table 4 verifies the effect of the proposed method on the comprehensive indexes, where the loss function is modified on top of the augmented data set. Compared with the original data set and loss function, the effect is greatly improved, and the data prove that the method of the invention is of great significance for improving spatial quality and reducing spectral loss.
The hyperspectral image super-resolution restoration process is as follows. First, the data are preprocessed and augmented (augmentation is applied only in the training stage); the augmentation step down-samples with several methods such as nearest-neighbor, bilinear, and bicubic interpolation, and simultaneously adds Gaussian noise with different standard deviations to enlarge the data set. The data are then divided into patches of different sizes according to the data set type and scale factor and fed to the neural network, where the improved SSTV loss function of the method is used (training stage only). Finally, the predicted high-resolution image data and indexes are obtained; the image data are post-processed, and three bands are randomly selected as the RGB channels to obtain a visualized image.
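The augmentation step above can be sketched as follows. Pure NumPy is used here: nearest-neighbor down-sampling is done by index striding and Gaussian noise is drawn once per standard deviation. In practice the bilinear and bicubic variants would come from an image library such as OpenCV (`cv2.resize` with `INTER_LINEAR` / `INTER_CUBIC`); that tooling choice and the example sigma values are assumptions, not part of the patent.

```python
import numpy as np

def nearest_downsample(cube, factor):
    """Nearest-neighbor down-sampling of a (C, H, W) cube by striding."""
    return cube[:, ::factor, ::factor]

def add_gaussian_noise(cube, sigma, rng=None):
    """Add zero-mean Gaussian noise with standard deviation sigma."""
    rng = np.random.default_rng(rng)
    return cube + rng.normal(0.0, sigma, size=cube.shape)

def augment(cube, factor, sigmas=(0.01, 0.05)):
    """Produce several degraded low-resolution versions of one cube:
    a clean nearest-neighbor version plus noisy copies, one per sigma."""
    low = nearest_downsample(cube, factor)
    return [low] + [add_gaussian_noise(low, s) for s in sigmas]
```

Each degraded copy keeps the same high-resolution ground truth, which is what multiplies the effective size of the training set.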
Referring to figs. 5 and 6, which verify the improved effect of the loss function on the CHIKUSEI and PAVIA data sets respectively, the restored image has higher definition and better recovery of high-frequency information than conventional interpolation methods.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. A hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation is characterized by comprising the following steps:
acquiring a hyperspectral image to obtain a hyperspectral data set;
preprocessing the hyperspectral data set, cutting images in the hyperspectral data set according to a certain rule, generating non-overlapped subgraphs and overlapped blocks, and respectively using the subgraphs and the overlapped blocks as test data and training data;
performing data amplification on the test data and the training data in a degradation mode to obtain the test data and the training data with low resolution;
constructing a hyper-resolution model of the hyper-spectral image;
training the hyperspectral image super-resolution model by adopting the training data so as to obtain a trained hyperspectral image super-resolution model;
inputting the test data into the trained hyperspectral image super-resolution model to obtain a hyperspectral image with super-resolution;
constructing the hyperspectral image super-resolution model comprises the following steps:
inputting the training data into a branch-global spatial spectrum prior network to obtain a hyper-divided hyperspectral image, and performing down-sampling on the hyper-divided hyperspectral image through a Lanczos resampling filter to obtain a Lanczos down-sampling image;
obtaining a loss function according to the loss between a hyperspectral image in the hyperspectral data set and the hyperspectral image after hyper-differentiation and the loss between the Lanczos downsampling image and the images in the test data and the training data after amplification;
determining the gradient of a convolution layer in the branch-global spatial spectrum prior network according to the loss function and a gradient descent method;
and according to the gradient of the convolution layer in the branch-global spatial spectrum prior network, using an Adam optimizer for iterative training until the PSNR (peak signal-to-noise ratio), SSIM (structural similarity) and SAM (spectral angle mapper) indexes of the branch-global spatial spectrum prior network no longer improve, at which point training is finished.
2. The multi-degradation-mode-data-augmentation-based hyperspectral image super-resolution restoration method according to claim 1, wherein the branch-global spatial spectrum prior network comprises a plurality of parallel branch networks and a global network; each branch network sequentially comprises a first 3 × 3 convolutional layer, a first spatial-spectral deep feature extraction module, a first up-sampling module and a first 1 × 1 convolutional layer, and the global network comprises a second 3 × 3 convolutional layer, a second spatial-spectral deep feature extraction module, a second up-sampling module and a second 1 × 1 convolutional layer;
the first spatial spectrum deep layer feature extraction module and the second spatial spectrum deep layer feature extraction module respectively comprise a spatial residual error module and a spectrum attention residual error module, and images processed by a plurality of parallel branch networks are input into the global network to obtain a final super-resolution hyperspectral image.
3. The method for restoring the super-resolution of the hyperspectral images based on the multi-degradation mode data augmentation as claimed in claim 1, wherein the step of performing data augmentation on the test data and the training data in a degradation mode to obtain the test data and the training data with low resolution comprises the following steps:
respectively performing down-sampling on the test data and the training data by adopting bicubic interpolation, nearest neighbor interpolation and bilinear interpolation to obtain a low-resolution image;
wherein the low-resolution image obtained by bicubic interpolation is additionally subjected to Gaussian noise processing.
4. The method for restoring the super-resolution of the hyperspectral image based on the multi-degradation mode data augmentation as claimed in claim 3, wherein the loss function formula is as follows:
Ltotal = L(δ, γ) + L(μ, ε);

in the formula: L(δ, γ) represents the loss between the hyperspectral image in the hyperspectral data set and the corresponding super-resolved hyperspectral image output by the model; L(μ, ε) represents the loss between the low-resolution images in the test data and training data and the corresponding Lanczos down-sampled images; δ and γ respectively represent the hyperspectral image in the hyperspectral data set and the super-resolved hyperspectral image, μ represents the low-resolution image in the test data and training data, and ε represents the low-resolution image obtained by Lanczos down-sampling of γ.
5. The method for restoring the super-resolution of the hyperspectral image based on the multi-degradation mode data augmentation according to claim 4, wherein the formula L(x, y) is:

L(x, y) = L1 + α·LSSTV

in the formula: α is a weight used to balance the two loss terms and is set to 0.001; x denotes a picture in the hyperspectral data set, the test data, or the training data; y denotes the super-resolved hyperspectral image or the Lanczos down-sampled low-resolution image of the super-resolved result; L1 is the mean absolute error function; LSSTV is the spatial-spectral total variation function.
6. The method for restoring the super-resolution of the hyperspectral image based on the multi-degradation mode data augmentation according to claim 5, wherein the spatial-spectral total variation function LSSTV and the mean absolute error L1 are respectively:

LSSTV = (1/N) · Σᵢ ( ‖∇h yᵢ‖₁ + ‖∇w yᵢ‖₁ + ‖∇c yᵢ‖₁ )

L1 = (1/N) · Σᵢ ‖xᵢ − yᵢ‖₁

in the formula: x denotes a picture in the hyperspectral data set, the test data, or the training data; y denotes the super-resolved hyperspectral image or the Lanczos down-sampled low-resolution image of the super-resolved result; ∇h is the horizontal gradient of y; ∇w is the vertical gradient of y; ∇c is the spectral gradient of y; N is the number of images.
CN202111342185.0A 2021-11-12 2021-11-12 Hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation Active CN114266957B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111342185.0A CN114266957B (en) 2021-11-12 2021-11-12 Hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation

Publications (2)

Publication Number Publication Date
CN114266957A true CN114266957A (en) 2022-04-01
CN114266957B CN114266957B (en) 2024-05-07

Family

ID=80825184



Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103793883A (en) * 2013-12-11 2014-05-14 北京工业大学 Principal component analysis-based imaging spectral image super resolution restoration method
CN106709875A (en) * 2016-12-30 2017-05-24 北京工业大学 Compressed low-resolution image restoration method based on combined deep network
CN111161141A (en) * 2019-11-26 2020-05-15 西安电子科技大学 Hyperspectral simple graph super-resolution method for counterstudy based on inter-band attention mechanism
CN111429349A (en) * 2020-03-23 2020-07-17 西安电子科技大学 Hyperspectral image super-resolution method based on spectrum constraint countermeasure network
CN111696043A (en) * 2020-06-10 2020-09-22 上海理工大学 Hyperspectral image super-resolution reconstruction algorithm of three-dimensional FSRCNN


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024027095A1 (en) * 2022-08-03 2024-02-08 湖南大学 Hyperspectral imaging method and system based on double rgb image fusion, and medium
CN116310959A (en) * 2023-02-21 2023-06-23 南京智蓝芯联信息科技有限公司 Method and system for identifying low-quality camera picture in complex scene
CN116310959B (en) * 2023-02-21 2023-12-08 南京智蓝芯联信息科技有限公司 Method and system for identifying low-quality camera picture in complex scene
CN117036162A (en) * 2023-06-19 2023-11-10 河北大学 Residual feature attention fusion method for super-resolution of lightweight chest CT image
CN117036162B (en) * 2023-06-19 2024-02-09 河北大学 Residual feature attention fusion method for super-resolution of lightweight chest CT image



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant