CN111598786A - Hyperspectral image unmixing method based on deep denoising self-coding network - Google Patents

Hyperspectral image unmixing method based on deep denoising self-coding network

Info

Publication number
CN111598786A
Authority
CN
China
Prior art keywords
network
image data
hyperspectral image
deep
coding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911188298.2A
Other languages
Chinese (zh)
Other versions
CN111598786B (en)
Inventor
孔繁锵
温珂瑶
李丹
周永波
赵瞬民
胡可迪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201911188298.2A priority Critical patent/CN111598786B/en
Publication of CN111598786A publication Critical patent/CN111598786A/en
Application granted granted Critical
Publication of CN111598786B publication Critical patent/CN111598786B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G06T9/002 Image coding using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10036 Multispectral image; Hyperspectral image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a hyperspectral image unmixing method based on a deep denoising self-coding network. The invention comprises the following steps: designing a deep denoising self-coding network; inputting a group of training image data into the deep denoising self-coding network; extracting the main features of the original data through encoding and reconstructing the original data through decoding; training continuously to obtain optimized network parameters so that the reconstructed data becomes closer to the original data; and, after training is finished, inputting the test data, solving the abundance coefficients of the hyperspectral image through the hidden layer, taking the weights of the last decoder layer as the solved end member matrix, and outputting the result. According to the invention, on the basis of the traditional denoising encoder, the weights of the hidden layer and the decoding layer are constrained to be non-negative, a sum-to-one constraint is imposed on the hidden layer, and the L21 constraint is added to the objective function as a regularization term, so that the joint sparsity between adjacent pixels is well exploited and the accuracy of abundance estimation is improved.

Description

Hyperspectral image unmixing method based on deep denoising self-coding network
Technical Field
The invention belongs to the technical field of image processing and machine learning, and further relates to a hyperspectral image unmixing method based on a deep denoising self-coding network in the technical field of sparse unmixing.
Background
The hyperspectral image is characterized by high spectral resolution but low spatial resolution. Owing to factors such as the atmospheric transmission mixing effect, the complexity of ground objects and the low spatial resolution of the hyperspectral imager, a large number of mixed pixels exist in hyperspectral data, which restricts the improvement of hyperspectral image processing accuracy and has become a major obstacle to the further development of hyperspectral remote sensing technology. Therefore, effective decomposition of mixed pixels has become an important prerequisite for the wide application of hyperspectral images. A mixed pixel can be regarded as a group of basis vectors combined in certain proportions, where the basis vectors are the end members and the proportions are the abundances.
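For concreteness, the linear mixing model referred to below can be written in a standard form (not quoted from the original text) as:
x = A·s + n = s_1·a_1 + s_2·a_2 + … + s_m·a_m + n,  with s_i ≥ 0 and s_1 + … + s_m = 1
where x is an observed mixed-pixel spectrum, a_1, …, a_m are the end member spectra (the basis vectors), s_1, …, s_m are the abundances, and n is noise.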
Models for hyperspectral mixed pixels can be divided into linear mixing models and nonlinear mixing models; because the linear mixing model is simple to build and has a clear physical meaning, it is currently the more widely adopted model in research at home and abroad. Conventional unmixing algorithms include statistics-based and geometry-based hyperspectral unmixing algorithms. With the vigorous development of compressive sensing and sparse representation theory, Iordache et al. introduced spectral sparsity into the unmixing model, replaced the end member set with a known spectral library, and proposed the sparse unmixing algorithm.
At present, sparse unmixing algorithms are mainly classified into convex optimization algorithms and greedy algorithms. The convex optimization algorithms mainly include SUnSAL, CL-SUnSAL, SUnSAL-TV and weighted L1 regularization methods, which use the L1 norm to characterize the sparsity of the abundance coefficients under certain conditions and solve the problem efficiently. However, convex optimization algorithms solve more slowly than greedy algorithms. Greedy algorithms such as orthogonal matching pursuit (OMP) and matching pursuit (MP) are mainly based on the single measurement vector (SMV) model; they do not consider the similarity between adjacent pixels when extracting end members and easily fall into local optima. Joint sparse unmixing algorithms based on the multiple measurement vector (MMV) model, such as simultaneous orthogonal matching pursuit (SOMP) and subspace matching pursuit (SMP), adopt a joint sparse model combined with a blocking strategy to extract end members and can obtain the global optimal solution more accurately than OMP, MP and similar algorithms; their drawback is that the end member set contains too many redundant end members, which affects the accuracy of abundance reconstruction.
Disclosure of Invention
The invention aims to provide a hyperspectral image unmixing method based on a deep denoising self-coding network, so as to improve the sparse unmixing accuracy of hyperspectral images.
The technical scheme of the invention is as follows: a hyperspectral image unmixing method based on a deep denoising self-coding network comprises the following steps:
(1) on the basis of a denoising self-encoder, adding a regularization term to the network objective function according to the two physical characteristics of the abundance coefficients, namely the sum-to-one constraint and the non-negativity constraint, thereby forming a deep denoising self-coding network;
(2) inputting a group of training image data into a deep denoising self-coding network, training the deep denoising self-coding network to obtain optimized network parameters, and obtaining a hyperspectral unmixing network model;
(3) inputting the existing hyperspectral image data to be processed into a deep denoising self-coding network, extracting the characteristics of the hyperspectral image data through the coding process, and then decoding to obtain reconstructed original image data.
Further, in step (1), the deep denoising self-coding network includes an input layer, a hidden layer and an output layer; the encoding process goes from the input layer to the hidden layer and performs feature extraction on the input hyperspectral image data to obtain preliminary abundance coefficients, while the decoding process goes from the hidden layer to the output layer and decodes the obtained preliminary abundance coefficients to obtain the reconstructed original image data. Specifically:
a sum-to-one constraint is imposed on the abundance coefficients, and a regularization term is added to the network objective function to reduce the redundant end members of the encoder while introducing joint sparsity between adjacent pixels; ReLU is selected as the activation function, so that the nonlinear characteristics of the data can be extracted while the non-negativity of the abundance coefficients is satisfied; the objective function of the deep denoising self-coding network is as follows:
J(W) = (1/2)·||X̄ - Ā·σ(WX)||_F^2 + λ·||σ(WX)||_2,1
where X represents the input hyperspectral data, W the encoder weights, σ(·) the hidden-layer activation function, and X̄ and Ā the matrices obtained by augmenting the data and the decoder weights A with a row of constants δ:
X̄ = [X; δ·1^T],  Ā = [A; δ·1^T]
||·||_F^2 denotes the squared Frobenius norm, ||·||_2,1 denotes the sum of the 2-norms of the row vectors of a matrix, λ denotes the Lagrangian coefficient, and the value of λ is set to 10e-6.
Further, in step (2), training image data of size w × h × c is preprocessed to obtain w × h training samples of size 1 × c, and these samples are then input to the deep denoising self-coding network for multiple rounds of training to obtain the optimized network parameters.
Further, before the hyperspectral image data to be processed is input into the deep denoising self-coding network, the deep denoising self-coding network model trained in step (2) is loaded and the network parameters are updated to those obtained in step (2); the hyperspectral image data is then input, the abundance coefficients are obtained through the encoding process of the network, and the end member matrix is obtained through the decoding process.
Further, the training image data in the step (2) refers to hyperspectral image data serving as a training set, and the hyperspectral image to be processed in the step (3) refers to hyperspectral image data serving as a test set.
The steps of the invention comprise: designing a deep denoising self-coding network in which, on the basis of a traditional denoising self-encoder, the weights of the hidden layer and the decoding layer are constrained to be non-negative, a sum-to-one constraint is imposed, the L21 constraint is added to the objective function as a regularization term, and the joint sparsity between adjacent pixels is exploited; inputting a group of training image data into the deep denoising self-coding network; extracting the main features of the original data through network encoding and reconstructing the original data through decoding; and training continuously to obtain optimized network parameters so that the reconstructed data becomes closer to the original data;
and, after training is finished, inputting the test data, solving the abundance coefficients of the hyperspectral image through the hidden layer, taking the weights of the last decoder layer as the solved end member matrix, and outputting the result.
The invention has the following beneficial effects: by exploiting the properties of the denoising self-encoder, the method applies it to a hyperspectral unmixing network model, adds the L21 regularization term to the objective function, reduces the redundant rows of the encoder, makes good use of the joint sparsity between adjacent pixels, and overcomes the low unmixing accuracy of prior hyperspectral image unmixing methods, thereby achieving high hyperspectral unmixing accuracy.
Drawings
FIG. 1 is a schematic diagram of the architecture of the network model of the present invention;
FIG. 2 is a schematic flow diagram of the network model of the present invention;
FIG. 3 is a schematic diagram of the original abundance image corresponding to 9 end members in the simulation data 1 of the present invention;
FIG. 4 is a schematic of the original abundance image in simulation data 2 of the present invention;
FIG. 5 is a schematic diagram of the abundance estimation of end members 1, 5 and 9 of simulation data 1 under 20 dB Gaussian noise in accordance with the present invention;
FIG. 6 is a schematic diagram of the abundance estimation of end members 1, 3 and 5 of simulation data 2 under 20 dB Gaussian noise in accordance with the present invention;
FIG. 7 is a schematic diagram of the abundance images of three different materials in the real data and the corresponding reconstructed abundance images in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the problems involved in the technical solutions of the present invention will be further described below with reference to the accompanying drawings.
A hyperspectral image unmixing method based on a deep denoising self-coding network comprises the following steps:
(1) on the basis of a denoising self-encoder, adding a regularization term to the network objective function according to the two physical characteristics of the abundance coefficients, namely the sum-to-one constraint and the non-negativity constraint, thereby forming a deep denoising self-coding network;
(2) inputting a group of training image data into a deep denoising self-coding network, training the deep denoising self-coding network to obtain optimized network parameters, and obtaining a hyperspectral unmixing network model;
(3) inputting the existing hyperspectral image data to be processed into a deep denoising self-coding network, extracting the characteristics of the hyperspectral image data through the coding process, and then decoding to obtain reconstructed original image data.
Further, in step (1), the deep denoising self-coding network includes an input layer, a hidden layer and an output layer; the encoding process goes from the input layer to the hidden layer and performs feature extraction on the input hyperspectral image data to obtain preliminary abundance coefficients, while the decoding process goes from the hidden layer to the output layer and decodes the obtained preliminary abundance coefficients to obtain the reconstructed original image data. Specifically:
a sum-to-one constraint is imposed on the abundance coefficients, and a regularization term is added to the network objective function to reduce the redundant end members of the encoder while introducing joint sparsity between adjacent pixels; ReLU is selected as the activation function, so that the nonlinear characteristics of the data can be extracted while the non-negativity of the abundance coefficients is satisfied; the objective function of the deep denoising self-coding network is as follows:
J(W) = (1/2)·||X̄ - Ā·σ(WX)||_F^2 + λ·||σ(WX)||_2,1
where X represents the input hyperspectral data, W the encoder weights, σ(·) the hidden-layer activation function, and X̄ and Ā the matrices obtained by augmenting the data and the decoder weights A with a row of constants δ:
X̄ = [X; δ·1^T],  Ā = [A; δ·1^T]
||·||_F^2 denotes the squared Frobenius norm, ||·||_2,1 denotes the sum of the 2-norms of the row vectors of a matrix, λ denotes the Lagrangian coefficient, and the value of λ is set to 10e-6.
Further, in step (2), training image data of size w × h × c is preprocessed to obtain w × h training samples of size 1 × c, and these samples are then input to the deep denoising self-coding network for multiple rounds of training to obtain the optimized network parameters.
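A minimal sketch of this preprocessing step, assuming the image cube is stored as a NumPy array of shape (w, h, c); the function name is illustrative and not from the patent:

import numpy as np

def flatten_cube(cube):
    # Reshape a (w, h, c) hyperspectral cube into w*h training samples,
    # one 1 x c spectral vector per pixel (transposing this matrix gives the
    # X used in the formulas, with pixels as columns).
    w, h, c = cube.shape
    return cube.reshape(w * h, c)

# Example: simulation data 1 uses a 100 x 100 x 224 cube -> 10000 samples of length 224.
samples = flatten_cube(np.random.rand(100, 100, 224))
print(samples.shape)  # (10000, 224)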
Further, before the hyperspectral image data to be processed is input into the deep denoising self-coding network, the deep denoising self-coding network model trained in step (2) is loaded and the network parameters are updated to those obtained in step (2); the hyperspectral image data is then input, the abundance coefficients are obtained through the encoding process of the network, and the end member matrix is obtained through the decoding process.
Further, the training image data in the step (2) refers to hyperspectral image data serving as a training set, and the hyperspectral image to be processed in the step (3) refers to hyperspectral image data serving as a test set.
As shown in Figs. 1-2, step S1: on the basis of a traditional denoising self-encoder, the weights of the hidden layer and the decoding layer are constrained to be non-negative, a sum-to-one constraint is imposed on the hidden layer, the L21 constraint is added to the objective function as a regularization term, and the joint sparsity between adjacent pixels is exploited.
Step S11: the denoising autoencoder is marked as DAE and can be regarded as a three-layer neural network which comprises an input layer, a hidden layer and an output layer; from the input layer to the hidden layer is the encoding process and from the hidden layer to the output layer is the decoding process.
Let encoder be f (x), hidden layer activation function be σ (x), decoder be g (x), then the encoding process is expressed as:
S=f(X)=σ(WX)
where X represents the input hyperspectral data, W represents the encoder weights connecting the input layer and the hidden layer, and S is the hidden layer output (i.e., the abundance coefficient).
The decoding process is represented as:
X̂ = g(S) = AS
where A denotes the decoder weights (i.e. the end members) connecting the hidden layer and the output layer, and X̂ represents the reconstructed data.
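As a minimal illustration of the encoding and decoding passes just described (an assumed NumPy sketch with illustrative dimensions, not code from the patent):

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def encode(W, X):
    # S = f(X) = sigma(W X): abundance coefficients, one column per pixel
    return relu(W @ X)

def decode(A, S):
    # X_hat = g(S) = A S: linear reconstruction from end members A and abundances S
    return A @ S

# Toy dimensions: c bands, m end members, n pixels
c, m, n = 224, 9, 1000
X = np.abs(np.random.randn(c, n))
W = 0.01 * np.random.randn(m, c)
A = np.abs(np.random.randn(c, m))

S = encode(W, X)
X_hat = decode(A, S)
# average reconstruction error over the n pixels, as in the formula below
recon_error = np.mean(np.sum((X - X_hat) ** 2, axis=0))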
In the unmixing problem, the decoder weights A and the hidden layer S correspond to the end member matrix and the abundance coefficients respectively; the network learns the weights and the hidden representation by minimizing the average reconstruction error:
L = (1/n)·Σ_{i=1}^{n} ||x_i - x̂_i||_2^2
where Σ denotes summation, i indexes the columns (pixels) of the hyperspectral data and ranges over the integers 1 to n, and ||·||_2^2 is the squared 2-norm; no bias is used in the network design, because the bias tends to become a large negative value during training.
Step S12: the deep denoising autoencoder is denoted DDAE, and the network model is shown in FIG. 1. For the unmixing problem, the sum-to-one constraint on the abundance coefficients is an important constraint; to impose it, the data to be reconstructed and the weights A are each augmented with a row of constants δ, and the augmented matrices are denoted X̄ and Ā respectively:
X̄ = [X; δ·1^T],  Ā = [A; δ·1^T]
According to the decoding process, X̄ ≈ Ā·S is obtained, so the column vectors of the abundance matrix satisfy the sum-to-one constraint.
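A minimal NumPy sketch of this augmentation; the helper name and the value of δ are illustrative assumptions, since the patent does not specify δ:

import numpy as np

def augment(X, A, delta=30.0):
    # Append a row of constants delta to the data matrix and to the decoder
    # weights, so that matching the last row forces the abundances of each
    # pixel to sum (approximately) to one.
    n = X.shape[1]
    m = A.shape[1]
    X_bar = np.vstack([X, delta * np.ones((1, n))])
    A_bar = np.vstack([A, delta * np.ones((1, m))])
    return X_bar, A_bar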
In practical applications the data is always noisy, and the noise together with errors in estimating the number of end members can cause the unmixing performance to degrade sharply. To address this, a regularization term ||W^T||_2,1 is introduced to reduce the redundant rows of the encoder; however, this term does not reflect the joint sparsity between adjacent pixels, so it is improved to ||σ(WX)||_2,1, which reduces redundant end members, introduces joint sparsity, and improves the abundance estimation performance. The objective function of W is defined as:
J(W) = (1/2)·||X̄ - Ā·σ(WX)||_F^2 + λ·||σ(WX)||_2,1
where ||·||_F^2 denotes the squared Frobenius norm, ||·||_2,1 denotes the sum of the 2-norms of the row vectors of a matrix, and λ denotes the Lagrangian coefficient, set to 10e-6. According to the requirements of linear unmixing, the decoding function should be a linear mapping, i.e. X̂ = g(S) = AS; therefore, to achieve the optimal solution the encoding function should also be linear. Meanwhile, hyperspectral unmixing requires the abundance coefficients to be non-negative, so the activation function must also keep the hidden layer (the abundance S) non-negative; ReLU is therefore chosen as the activation function, i.e. σ(x) = max(x, 0), and when S ≥ 0 the encoder behaves as a linear mapping.
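Under the same assumptions as the sketches above, the objective function can be evaluated as follows (illustrative NumPy code; the helper names are not from the patent):

import numpy as np

def l21_norm(M):
    # sum of the 2-norms of the rows of M
    return np.sum(np.sqrt(np.sum(M ** 2, axis=1)))

def objective(W, X, X_bar, A_bar, lam=10e-6):
    S = np.maximum(W @ X, 0.0)          # sigma(WX); ReLU keeps the abundances non-negative
    resid = X_bar - A_bar @ S           # reconstruction residual on the augmented data
    return 0.5 * np.sum(resid ** 2) + lam * l21_norm(S)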
According to the unmixing requirements, the decoder weights must also be non-negative, i.e. A ≥ 0; this is handled with the ReLU function, which guarantees the non-negativity of A during the optimization process.
Step S2: inputting a group of training image data into the deep denoising self-coding network;
Input the training data X and initialize the encoder weights W and the decoder weights A: end member estimation is first performed with the SMP algorithm to obtain A0 and the corresponding abundance coefficients S0; the encoder weights are then initialized as W = S0·X^-1 and the decoder weights as A = A0.
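A hedged sketch of this initialization: A0 and S0 are assumed to be supplied by an external SMP routine (not reproduced here), and the Moore-Penrose pseudo-inverse stands in for X^-1 since X is generally not square:

import numpy as np

def initialize(X, A0, S0):
    # A0, S0: end members and abundances from an SMP end member estimation (external).
    W0 = S0 @ np.linalg.pinv(X)   # W = S0 X^-1 in the text; pinv handles non-square X
    return W0, A0.copy()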
Step S3: extracting the main features of the original data through encoding, and reconstructing the original data through decoding;
The training data is encoded to obtain S = f(X) = σ(WX), where S is the extracted feature; S is then decoded to obtain X̂ = AS, i.e. the reconstructed original data.
Step S4: training continuously to obtain optimized network parameters, so that the reconstructed data becomes closer to the original data;
The number of training iterations is set to 100 and the learning rate to 10e-4; during training, an RMSProp optimizer (RMSPropOptimizer) is used to continuously optimize the network parameters, so that the reconstructed data gradually approaches the original data.
Step S5: after training is finished, the test data is input and S is solved through the encoding process; the obtained S is the abundance coefficient matrix of the hyperspectral image, the decoder weight A is the solved end member matrix, and the result S is finally output.
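A minimal usage sketch of this test stage under the same assumptions (function and variable names are illustrative):

import numpy as np

def unmix(W, A_bar, X_test):
    # The hidden layer gives the abundance coefficients; the decoder weights
    # (without the augmented delta row) give the end member matrix.
    S_test = np.maximum(W @ X_test, 0.0)
    endmembers = A_bar[:-1, :]
    return S_test, endmembers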
The effect of the present invention will be further described with reference to simulation experiments.
1. Simulation conditions are as follows:
the simulation is carried out on a system with a CPU of Intel (R) core (TM) i7-6700HQ 2.60GHz, a memory of 8GB and Windows 10.
2. Simulation content:
In the simulation data experiments, the United States Geological Survey (USGS) spectral library splib06 is adopted; it contains the spectral curves of 498 different substances over 224 spectral bands. Simulation data 1 is generated from 9 end members randomly selected from the spectral library and comprises 100 × 100 pixels and 224 bands; Figs. 3(a)-(i) show the original abundance images corresponding to the 9 end members. Zero-mean white Gaussian noise is added at SNRs of 10, 20 and 30 dB respectively.
Simulation data 2 is generated from 5 randomly selected end members of the spectral library using the hyperspectral linear mixing model and has 75 × 75 pixels and 224 bands; Fig. 4(a) shows the simulated hyperspectral image and (b)-(f) show the abundance images of the five end members. Zero-mean white Gaussian noise is again added at SNRs of 10, 20 and 30 dB respectively.
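A hedged sketch of how such synthetic data can be generated under the linear mixing model: spectra are drawn from a library matrix (loading the USGS splib06 file itself is outside this sketch), non-negative abundances that sum to one are produced here with a Dirichlet draw (the patent's abundance maps are spatially structured, so this is only illustrative), and zero-mean Gaussian noise is added at the prescribed SNR:

import numpy as np

def simulate_mixture(library, num_endmembers, num_pixels, snr_db, rng=None):
    # library: (bands, materials) spectra; returns noisy mixed pixels, end members, abundances.
    rng = np.random.default_rng(0) if rng is None else rng
    idx = rng.choice(library.shape[1], size=num_endmembers, replace=False)
    A = library[:, idx]                                            # (bands, m) end member matrix
    S = rng.dirichlet(np.ones(num_endmembers), size=num_pixels).T  # (m, n); columns sum to 1
    X_clean = A @ S
    signal_power = np.mean(X_clean ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    X_noisy = X_clean + rng.normal(0.0, np.sqrt(noise_power), X_clean.shape)
    return X_noisy, A, S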
The real image data is an AVIRIS Cuprite scene; the image sub-block size is 250 × 191 pixels, and each pixel comprises 188 spectral bands (bands with atmospheric water absorption and low signal-to-noise ratio are removed). Fig. 7 shows the abundance distribution maps generated by the Tricorder 3.3 software and the abundance images reconstructed by the various techniques.
The method provided by the invention is compared with the prior-art SUnSAL, SUnSAL-TV and SMP techniques, and is simple to operate and efficient; the parameters of each technique are tuned to their optimum in the experiments, which fully verifies the performance of the invention.
The unmixing accuracy of the simulation data experiments is measured by the signal reconstruction error (SRE), calculated as:
SRE = E[ ||s||_2^2 ] / E[ ||s - ŝ||_2^2 ]
where S denotes the original abundance matrix and Ŝ the reconstructed abundance matrix; the value is expressed in dB as SRE(dB) = 10·log10(SRE).
For the SRE index, a larger value indicates a smaller error between the estimated and true abundances and hence better abundance reconstruction performance.
The real image experiment adopts the sparsity of an abundance image and the root mean square error of a reconstructed image to evaluate the performance of the algorithm; sparsity is defined as: the number of nonzero values in the abundance matrix of the hyperspectral image; abundance values greater than 0.001 are defined as non-zero abundances to avoid calculating negligible values.
The root mean square error (RMSE) between the original image X and the reconstructed image X̂ is defined as follows:
RMSE = sqrt( ||X - X̂||_F^2 / (n·L) )
where n is the number of pixels and L the number of spectral bands.
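A sketch of the three evaluation measures in NumPy; the per-element RMSE normalization and the per-pixel averaging of the sparsity count are assumptions chosen to match the magnitudes reported below, not definitions quoted from the patent:

import numpy as np

def sre_db(S_true, S_est):
    # Signal Reconstruction Error in dB: 10*log10(E[||s||^2] / E[||s - s_hat||^2])
    num = np.mean(np.sum(S_true ** 2, axis=0))
    den = np.mean(np.sum((S_true - S_est) ** 2, axis=0))
    return 10.0 * np.log10(num / den)

def abundance_sparsity(S_est, threshold=0.001):
    # average number of abundances above the threshold per pixel
    return np.mean(np.sum(S_est > threshold, axis=0))

def rmse(X, X_hat):
    # per-element root mean square error between original and reconstructed image
    return np.sqrt(np.mean((X - X_hat) ** 2))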
table 1 results of unmixing of analog data 1 and analog data 2 with different noise.
Figure BDA0002292945330000073
As can be seen from Table 1, in most cases the unmixing accuracy of the present invention is the highest among all techniques, with the largest advantage at an SNR of 10 dB, which indicates that the invention has very good denoising performance.
Figs. 5 and 6 show the abundance images obtained by unmixing simulation data 1 and simulation data 2 under 20 dB Gaussian noise, respectively. As can be seen from Fig. 5, the three comparison techniques produce more noise points, while the present invention produces fewer noise points, is more similar to the original abundance images and has a better visual effect; in Fig. 6, although SUnSAL-TV has almost no noise points, its images are over-smoothed and lose part of the feature information; compared with SMP and SUnSAL, the invention has fewer noise points and a better visual effect.
Table 2: abundance image sparsity and hyperspectral image reconstruction error.

Technique              SMP       SUnSAL    SUnSAL-TV   The invention
Sparsity               15.102    17.5629   20.472      10.0954
Reconstruction error   0.0034    0.0051    0.0038      0.0018
As can be seen from Table 2, both the sparsity and the reconstruction error of the invention are the smallest, far better than those of the other algorithms, which shows that the invention achieves higher unmixing performance on real hyperspectral images.
It can be seen from Fig. 7 that the abundance images reconstructed by the present invention have fewer noise points and retain the edge and feature information of the abundance images, being closer to the abundance distribution maps generated by the Tricorder software.

Claims (5)

1. A hyperspectral image unmixing method based on a deep denoising self-coding network, characterized by comprising the following steps:
(1) on the basis of a denoising self-encoder, adding a regularization term to the network objective function according to the two physical characteristics of the abundance coefficients, namely the sum-to-one constraint and the non-negativity constraint, thereby forming a deep denoising self-coding network;
(2) inputting a group of training image data into a deep denoising self-coding network, training the deep denoising self-coding network to obtain optimized network parameters, and obtaining a hyperspectral unmixing network model;
(3) inputting the existing hyperspectral image data to be processed into a deep denoising self-coding network, extracting the characteristics of the hyperspectral image data through the coding process, and then decoding to obtain reconstructed original image data.
2. The hyperspectral image unmixing method based on the deep denoising self-coding network according to claim 1, wherein:
In step (1), the deep denoising self-coding network comprises an input layer, a hidden layer and an output layer; the encoding process goes from the input layer to the hidden layer and performs feature extraction on the input hyperspectral image data to obtain preliminary abundance coefficients, while the decoding process goes from the hidden layer to the output layer and decodes the obtained preliminary abundance coefficients to obtain the reconstructed original image data. Specifically:
a sum-to-one constraint is imposed on the abundance coefficients, and a regularization term is added to the network objective function to reduce the redundant end members of the encoder while introducing joint sparsity between adjacent pixels; ReLU is selected as the activation function, so that the nonlinear characteristics of the data can be extracted while the non-negativity of the abundance coefficients is satisfied; the objective function of the deep denoising self-coding network is as follows:
J(W) = (1/2)·||X̄ - Ā·σ(WX)||_F^2 + λ·||σ(WX)||_2,1
where X represents the input hyperspectral data, W the encoder weights, σ(·) the hidden-layer activation function, and X̄ and Ā the matrices obtained by augmenting the data and the decoder weights A with a row of constants δ:
X̄ = [X; δ·1^T],  Ā = [A; δ·1^T]
||·||_F^2 denotes the squared Frobenius norm, ||·||_2,1 denotes the sum of the 2-norms of the row vectors of a matrix, λ denotes the Lagrangian coefficient, and the value of λ is set to 10e-6.
3. The hyperspectral image unmixing method based on the deep de-noising self-coding network as claimed in claim 1, wherein: in step (2), training image data of size w × h × c is preprocessed to obtain w × h training samples of size 1 × c, and these samples are then input to the deep denoising self-coding network for multiple rounds of training to obtain the optimized network parameters.
4. The hyperspectral image unmixing method based on the deep de-noising self-coding network as claimed in claim 1, wherein: before the hyperspectral image data to be processed is input into the deep denoising self-coding network, the deep denoising self-coding network model trained in step (2) is loaded and the network parameters are updated to those trained in step (2); the hyperspectral image data is then input, the abundance coefficients are obtained through the network encoding process, and the end member matrix is obtained through the decoding process.
5. The hyperspectral image unmixing method based on the deep de-noising self-coding network as claimed in claim 1, wherein: the training image data in the step (2) is hyperspectral image data serving as a training set, and the hyperspectral image to be processed in the step (3) is hyperspectral image data serving as a test set.
CN201911188298.2A 2019-11-28 2019-11-28 Hyperspectral image unmixing method based on depth denoising self-coding network Active CN111598786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911188298.2A CN111598786B (en) 2019-11-28 2019-11-28 Hyperspectral image unmixing method based on depth denoising self-coding network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911188298.2A CN111598786B (en) 2019-11-28 2019-11-28 Hyperspectral image unmixing method based on depth denoising self-coding network

Publications (2)

Publication Number Publication Date
CN111598786A true CN111598786A (en) 2020-08-28
CN111598786B CN111598786B (en) 2023-10-03

Family

ID=72183265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911188298.2A Active CN111598786B (en) 2019-11-28 2019-11-28 Hyperspectral image unmixing method based on depth denoising self-coding network

Country Status (1)

Country Link
CN (1) CN111598786B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699838A (en) * 2021-01-13 2021-04-23 武汉大学 Hyperspectral mixed pixel nonlinear blind decomposition method based on spectral diagnosis characteristic weighting
CN113049530A (en) * 2021-03-17 2021-06-29 北京工商大学 Single-seed corn seed moisture content detection method based on near-infrared hyperspectrum
CN113486869A (en) * 2021-09-07 2021-10-08 中国自然资源航空物探遥感中心 Method, device and medium for lithology identification based on unsupervised feature extraction
CN113804657A (en) * 2021-09-03 2021-12-17 中国科学院沈阳自动化研究所 Sparse self-encoder spectral feature dimension reduction method based on multiple regression combination
CN116091832A (en) * 2023-02-16 2023-05-09 哈尔滨工业大学 Tumor cell slice hyperspectral image classification method based on self-encoder network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180365820A1 (en) * 2017-06-19 2018-12-20 ImpactVision, Inc. System and method for hyperspectral image processing to identify object
US20190096049A1 (en) * 2017-09-27 2019-03-28 Korea Advanced Institute Of Science And Technology Method and Apparatus for Reconstructing Hyperspectral Image Using Artificial Intelligence
CN109919864A (en) * 2019-02-20 2019-06-21 重庆邮电大学 A kind of compression of images cognitive method based on sparse denoising autoencoder network
CN109978162A (en) * 2017-12-28 2019-07-05 核工业北京地质研究院 A kind of mineral content spectra inversion method based on deep neural network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180365820A1 (en) * 2017-06-19 2018-12-20 ImpactVision, Inc. System and method for hyperspectral image processing to identify object
US20190096049A1 (en) * 2017-09-27 2019-03-28 Korea Advanced Institute Of Science And Technology Method and Apparatus for Reconstructing Hyperspectral Image Using Artificial Intelligence
CN109978162A (en) * 2017-12-28 2019-07-05 核工业北京地质研究院 A kind of mineral content spectra inversion method based on deep neural network
CN109919864A (en) * 2019-02-20 2019-06-21 重庆邮电大学 A kind of compression of images cognitive method based on sparse denoising autoencoder network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邢会欣 (Xing Huixin): "Hyperspectral unmixing based on non-negative autoencoders and non-negative matrix factorization" *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699838A (en) * 2021-01-13 2021-04-23 武汉大学 Hyperspectral mixed pixel nonlinear blind decomposition method based on spectral diagnosis characteristic weighting
CN112699838B (en) * 2021-01-13 2022-06-07 武汉大学 Hyperspectral mixed pixel nonlinear blind decomposition method based on spectral diagnosis characteristic weighting
CN113049530A (en) * 2021-03-17 2021-06-29 北京工商大学 Single-seed corn seed moisture content detection method based on near-infrared hyperspectrum
CN113804657A (en) * 2021-09-03 2021-12-17 中国科学院沈阳自动化研究所 Sparse self-encoder spectral feature dimension reduction method based on multiple regression combination
CN113486869A (en) * 2021-09-07 2021-10-08 中国自然资源航空物探遥感中心 Method, device and medium for lithology identification based on unsupervised feature extraction
CN116091832A (en) * 2023-02-16 2023-05-09 哈尔滨工业大学 Tumor cell slice hyperspectral image classification method based on self-encoder network
CN116091832B (en) * 2023-02-16 2023-10-20 哈尔滨工业大学 Tumor cell slice hyperspectral image classification method based on self-encoder network

Also Published As

Publication number Publication date
CN111598786B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
CN111598786B (en) Hyperspectral image unmixing method based on depth denoising self-coding network
Xu et al. External prior guided internal prior learning for real-world noisy image denoising
CN110599409B (en) Convolutional neural network image denoising method based on multi-scale convolutional groups and parallel
CN109035142B (en) Satellite image super-resolution method combining countermeasure network with aerial image prior
CN105163121B (en) Big compression ratio satellite remote sensing images compression method based on depth autoencoder network
CN109671029B (en) Image denoising method based on gamma norm minimization
CN110110596B (en) Hyperspectral image feature extraction, classification model construction and classification method
CN111080567A (en) Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network
CN104463223B (en) Hyperspectral image group sparse unmixing method based on space spectrum information abundance constraint
CN106408530A (en) Sparse and low-rank matrix approximation-based hyperspectral image restoration method
CN113177882A (en) Single-frame image super-resolution processing method based on diffusion model
CN106972862B (en) Group sparse compressed sensing image reconstruction method based on truncation kernel norm minimization
CN102542542A (en) Image denoising method based on non-local sparse model
CN106097278A (en) The sparse model of a kind of multidimensional signal, method for reconstructing and dictionary training method
CN113298734B (en) Image restoration method and system based on mixed hole convolution
CN105184742B (en) A kind of image de-noising method of the sparse coding based on Laplce's figure characteristic vector
CN111915518B (en) Hyperspectral image denoising method based on triple low-rank model
CN104200436A (en) Multispectral image reconstruction method based on dual-tree complex wavelet transformation
CN111147863B (en) Tensor-based video snapshot compression imaging recovery method
CN106447632A (en) RAW image denoising method based on sparse representation
CN112967210A (en) Unmanned aerial vehicle image denoising method based on full convolution twin network
CN107292855B (en) Image denoising method combining self-adaptive non-local sample and low rank
CN113902622A (en) Spectrum super-resolution method based on depth prior combined attention
Wen et al. Learning flipping and rotation invariant sparsifying transforms
CN113436101A (en) Method for removing rain of Longge Kuta module based on efficient channel attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant