CN111581879A - Method and system for determining nonlinear abundance of mixed pixels of space artificial target - Google Patents

Method and system for determining nonlinear abundance of mixed pixels of space artificial target

Info

Publication number
CN111581879A
Authority
CN
China
Prior art keywords
spectrum
abundance
matrix
determining
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010362897.8A
Other languages
Chinese (zh)
Inventor
李庆波
何林倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202010362897.8A priority Critical patent/CN111581879A/en
Publication of CN111581879A publication Critical patent/CN111581879A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method and a system for determining the nonlinear abundance of mixed pixels of a space artificial target. The method comprises the following steps: constructing a deep neural network comprising an encoder and a decoder, wherein the encoder comprises an input layer, three hidden layers and an abundance output layer, and the decoder is a weighted combination nonlinear hybrid model; inputting a training spectrum matrix into the encoder to output an abundance matrix; determining a reconstructed spectrum from the abundance matrix by using the weighted combination nonlinear hybrid model; and adjusting the weights and biases in the deep neural network by using the root mean square error to obtain an updated encoder and an updated decoder. Whether the current iteration number is smaller than a preset iteration number is then judged: if so, the abundance matrix is updated with the updated encoder and the reconstructed spectrum is obtained again; if not, the updated encoder is taken as the test model, and the abundance value corresponding to each end member is determined with the test model. By the method and the system, the abundance of mixed pixels under both linear and nonlinear mixing models can be determined, and the accuracy of the abundance values is improved.

Description

Method and system for determining nonlinear abundance of mixed pixels of space artificial target
Technical Field
The invention relates to the technical field of space target identification, in particular to a method and a system for determining nonlinear abundance of mixed pixels of a space artificial target.
Background
With the development and exploitation of space resources, a large number of space artificial targets such as satellites and space debris have entered the outer space of the Earth. Effective identification of space targets is therefore extremely valuable: it can determine the type and attributes of a target and provide early warning of potential collisions, attacks and the like.
Large radars and optical telescopes are used for space target monitoring, but when a target is small or located in a medium-to-high orbit, radar becomes ineffective, and optical imaging can only resolve a limited number of pixels, so the information provided cannot determine the material composition of the target. Experiments show that the composition of the target surface material can be determined from the spectral curve formed by the convolution of the surface-material coefficient of the space target with the solar spectrum; the spectra of different satellite platforms differ markedly, and satellites can be classified by their spectra, so spectral techniques are now adopted for space target identification.
Due to the complex and diverse composition of a space target's surface materials, the spectral data obtained by a spectrometer are usually a mixed spectrum of multiple material spectra rather than the spectrum of a single substance; such a measurement is called a mixed pixel. Unmixing the mixed pixels is an important part of space target identification: starting from the mixed spectrum, it seeks the constituent materials and their corresponding proportions. Unmixing of a mixed-pixel space target is mainly divided into two steps: end-member extraction decomposes the mixed pixel into a series of pure-substance spectra (end members), i.e., obtains the spectral data of the materials composing the space target; abundance estimation then determines the proportions (abundances) of these pure substances, i.e., the proportion of each material.
The spectral unmixing technique first requires a spectral mixing model; existing spectral mixing models include linear and nonlinear mixing models. The linear spectral mixing model (LSMM) assumes that incident light interacts with the surface of only one substance, neglecting interactions between objects, so the spectrum obtained by the sensor is a linear superposition of the spectrum of each substance. The nonlinear spectral mixing model (NLSMM) considers the scattering of photons among the various substances; the spectrum obtained by the sensor is no longer a simple linear superposition of the individual substances, and product terms arise that cause nonlinear mixing.
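For illustration, a minimal NumPy sketch of the two mixing behaviors is given below; the end-member spectra and abundances are arbitrary placeholder values, not measured data.

```python
import numpy as np

# Two hypothetical end-member spectra (5 spectral bands each) and their abundances.
m1 = np.array([0.10, 0.20, 0.35, 0.50, 0.40])
m2 = np.array([0.60, 0.55, 0.45, 0.30, 0.20])
a1, a2 = 0.7, 0.3

# Linear spectral mixing model (LSMM): weighted sum of the end-member spectra.
r_linear = a1 * m1 + a2 * m2

# Bilinear nonlinear mixing: LSMM plus a cross term built from the Hadamard
# (element-wise) product, modelling photon scattering between the two materials.
r_bilinear = r_linear + a1 * a2 * (m1 * m2)

print(r_linear)
print(r_bilinear)
```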
Most current abundance determination methods are based on the linear mixing model assumption, i.e., they consider the spectral response signal of a mixture to be a linear superposition of the spectral responses of the end members composing the mixture; a nonlinear mixing model, by contrast, is no longer a simple linear superposition and has real physical significance. Therefore, a method for determining the abundance of nonlinear mixed pixels is needed in the art to overcome the defects of the prior art.
Disclosure of Invention
The invention aims to provide a method and a system for determining the nonlinear abundance of a mixed pixel of a space artificial target, so as to determine the abundance of the mixed pixel in a linear and nonlinear mixed model.
In order to achieve the purpose, the invention provides the following scheme:
a method for determining the nonlinear abundance of a mixed pixel of a spatial artificial target comprises the following steps:
acquiring a training spectrum matrix;
constructing a deep neural network; the deep neural network comprises an encoder and a decoder, wherein the encoder comprises an input layer, three hidden layers and an abundance output layer, and the decoder is a weighted combination nonlinear hybrid model;
inputting the training spectral matrix into the encoder to output an abundance matrix;
determining a reconstructed spectrum of the training spectrum matrix by utilizing a weighted combination nonlinear hybrid model according to the abundance matrix;
adjusting the weight and the offset parameter in the deep neural network by adopting a root mean square error according to the training spectrum matrix and the reconstructed spectrum to obtain an updated encoder and an updated decoder;
judging whether the current iteration times are smaller than the preset iteration times or not, and obtaining a judgment result;
if the judgment result shows that the current iteration times are smaller than the preset iteration times, updating an abundance matrix according to the updated encoder, and returning to the step of determining the reconstruction spectrum of the training spectrum matrix by using a weighted combination nonlinear hybrid model according to the abundance matrix;
if the judgment result shows that the current iteration times are larger than or equal to the preset iteration times, taking the updated encoder as a trained test model;
acquiring spectral data to be processed;
and determining the abundance value corresponding to each end member by adopting a trained test model according to the spectral data to be processed.
Optionally, the acquiring spectral data to be processed specifically includes:
and collecting spectral data of the space target by using the spectral equipment to obtain spectral data to be processed.
Optionally, the determining the reconstructed spectrum of the training spectrum matrix by using a weighted combination nonlinear hybrid model according to the abundance matrix specifically includes:
determining a first reconstructed spectrum according to the formula

r_1 = Σ_{i=1}^{R−1} Σ_{j=i+1}^{R} a_i^pre a_j^pre (M_i ⊙ M_j);

determining a second reconstructed spectrum according to the formula r_2 = M a^pre;

determining a third reconstructed spectrum according to the formula r_3 = (M a^pre)^ζ;

determining a fourth reconstructed spectrum according to the formula r = w_61 r_1 + w_62 r_2 + w_63 r_3, wherein the fourth reconstructed spectrum is the reconstructed spectrum of the training spectrum matrix;

wherein r_1 denotes the first reconstructed spectrum, r_2 the second reconstructed spectrum, r_3 the third reconstructed spectrum, and r the reconstructed spectrum of the training spectrum matrix; (M_i ⊙ M_j) denotes the Hadamard product between end members; M denotes the end-member spectrum; R denotes the number of end members, with i = 1, 2, …, R−1 and j = i+1, …, R; a^pre denotes the abundance matrix, and a_i^pre and a_j^pre denote the abundance values corresponding to end members M_i and M_j; ζ denotes the exponent coefficient; and w_61, w_62 and w_63 denote the weights of the first, second and third reconstructed spectra, respectively.
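A minimal NumPy sketch of these four reconstruction formulas follows, assuming the end-member spectra are stored as a bands × R matrix M and the abundances as a length-R vector a_pre; the weights w61, w62, w63 and the exponent ζ are treated here as given numbers rather than trained parameters.

```python
import numpy as np

def weighted_combination_reconstruction(M, a_pre, w61, w62, w63, zeta):
    """Reconstruct a spectrum from end members M (bands x R) and abundances a_pre (R,)."""
    R = M.shape[1]
    # r1: bilinear term, cross products of distinct end members (Hadamard products).
    r1 = np.zeros(M.shape[0])
    for i in range(R - 1):
        for j in range(i + 1, R):
            r1 += a_pre[i] * a_pre[j] * (M[:, i] * M[:, j])
    # r2: linear mixing term.
    r2 = M @ a_pre
    # r3: post-nonlinear term, element-wise power of the linear mixture.
    r3 = r2 ** zeta
    # r: weighted combination of the three partial reconstructions.
    return w61 * r1 + w62 * r2 + w63 * r3
```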
Optionally, the adjusting, according to the training spectrum matrix and the reconstructed spectrum, the weight and the offset parameter in the deep neural network by using a root mean square error to obtain an updated encoder and an updated decoder specifically includes:
adjusting the weights and bias parameters in the deep neural network according to the formula W, b = argmin(L2Loss(r, r_0)) to obtain an updated encoder and an updated decoder;
wherein W denotes the weights of the deep neural network, b denotes the biases of the deep neural network, r denotes the reconstructed spectrum of the training spectrum matrix, and r_0 denotes the training spectrum matrix.
A spatial artificial target mixed pixel nonlinear abundance determination system comprises:
the training spectrum matrix acquisition module is used for acquiring a training spectrum matrix;
the deep neural network construction module is used for constructing a deep neural network; the deep neural network comprises an encoder and a decoder, wherein the encoder comprises an input layer, three hidden layers and an abundance output layer, and the decoder is a weighted combination nonlinear hybrid model;
the abundance matrix output module is used for inputting the training spectrum matrix into the encoder and outputting an abundance matrix;
the reconstruction spectrum determining module is used for determining a reconstruction spectrum of the training spectrum matrix by utilizing a weighted combination nonlinear hybrid model according to the abundance matrix;
the updating module is used for adjusting the weight and the offset parameter in the deep neural network by adopting a root mean square error according to the training spectrum matrix and the reconstructed spectrum to obtain an updated encoder and an updated decoder;
the judging module is used for judging whether the current iteration times are smaller than the preset iteration times or not and obtaining a judging result;
the return module is used for updating the abundance matrix according to the updated encoder and returning to the reconstruction spectrum determining module if the judgment result shows that the current iteration times are smaller than the preset iteration times;
the trained test model determining module is used for taking the updated encoder as a trained test model if the judging result shows that the current iteration times are more than or equal to the preset iteration times;
the to-be-processed spectral data acquisition module is used for acquiring to-be-processed spectral data;
and the abundance value determining module is used for determining the abundance value corresponding to each end member by adopting a trained test model according to the spectral data to be processed.
Optionally, the to-be-processed spectral data acquiring module specifically includes:
and the to-be-processed spectral data acquisition unit is used for acquiring spectral data of the space target by using the spectral equipment and acquiring the to-be-processed spectral data.
Optionally, the reconstruction spectrum determination module specifically includes:
a first reconstructed spectrum determination unit, configured to determine a first reconstructed spectrum according to the formula

r_1 = Σ_{i=1}^{R−1} Σ_{j=i+1}^{R} a_i^pre a_j^pre (M_i ⊙ M_j);

a second reconstructed spectrum determination unit, configured to determine a second reconstructed spectrum according to the formula r_2 = M a^pre;

a third reconstructed spectrum determination unit, configured to determine a third reconstructed spectrum according to the formula r_3 = (M a^pre)^ζ;

a fourth reconstructed spectrum determination unit, configured to determine a fourth reconstructed spectrum according to the formula r = w_61 r_1 + w_62 r_2 + w_63 r_3, wherein the fourth reconstructed spectrum is the reconstructed spectrum of the training spectrum matrix;

wherein r_1 denotes the first reconstructed spectrum, r_2 the second reconstructed spectrum, r_3 the third reconstructed spectrum, and r the reconstructed spectrum of the training spectrum matrix; (M_i ⊙ M_j) denotes the Hadamard product between end members; M denotes the end-member spectrum; R denotes the number of end members, with i = 1, 2, …, R−1 and j = i+1, …, R; a^pre denotes the abundance matrix, and a_i^pre and a_j^pre denote the abundance values corresponding to end members M_i and M_j; ζ denotes the exponent coefficient; and w_61, w_62 and w_63 denote the weights of the first, second and third reconstructed spectra, respectively.
Optionally, the update module specifically includes:
an updating unit, configured to adjust the weights and bias parameters in the deep neural network according to the formula W, b = argmin(L2Loss(r, r_0)) to obtain an updated encoder and an updated decoder;
wherein W denotes the weights of the deep neural network, b denotes the biases of the deep neural network, r denotes the reconstructed spectrum of the training spectrum matrix, and r_0 denotes the training spectrum matrix.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
1. Aiming at nonlinear mixing models, the invention provides a method and a system for determining the nonlinear abundance of mixed pixels of a space artificial target; an unsupervised learning method is adopted, no real labels are needed to train the model, mixed pixels of nonlinear models can be processed, and the abundance value of each end member, i.e. the proportion of each material, is determined more accurately than with conventional methods.
2. By utilizing the weighted combination nonlinear model, the method effectively overcomes the limitations of linear unmixing; since this model is a weighted combination of multiple mixing models, it can adapt to more application scenarios.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a flow chart of a method for determining nonlinear abundance of mixed pixels of a space artificial target according to the present invention;
FIG. 2 is a spectrum of three surface materials of the actual measurement space target provided by the present invention;
FIG. 3 is a graph of the spectral reflectance of 1000 hybrid pixels with a signal-to-noise ratio of 50dB according to the present invention;
fig. 4 is a schematic structural diagram of a spatial artificial target mixed pixel nonlinear abundance determination system provided by the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a method and a system for determining the nonlinear abundance of a mixed pixel of a space artificial target, so as to determine the abundance of the mixed pixel in a linear and nonlinear mixed model.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
FIG. 1 is a flow chart of a method for determining nonlinear abundance of mixed pixels of a space artificial target provided by the invention. As shown in FIG. 1, the method for determining the nonlinear abundance of the mixed pixels of the space artificial target comprises the following steps:
s101, a training spectrum matrix is obtained, and the training spectrum matrix is a data set used for training.
S102, constructing a deep neural network; the deep neural network comprises an encoder and a decoder, wherein the encoder comprises an input layer, three hidden layers and an abundance output layer, and the decoder is a weighted combination nonlinear hybrid model.
The encoder is expressed as follows:
O_L0 = r_0
O_L1 = ReLU(W_1 O_L0 + b_1)
O_L2 = ReLU(W_2 O_L1 + b_2)
O_L3 = ReLU(W_3 O_L2 + b_3)
O_L4 = ReLU(W_4 O_L3 + b_4) = a^pre
wherein r_0 denotes the training spectrum matrix, i.e. the input data; O_L0 is the output of the input layer; O_L1, O_L2 and O_L3 are the outputs of the first, second and third hidden layers; O_L4 is the abundance output; W_t and b_t (t = 1, 2, 3, 4) are the weights and biases of the corresponding layers; ReLU(·) denotes the activation function; and the number of neurons in layer L4 is the total number of pure substances in the end-member library.
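A sketch of such an encoder in PyTorch is given below; the hidden-layer widths are placeholder assumptions, since the description only fixes the input dimension (number of spectral bands) and the output dimension (number of end members in the library).

```python
import torch
import torch.nn as nn

class AbundanceEncoder(nn.Module):
    """Input layer, three hidden layers and an abundance output layer, all ReLU-activated."""

    def __init__(self, n_bands: int, n_endmembers: int, hidden=(64, 32, 16)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bands, hidden[0]), nn.ReLU(),       # L1
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),     # L2
            nn.Linear(hidden[1], hidden[2]), nn.ReLU(),     # L3
            nn.Linear(hidden[2], n_endmembers), nn.ReLU(),  # L4: abundance output a_pre
        )

    def forward(self, r0: torch.Tensor) -> torch.Tensor:
        # r0: (batch, n_bands) spectra; returns a_pre: (batch, n_endmembers).
        return self.net(r0)
```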
And S103, inputting the training spectrum matrix into the encoder to output an abundance matrix. Specifically, the parameters of the deep neural network are initialized randomly before the training spectrum matrix is fed in.
And S104, determining the reconstructed spectrum of the training spectrum matrix by utilizing a weighted combination nonlinear hybrid model according to the abundance matrix.
A decoder layer is added after the abundance output layer; that is, using the weighted combination nonlinear hybrid model, the original input spectrum is reconstructed from the abundance values and the pure-substance spectra in the end-member library.
The weighted combination nonlinear hybrid model comprises two custom layers: the first, layer L5, is a parallel network structure of three mixing models, and the second, layer L6, is the reconstructed-spectrum output layer. The parallel structure comprises a bilinear model, a linear model and a post-nonlinear model, but it is not limited to these three models, nor to exactly three models; two models or several other mixing models may also be used.
Specifically, the first reconstructed spectrum is determined according to the bilinear model

r_1 = Σ_{i=1}^{R−1} Σ_{j=i+1}^{R} a_i^pre a_j^pre (M_i ⊙ M_j);

the second reconstructed spectrum is determined according to the linear model r_2 = M a^pre; the third reconstructed spectrum is determined according to the post-nonlinear model r_3 = (M a^pre)^ζ; and the fourth reconstructed spectrum, which is the reconstructed spectrum of the training spectrum matrix, is determined according to the formula r = w_61 r_1 + w_62 r_2 + w_63 r_3. Here r_1 denotes the first reconstructed spectrum, r_2 the second reconstructed spectrum, r_3 the third reconstructed spectrum, and r the reconstructed spectrum of the training spectrum matrix; (M_i ⊙ M_j) denotes the Hadamard product between end members; M denotes the end-member spectrum, i.e. the weight W_5 of layer L5; R denotes the number of end members, with i = 1, 2, …, R−1 and j = i+1, …, R; a^pre denotes the abundance matrix, and a_i^pre and a_j^pre denote the abundance values corresponding to end members M_i and M_j; ζ denotes the exponent coefficient; and w_61, w_62 and w_63 denote the weights of the first, second and third reconstructed spectra, respectively.
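The two custom decoder layers can be sketched in PyTorch as follows; here the end-member matrix M is kept fixed (taken from the end-member library) while the combination weights w61–w63 and the exponent ζ are learnable, an assumption consistent with the statement that the weight coefficient of each model is optimized by the network.

```python
import itertools
import torch
import torch.nn as nn

class WeightedCombinationDecoder(nn.Module):
    """L5/L6: parallel bilinear, linear and post-nonlinear models, then a weighted sum."""

    def __init__(self, M: torch.Tensor):
        super().__init__()
        # End-member library M with shape (n_bands, R); fixed, not trained.
        self.M = nn.Parameter(M.float(), requires_grad=False)
        self.w = nn.Parameter(torch.ones(3) / 3.0)      # w61, w62, w63
        self.zeta = nn.Parameter(torch.tensor(1.0))     # exponent coefficient ζ

    def forward(self, a_pre: torch.Tensor) -> torch.Tensor:
        # a_pre: (batch, R) abundances produced by the encoder.
        R = self.M.shape[1]
        r2 = a_pre @ self.M.T                           # linear model
        r1 = torch.zeros_like(r2)                       # bilinear cross terms
        for i, j in itertools.combinations(range(R), 2):
            r1 = r1 + (a_pre[:, i] * a_pre[:, j]).unsqueeze(1) * (self.M[:, i] * self.M[:, j])
        r3 = torch.clamp(r2, min=1e-8) ** self.zeta     # post-nonlinear model
        return self.w[0] * r1 + self.w[1] * r2 + self.w[2] * r3
```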
And S105, adjusting the weights and bias parameters in the deep neural network by using the root mean square error according to the training spectrum matrix and the reconstructed spectrum to obtain an updated encoder and an updated decoder. The training goal of the network is to minimize the difference between r and r_0, which is defined by the root mean square error L2Loss; according to the formula W, b = argmin(L2Loss(r, r_0)), each weight W and bias b is adjusted step by step so that the model loss decreases continuously, wherein W denotes the weights of the deep neural network, b denotes the biases of the deep neural network, r denotes the reconstructed spectrum, and r_0 denotes the training spectrum matrix.
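A condensed sketch of this training step is shown below; the optimizer (Adam) and learning rate are assumptions, since the description only requires minimizing the reconstruction error for a preset number of iterations (5000 in the embodiment), and MSE is used as a stand-in for the L2Loss above.

```python
import torch

def train_unmixing_network(encoder, decoder, r0: torch.Tensor,
                           n_iters: int = 5000, lr: float = 1e-3):
    """Jointly update encoder and decoder so the reconstruction r approaches r0."""
    params = list(encoder.parameters()) + list(decoder.parameters())
    optimizer = torch.optim.Adam(params, lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(n_iters):
        a_pre = encoder(r0)            # abundance matrix from the current encoder
        r = decoder(a_pre)             # reconstructed spectra
        loss = loss_fn(r, r0)          # difference between r and r0
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return encoder                     # the updated encoder serves as the test model
```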
And S106, judging whether the current iteration times are smaller than the preset iteration times or not, and obtaining a judgment result. In the embodiment of the invention, the preset iteration number is 5000.
And S107, if the judgment result shows that the current iteration times are smaller than the preset iteration times, updating the abundance matrix according to the updated encoder, and returning to S104.
And S108, if the judgment result shows that the current iteration number is greater than or equal to the preset iteration number, taking the updated encoder as the trained test model; the decoder behind it is then discarded.
And S109, acquiring spectral data to be processed.
And collecting spectral data of the space target by using the spectral equipment to obtain spectral data to be processed.
And S110, determining the abundance value corresponding to each end member by adopting a trained test model according to the spectral data to be processed.
In order to make the purpose, technical scheme and advantages of the invention more clear and obvious, the invention carries out simulation experiments.
The invention considers semi-simulated data based on measured space target materials. The experiment first obtains the reflectance spectra of three typical space target surface materials (a porous yellow film material, a solar panel sample and a nonporous yellow film material) by ground measurement; the spectral response information takes the form of 16-dimensional vectors, as shown in FIG. 2. In addition, the experiment scans the same material multiple times and uses the average spectrum as the end-member spectrum, so that the pure-substance spectra contain less noise.
Abundance values A = {a_1, a_2, …, a_i, …, a_n} are randomly assigned to the three end members, satisfying a_i > 0 and sum(A) = 1, and the different end members are nonlinearly mixed according to the weighted combination nonlinear mixing model. To better simulate a practical application environment, the embodiment of the invention also adds noise at a signal-to-noise ratio of 50 dB to the mixed spectra, as shown in FIG. 3.
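A self-contained sketch of this data-generation step is given below; the end-member spectra here are random placeholders for the measured material spectra of FIG. 2, the Dirichlet draw is one convenient (assumed) way to obtain abundances that are positive and sum to one, and the equal combination weights and ζ = 1.1 are likewise assumptions.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_bands, n_end, n_pixels = 16, 3, 1000

# Placeholder end-member spectra standing in for the three measured materials.
M = rng.uniform(0.1, 0.9, size=(n_bands, n_end))

# Random abundances with a_i > 0 and sum(A) = 1 for every simulated mixed pixel.
A = rng.dirichlet(np.ones(n_end), size=n_pixels)       # shape (n_pixels, n_end)

# Weighted combination nonlinear mixing: linear, bilinear and post-nonlinear terms.
lin = A @ M.T                                          # (n_pixels, n_bands)
bil = sum(np.outer(A[:, i] * A[:, j], M[:, i] * M[:, j])
          for i, j in combinations(range(n_end), 2))
post = lin ** 1.1
mixed = (lin + bil + post) / 3.0

# Add white Gaussian noise at a signal-to-noise ratio of 50 dB.
noise_power = np.mean(mixed ** 2) / 10 ** (50 / 10)
mixed_noisy = mixed + rng.normal(scale=np.sqrt(noise_power), size=mixed.shape)
```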
The method for determining the nonlinear abundance of mixed pixels of a space artificial target and a conventional abundance determination method are applied respectively, and the RMSE between the estimated abundances and the corresponding true values is used to evaluate performance, given by the following formula:

RMSE = sqrt( (1/N) Σ_{i=1}^{N} (S_i − Ŝ_i)² )

wherein N denotes the number of spectral data, S_i denotes the true abundance value of the i-th spectrum, and Ŝ_i denotes the predicted abundance value of the i-th spectrum; the smaller the RMSE, the better the performance of the constructed model.
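The RMSE of the formula above can be computed with a few lines of NumPy; S_true and S_pred stand for hypothetical arrays of true and predicted abundance values.

```python
import numpy as np

def rmse(S_true: np.ndarray, S_pred: np.ndarray) -> float:
    """Root mean square error between true and predicted abundance values."""
    return float(np.sqrt(np.mean((S_true - S_pred) ** 2)))

# Example: rmse(np.array([0.5, 0.3, 0.2]), np.array([0.48, 0.31, 0.21]))
```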
The final experimental results are shown in table 1:
TABLE 1 comparison of spectral data abundance determination (RMSE) for three surface materials of a spatial target
In Table 1, DNN-UM denotes the method for determining the nonlinear abundance of mixed pixels of a space artificial target provided by the invention.
The invention also provides a spatial artificial target mixed pixel nonlinear abundance determination system, as shown in fig. 4, the spatial artificial target mixed pixel nonlinear abundance determination system comprises:
and the training spectrum matrix obtaining module 1 is used for obtaining a training spectrum matrix.
The deep neural network construction module 2 is used for constructing a deep neural network; the deep neural network comprises an encoder and a decoder, wherein the encoder comprises an input layer, three hidden layers and an abundance output layer, and the decoder is a weighted combination nonlinear hybrid model.
And the abundance matrix output module 3 is used for inputting the training spectrum matrix into the encoder and outputting an abundance matrix.
And the reconstruction spectrum determining module 4 is used for determining the reconstruction spectrum of the training spectrum matrix by utilizing a weighted combination nonlinear hybrid model according to the abundance matrix.
And the updating module 5 is used for adjusting the weight and the offset parameter in the deep neural network by adopting a root mean square error according to the training spectrum matrix and the reconstructed spectrum to obtain an updated encoder and an updated decoder.
And the judging module 6 is used for judging whether the current iteration times are smaller than the preset iteration times to obtain a judgment result.
And the returning module 7 is used for updating the abundance matrix according to the updated encoder and returning to the reconstruction spectrum determining module 4 if the judgment result shows that the current iteration times are smaller than the preset iteration times.
And the trained test model determining module 8 is configured to, if the judgment result indicates that the current iteration number is greater than or equal to the preset iteration number, use the updated encoder as the trained test model.
And the to-be-processed spectral data acquisition module 9 is used for acquiring the to-be-processed spectral data.
And the abundance value determining module 10 is configured to determine, according to the spectral data to be processed, an abundance value corresponding to each end member by using a trained test model.
Preferably, the to-be-processed spectral data acquisition module 9 specifically includes:
and the to-be-processed spectral data acquisition unit is used for acquiring spectral data of the space target by using the spectral equipment and acquiring the to-be-processed spectral data.
Preferably, the reconstruction spectrum determination module 4 specifically includes:
a first reconstructed spectrum determination unit, configured to determine a first reconstructed spectrum according to the formula

r_1 = Σ_{i=1}^{R−1} Σ_{j=i+1}^{R} a_i^pre a_j^pre (M_i ⊙ M_j);

a second reconstructed spectrum determination unit, configured to determine a second reconstructed spectrum according to the formula r_2 = M a^pre;

a third reconstructed spectrum determination unit, configured to determine a third reconstructed spectrum according to the formula r_3 = (M a^pre)^ζ;

a fourth reconstructed spectrum determination unit, configured to determine a fourth reconstructed spectrum according to the formula r = w_61 r_1 + w_62 r_2 + w_63 r_3, wherein the fourth reconstructed spectrum is the reconstructed spectrum of the training spectrum matrix;

wherein r_1 denotes the first reconstructed spectrum, r_2 the second reconstructed spectrum, r_3 the third reconstructed spectrum, and r the reconstructed spectrum of the training spectrum matrix; (M_i ⊙ M_j) denotes the Hadamard product between end members; M denotes the end-member spectrum; R denotes the number of end members, with i = 1, 2, …, R−1 and j = i+1, …, R; a^pre denotes the abundance matrix, and a_i^pre and a_j^pre denote the abundance values corresponding to end members M_i and M_j; ζ denotes the exponent coefficient; and w_61, w_62 and w_63 denote the weights of the first, second and third reconstructed spectra, respectively.
Preferably, the update module 5 specifically includes:
an updating unit, configured to adjust the weights and bias parameters in the deep neural network according to the formula W, b = argmin(L2Loss(r, r_0)) to obtain an updated encoder and an updated decoder.
Here W denotes the weights of the deep neural network, b denotes the biases of the deep neural network, r denotes the reconstructed spectrum, and r_0 denotes the training spectrum matrix.
Aiming at nonlinear mixing models, the invention provides a method and a system for determining the nonlinear abundance of mixed pixels of a space artificial target. An unsupervised learning method is adopted, so no real labels are needed to train the model; mixed pixels of nonlinear models are processed, and the abundance value of each end member, i.e. the proportion of each material, is determined. The abundances obtained are more accurate than those of conventional methods, improving the accuracy of mixed-pixel abundance determination.
The method for determining the nonlinear abundance of mixed pixels of a space artificial target provided by the invention is an unsupervised deep learning method. The constituent materials of space targets are relatively fixed and known, so an end-member database can be set up in advance, and the mixed spectrum of a space target can be unmixed against this database to predict abundance values. The network can be trained directly on the measured data matrix and its abundance values predicted without obtaining labels for the training spectra, and the trained test model can be stored and applied to the nonlinear processing of new space targets. The trained test model has a strong ability to restore the original input and a high accuracy for abundance estimation.
The invention has the advantages that:
1. Considering the practical situation that measured spectral data are a combination of multiple mixing models, the invention proposes to reconstruct the input spectrum with a weighted combination nonlinear model, which combines a linear model and two nonlinear models with weights and optimizes the weight coefficient of each model through the deep neural network. The method is therefore applicable to abundance determination scenarios with linear models, nonlinear models and more complex mixing models, expanding the application range. The network is trained with the minimum reconstruction error as the objective function, which improves the accuracy of abundance determination and gives strong noise robustness.
2. Since the material of the space artificial target is relatively fixed and the end member range is known, the end member database can be set in advance. The abundance determination method is based on the end member database, can determine the abundance under the condition that the end members of the spectrum to be processed are not completely clear and do not exceed the range of the end member spectrum database, and solves the application limit that the number of the end members and the spectrum of the end members must be known firstly in the conventional abundance determination algorithm.
3. An unsupervised learning method is adopted: no real labels are required for model training, self-learning can be carried out on unknown samples, and abundances can be determined directly, which gives the method broader application scenarios.
4. The weighted combination nonlinear model is not limited to the weighted combination of the three mixed models in the invention, and can be expanded to the weighted combination form of various mixed models, thereby being suitable for more application scenes.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (8)

1. A method for determining the nonlinear abundance of a mixed pixel of a space artificial target is characterized by comprising the following steps of:
acquiring a training spectrum matrix;
constructing a deep neural network; the deep neural network comprises an encoder and a decoder, wherein the encoder comprises an input layer, three hidden layers and an abundance output layer, and the decoder is a weighted combination nonlinear hybrid model;
inputting the training spectral matrix into the encoder to output an abundance matrix;
determining a reconstructed spectrum of the training spectrum matrix by utilizing a weighted combination nonlinear hybrid model according to the abundance matrix;
adjusting the weight and the offset parameter in the deep neural network by adopting a root mean square error according to the training spectrum matrix and the reconstructed spectrum of the training spectrum matrix to obtain an updated encoder and an updated decoder;
judging whether the current iteration times are smaller than the preset iteration times or not, and obtaining a judgment result;
if the judgment result shows that the current iteration times are smaller than the preset iteration times, updating an abundance matrix according to the updated encoder, and returning to the step of determining the reconstruction spectrum of the training spectrum matrix by using a weighted combination nonlinear hybrid model according to the abundance matrix;
if the judgment result shows that the current iteration times are larger than or equal to the preset iteration times, taking the updated encoder as a trained test model;
acquiring spectral data to be processed;
and determining the abundance value corresponding to each end member by adopting a trained test model according to the spectral data to be processed.
2. The method for determining the nonlinear abundance of the mixed pixels of the artificial spatial target according to claim 1, wherein the acquiring spectral data to be processed specifically comprises:
and collecting spectral data of the space target by using the spectral equipment to obtain spectral data to be processed.
3. The method for determining the nonlinear abundance of the mixed pixels of the artificial spatial target according to claim 1, wherein the determining the reconstructed spectrum of the training spectrum matrix by using a weighted combination nonlinear mixed model according to the abundance matrix specifically comprises:
determining a first reconstructed spectrum according to the formula

r_1 = Σ_{i=1}^{R−1} Σ_{j=i+1}^{R} a_i^pre a_j^pre (M_i ⊙ M_j);

determining a second reconstructed spectrum according to the formula r_2 = M a^pre;

determining a third reconstructed spectrum according to the formula r_3 = (M a^pre)^ζ;

determining a fourth reconstructed spectrum according to the formula r = w_61 r_1 + w_62 r_2 + w_63 r_3, wherein the fourth reconstructed spectrum is the reconstructed spectrum of the training spectrum matrix;

wherein r_1 denotes the first reconstructed spectrum, r_2 the second reconstructed spectrum, r_3 the third reconstructed spectrum, and r the reconstructed spectrum of the training spectrum matrix; (M_i ⊙ M_j) denotes the Hadamard product between end members; M denotes the end-member spectrum; R denotes the number of end members, with i = 1, 2, …, R−1 and j = i+1, …, R; a^pre denotes the abundance matrix, and a_i^pre and a_j^pre denote the abundance values corresponding to end members M_i and M_j; ζ denotes the exponent coefficient; and w_61, w_62 and w_63 denote the weights of the first, second and third reconstructed spectra, respectively.
4. The method for determining the nonlinear abundance of mixed pixels of a spatial artificial target according to claim 1, wherein the method for adjusting the weights and bias parameters in the deep neural network by using root mean square error according to the training spectrum matrix and the reconstructed spectrum to obtain an updated encoder and an updated decoder specifically comprises:
adjusting the weights and bias parameters in the deep neural network according to the formula W, b = argmin(L2Loss(r, r_0)) to obtain an updated encoder and an updated decoder;
wherein W denotes the weights of the deep neural network, b denotes the biases of the deep neural network, r denotes the reconstructed spectrum of the training spectrum matrix, and r_0 denotes the training spectrum matrix.
5. A spatial artificial target mixed pixel nonlinear abundance determination system is characterized by comprising:
the training spectrum matrix acquisition module is used for acquiring a training spectrum matrix;
the deep neural network construction module is used for constructing a deep neural network; the deep neural network comprises an encoder and a decoder, wherein the encoder comprises an input layer, three hidden layers and an abundance output layer, and the decoder is a weighted combination nonlinear hybrid model;
the abundance matrix output module is used for inputting the training spectrum matrix into the encoder and outputting an abundance matrix;
the reconstruction spectrum determining module is used for determining a reconstruction spectrum of the training spectrum matrix by utilizing a weighted combination nonlinear hybrid model according to the abundance matrix;
the updating module is used for adjusting the weight and the offset parameter in the deep neural network by adopting a root mean square error according to the training spectrum matrix and the reconstructed spectrum to obtain an updated encoder and an updated decoder;
the judging module is used for judging whether the current iteration times are smaller than the preset iteration times or not and obtaining a judging result;
the return module is used for updating the abundance matrix according to the updated encoder and returning to the reconstruction spectrum determining module if the judgment result shows that the current iteration times are smaller than the preset iteration times;
the trained test model determining module is used for taking the updated encoder as a trained test model if the judging result shows that the current iteration times are more than or equal to the preset iteration times;
the to-be-processed spectral data acquisition module is used for acquiring to-be-processed spectral data;
and the abundance value determining module is used for determining the abundance value corresponding to each end member by adopting a trained test model according to the spectral data to be processed.
6. The system for determining the nonlinear abundance of mixed pixels of a spatial artificial target according to claim 5, wherein the module for acquiring spectral data to be processed specifically comprises:
and the to-be-processed spectral data acquisition unit is used for acquiring spectral data of the space target by using the spectral equipment and acquiring the to-be-processed spectral data.
7. The system for determining the nonlinear abundance of mixed pixels of a spatial artificial target according to claim 5, wherein the reconstruction spectrum determination module specifically comprises:
a first reconstructed spectrum determination unit, configured to determine a first reconstructed spectrum according to the formula

r_1 = Σ_{i=1}^{R−1} Σ_{j=i+1}^{R} a_i^pre a_j^pre (M_i ⊙ M_j);

a second reconstructed spectrum determination unit, configured to determine a second reconstructed spectrum according to the formula r_2 = M a^pre;

a third reconstructed spectrum determination unit, configured to determine a third reconstructed spectrum according to the formula r_3 = (M a^pre)^ζ;

a fourth reconstructed spectrum determination unit, configured to determine a fourth reconstructed spectrum according to the formula r = w_61 r_1 + w_62 r_2 + w_63 r_3, wherein the fourth reconstructed spectrum is the reconstructed spectrum of the training spectrum matrix;

wherein r_1 denotes the first reconstructed spectrum, r_2 the second reconstructed spectrum, r_3 the third reconstructed spectrum, and r the reconstructed spectrum of the training spectrum matrix; (M_i ⊙ M_j) denotes the Hadamard product between end members; M denotes the end-member spectrum; R denotes the number of end members, with i = 1, 2, …, R−1 and j = i+1, …, R; a^pre denotes the abundance matrix, and a_i^pre and a_j^pre denote the abundance values corresponding to end members M_i and M_j; ζ denotes the exponent coefficient; and w_61, w_62 and w_63 denote the weights of the first, second and third reconstructed spectra, respectively.
8. The system for determining the nonlinear abundance of mixed pixels of a spatial artificial target according to claim 5, wherein the updating module specifically comprises:
an updating unit, configured to adjust the weights and bias parameters in the deep neural network according to the formula W, b = argmin(L2Loss(r, r_0)) to obtain an updated encoder and an updated decoder;
wherein W denotes the weights of the deep neural network, b denotes the biases of the deep neural network, r denotes the reconstructed spectrum of the training spectrum matrix, and r_0 denotes the training spectrum matrix.
CN202010362897.8A 2020-04-30 2020-04-30 Method and system for determining nonlinear abundance of mixed pixels of space artificial target Pending CN111581879A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010362897.8A CN111581879A (en) 2020-04-30 2020-04-30 Method and system for determining nonlinear abundance of mixed pixels of space artificial target


Publications (1)

Publication Number Publication Date
CN111581879A true CN111581879A (en) 2020-08-25

Family

ID=72111902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010362897.8A Pending CN111581879A (en) 2020-04-30 2020-04-30 Method and system for determining nonlinear abundance of mixed pixels of space artificial target

Country Status (1)

Country Link
CN (1) CN111581879A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105320959A (en) * 2015-09-30 2016-02-10 西安电子科技大学 End member learning based hyperspectral image sparse unmixing method
CN105975912A (en) * 2016-04-27 2016-09-28 天津大学 Hyperspectral image nonlinearity solution blending method based on neural network
CN109389106A (en) * 2018-12-20 2019-02-26 中国地质大学(武汉) A kind of high spectrum image solution mixing method and system based on 3D convolutional neural networks
CN111008975A (en) * 2019-12-02 2020-04-14 北京航空航天大学 Mixed pixel unmixing method and system for space artificial target linear model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chen Lei et al., "Hyperspectral image unmixing based on mixed pixel model estimation", Infrared Technology (《红外技术》) *
Han Zhu et al., "Nonlinear unmixing of GF-5 hyperspectral images with an autoencoder network", Journal of Remote Sensing (《遥感学报》) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699838A (en) * 2021-01-13 2021-04-23 武汉大学 Hyperspectral mixed pixel nonlinear blind decomposition method based on spectral diagnosis characteristic weighting
CN112699838B (en) * 2021-01-13 2022-06-07 武汉大学 Hyperspectral mixed pixel nonlinear blind decomposition method based on spectral diagnosis characteristic weighting
CN112884062A (en) * 2021-03-11 2021-06-01 四川省博瑞恩科技有限公司 Motor imagery classification method and system based on CNN classification model and generation countermeasure network
CN112884062B (en) * 2021-03-11 2024-02-13 四川省博瑞恩科技有限公司 Motor imagery classification method and system based on CNN classification model and generated countermeasure network

Similar Documents

Publication Publication Date Title
Schiller et al. Neural network for emulation of an inverse model operational derivation of Case II water properties from MERIS data
CN110852227A (en) Hyperspectral image deep learning classification method, device, equipment and storage medium
CN113222316B (en) Variation scenario simulation method based on FLUS model and biodiversity model
CN111581879A (en) Method and system for determining nonlinear abundance of mixed pixels of space artificial target
CN111008975B (en) Mixed pixel unmixing method and system for space artificial target linear model
Stephan et al. Daytime ionosphere retrieval algorithm for the Ionospheric Connection Explorer (ICON)
Okamura et al. Feasibility study of multi-pixel retrieval of optical thickness and droplet effective radius of inhomogeneous clouds using deep learning
Maddy et al. MIIDAPS-AI: An explainable machine-learning algorithm for infrared and microwave remote sensing and data assimilation preprocessing-Application to LEO and GEO sensors
Jamet et al. Use of a neurovariational inversion for retrieving oceanic and atmospheric constituents from ocean color imagery: A feasibility study
Pedersen et al. Empirical modeling of plasma clouds produced by the Metal Oxide Space Clouds experiment
CN106650049B (en) Static rail area array remote sensing camera time-sharing dynamic imaging simulation method
Song et al. Radar data simulation using deep generative networks
CN116797928A (en) SAR target increment classification method based on stability and plasticity of balance model
Bagheri Using deep ensemble forest for high-resolution mapping of PM2. 5 from MODIS MAIAC AOD in Tehran, Iran
CN116609857A (en) Cloud vertical structure parameter estimation method based on visible light, infrared and microwave images
Xu et al. Static and dynamic models of observation toward earth by satellite coverage
CN114862896A (en) Depth model-based visible light-infrared image conversion method
Kuter et al. Estimation of subpixel snow-covered area by nonparametric regression splines
Chuan et al. Computation of atmospheric optical parameters based on deep neural network and PCA
Rochac et al. A spectral feature based CNN long short-term memory approach for classification
Wheeler et al. Satellite propulsion spectral signature detection and analysis through hall effect thruster plume and atmospheric modeling
Muratov et al. Use of AI for Satellite Model Determination from Low Resolution 2D Images
Tordi et al. Simulation of a Layer-Oriented MCAO system
CN117540152A (en) Filling method for complex target flight data based on wavelet transformation
Kerekes Model-based exploration of HSI spaceborne sensor requirements with application performance as the metric

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200825