CN110363712B - Sparse dual constraint hyperspectral image unmixing method - Google Patents

Sparse dual constraint hyperspectral image unmixing method Download PDF

Info

Publication number
CN110363712B
CN110363712B CN201910514472.1A CN201910514472A CN110363712B CN 110363712 B CN110363712 B CN 110363712B CN 201910514472 A CN201910514472 A CN 201910514472A CN 110363712 B CN110363712 B CN 110363712B
Authority
CN
China
Prior art keywords
matrix
hyperspectral image
sparse
layer
abundance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910514472.1A
Other languages
Chinese (zh)
Other versions
CN110363712A (en
Inventor
舒振球
孙燕武
陆翼
范洪辉
由从哲
张�杰
郁钱
李鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University of Technology
Original Assignee
Jiangsu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University of Technology filed Critical Jiangsu University of Technology
Priority to CN201910514472.1A priority Critical patent/CN110363712B/en
Publication of CN110363712A publication Critical patent/CN110363712A/en
Application granted granted Critical
Publication of CN110363712B publication Critical patent/CN110363712B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G06T5/70
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a hyperspectral image unmixing method with sparse dual constraint, which comprises the following steps: s10, obtaining a hyperspectral image to be unmixed, and constructing a nearest neighbor graph by searching k nearest neighbors of each pixel point in a high-dimensional data matrix; s20, taking each pixel point in the nearest neighbor graph constructed in the step S10 as a vector, and constructing another graph in a sparse area by a method of setting sparse distances among the pixel points; s30, according to the two graphs constructed in the step S10 and the step S20, establishing a loss function C based on nonnegative matrix factorization under sparse constraint: s40, decomposing the data matrix N layer by layer to obtain corresponding matrix factors, adjusting the matrix factors by using a fine adjustment rule after decomposing to the last layer, and carrying out iterative updating after adjustment; s50, outputting a final iteration result, obtaining an end member matrix U and an abundance matrix V after unmixing the hyperspectral image, and completing unmixing of the hyperspectral image. The whole speed is high, the occupied storage space is small, the complexity is low, a certain processing speed is maintained, and the mixing knowing efficiency is improved.

Description

Sparse dual constraint hyperspectral image unmixing method
Technical Field
The invention relates to the technical field of image processing, in particular to a hyperspectral image unmixing method.
Background
The hyperspectral remote sensing technology combines a spectrum technology and an imaging technology to acquire multidimensional information and detect one-dimensional spectrum information and two-dimensional collection space of a target at the same time, so that continuous and narrow-band image data with hyperspectral resolution are obtained. In general, the resolution of hyperspectral images is lower, mixed pixels are commonly present in the images, and the processing process is more difficult than that of pure pixels, but has more practical significance. The hyperspectral unmixing technology uses the mixed pixels as the center, and solves the proportion of the mixed components distributed in the mixed pixels by a relatively accurate classification technology, namely solves the abundance. During the unmixing process, there are two modes of mixing: a linear mixed mode and a nonlinear mixed mode, wherein the linear mixed mode treats the pixel spectrum as a linear combination of end member features weighted by their abundance ratios; nonlinear hybrid models are highly complex and inefficient to process, and are generally not considered.
In recent years, non-negative matrix factorization based on a linear hybrid model has been attracting attention as an effective tool for hyperspectral unmixing, which takes unmixing as a blind source separation problem, and decomposes image data into two matrices, namely an end member matrix and an abundance matrix. In the hyperspectral image, each pixel is regarded as a mixture of several end members, and given that the number of end members in the image is K and the corresponding wavelength index band is a, any one pixel N in the hyperspectral image N is a vector of a row 1 column. Let U 1 An end member matrix (u 1 ,...,u j ,...,u K ) Wherein u is j For a row 1 column vector representing the spectral characteristics of the j-th end member, then pixel n can be considered approximately as a linear combination of end members, as in equation (1):
n=U 1 v+e (1)
wherein v is a vector of K rows and 1 columns, representing the abundance of the end members; e is the additional gaussian white noise. Let B be the number of pixels in the hyperspectral image N, E be the matrix of noise added a rows and B columns, then the hyperspectral image N can be expressed as equation (2):
N=U 1 V+E (2)
the abundance constraint sum can be expressed as formula (3):
Figure BDA0002094564320000011
where V is an abundance matrix of K rows and B columns, i=1, 2.
The purpose of hyperspectral unmixing is to estimate the end member matrix U for a given hyperspectral image N 1 And an abundance matrix V, the end member matrix U because the end-constructed spectral response and its proportion on each pixel cannot be less than 0 1 And the abundance matrix V are non-negative. Although the non-negative matrix factorization can decompose the matrix of the hyperspectral image into the form of the product of two non-negative matrices, so that the matrix can become a natural solution of the hyperspectral image, the non-convexity of the loss function in the non-negative matrix factorization can lead to the problem that a unique solution cannot be obtained in the decomposition process and is easy to fall into local minima.
Disclosure of Invention
Aiming at the problems, the invention provides a sparse dual constraint hyperspectral image unmixing method, which effectively solves the technical problem of high complexity in the hyperspectral image unmixing in the prior art.
The technical scheme provided by the invention is as follows:
a sparse dual constrained hyperspectral image unmixing method, comprising:
s10, obtaining a hyperspectral image to be unmixed, and constructing a nearest neighbor graph by searching k nearest neighbors of each pixel point in a high-dimensional data matrix;
s20, taking each pixel point in the nearest neighbor graph constructed in the step S10 as a vector, and constructing another graph in a sparse area by a method of setting sparse distances among the pixel points;
s30, according to the two graphs constructed in the step S10 and the step S20, establishing a loss function C based on nonnegative matrix factorization under sparse constraint:
Figure BDA0002094564320000021
wherein N represents a data matrix corresponding to the hyperspectral image, U represents an end member matrix, V represents an abundance matrix, and mu represents a measure of V 1/2 Coherence coefficient, L, of contribution to loss function C 1 And L 2 Laplacian matrixes corresponding to the two graphs, wherein gamma represents a graph constraint coefficient for measuring graph constraint, tr (·) represents a trace of the matrix, and beta represents a balance factor of a characteristic space and a sparse region;
s40, decomposing the data matrix N layer by layer to obtain corresponding matrix factors, adjusting the matrix factors by using a fine adjustment rule after decomposing to the last layer, and carrying out iterative updating after adjustment;
s50, outputting a final iteration result, obtaining an end member matrix U and an abundance matrix V after unmixing the hyperspectral image, and completing unmixing of the hyperspectral image.
Further preferably, in step S40, the fine tuning rule for the matrix factor in each layer decomposition is:
Figure BDA0002094564320000031
Figure BDA0002094564320000032
wherein, the liquid crystal display device comprises a liquid crystal display device, s representing the number of layers of the current non-negative matrix factorization; psi phi type s-1 Represents the result of the s-1 layer matrix decomposition, and ψ s-1 =U 1 U 2 ...U s-1 ,U 1 、U 2 、...、U s-1 、U s Representing the end member matrix resulting from the decomposition of the corresponding number of layers,
Figure BDA0002094564320000033
χ 1 represents a balance constraint parameter on the abundance matrix, +.>
Figure BDA0002094564320000034
Representing reconstruction of the s-th layer of the abundance matrix V s Representing the abundance matrix obtained by decomposing the s layer, < >>
Figure BDA0002094564320000035
Representing the conjugate matrix of the data matrix N.
Further preferably, in step S40, the iterative updating rule of the end member matrix U and the abundance matrix V is:
U←U.*NV T ./UVV T
Figure BDA0002094564320000036
wherein W is r Representing the weight matrix of the graph, D r Representing a weight matrix W r Is a diagonal matrix of (a), and
Figure BDA0002094564320000037
r=1 or 2, representing the code numbers corresponding to the two figures; i represents the ith row of the weight matrix and j represents the jth column of the weight matrix.
In the sparse dual constraint hyperspectral image unmixing method provided by the invention, the nonnegative matrix factorization method is optimized, the sparse constraint is added, meanwhile, the double graphs are constructed, a new loss function is established, and the iteration method is used for solving to complete the unmixing of hyperspectral images, so that the whole solving process is simpler, the speed is high, the occupied storage space is small, the condition that the unique solution cannot be obtained in the factorization process due to the non-convexity of the loss function in the existing nonnegative matrix factorization is avoided, the complexity is low, meanwhile, a certain processing speed can be maintained, the unmixing efficiency is greatly improved, and the method is suitable for a large amount of data. In addition, depth non-negative matrix factorization is used, potential information among data is mined to a certain extent, the data is fully processed, and the utilization rate is high.
Drawings
The above features, technical features, advantages and implementation thereof will be further described in the following detailed description of preferred embodiments with reference to the accompanying drawings in a clearly understandable manner.
Fig. 1 is a schematic flow chart of a sparse dual constraint hyperspectral image unmixing method in the invention.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will explain the specific embodiments of the present invention with reference to the accompanying drawings. It is evident that the drawings in the following description are only examples of the invention, from which other drawings and other embodiments can be obtained by a person skilled in the art without inventive effort.
Aiming at the technical problem that the hyperspectral image is easy to be in local minimum difficult problem when being decomposed in the prior art, the invention provides a brand-new sparse dual constraint hyperspectral image unmixing method, as shown in figure 1, which comprises the following steps:
s10, obtaining a hyperspectral image to be unmixed, and constructing a nearest neighbor graph by searching k nearest neighbors of each pixel point in a high-dimensional data matrix;
s20, taking each pixel point in the nearest neighbor graph constructed in the step S10 as a vector, and constructing another graph in a sparse area by a method of setting sparse distances among the pixel points;
s30, according to the two graphs constructed in the step S10 and the step S20, establishing a loss function C based on non-negative matrix factorization under sparse constraint, wherein the loss function C is as shown in the formula (4):
Figure BDA0002094564320000041
wherein N represents a data matrix corresponding to the hyperspectral image, U represents an end member matrix, V represents an abundance matrix, and mu represents a measure of V 1/2 Coherence coefficient, L, of contribution to loss function C 1 And L 2 The Laplace matrixes corresponding to the two graphs, wherein gamma represents a graph constraint coefficient (a scalar quantity) for measuring the constraint of the graphs, tr (·) represents the trace of the matrix, and beta represents a balance factor of the characteristic space and the sparse area;
s40, decomposing the data matrix N layer by layer to obtain corresponding matrix factors, adjusting the matrix factors by using a fine adjustment rule after decomposing to the last layer, and carrying out iterative updating after adjustment;
s50, outputting a final iteration result, obtaining an end member matrix U and an abundance matrix V after unmixing the hyperspectral image, and completing unmixing of the hyperspectral image.
The hyperspectral image unmixing method is established on depth non-negative matrix factorization under sparse constraint, and the depth non-negative matrix factorization is explained as follows:
the non-negative matrix factorization of the most basic form is just a single-layer learning process, and canCapable of learning end member matrix U simultaneously 1 And an abundance matrix, and stacking a hidden layer on the bottom of each layer to decompose the abundance matrix V obtained on the single layer into a new end member matrix U 2 And abundance matrix V 2 Thus, by expanding the shallow non-negative matrix factorization, a two-layer non-negative matrix factorization is obtained. In each subsequent decomposition, further decomposing the abundance matrix obtained by the previous decomposition, and finally decomposing the data matrix N into S+1 factors, namely: n=u 1 U 2 ...U S V S The formula (5) holds for different layers:
Figure BDA0002094564320000051
wherein S represents the total number of layers of decomposition, U 1 、U 2 、...、U S Representing the end member matrix obtained by decomposing each layer, V 1 、V 2 、...、V S Representing the abundance matrix obtained by decomposing each layer.
When decomposing to the last layer, in order to reduce the total reconstruction error, the loss function C according to equation (6) when decomposing each layer deep Trimming all matrix factors:
Figure BDA0002094564320000052
finally, an optimized matrix factor is obtained, and the opposite end member matrix U and the abundance matrix V are calculated according to the formula (7) and the formula (8):
U=U 1 U 2 ...U S (7)
V=V S (8)
however, in the above decomposition process, the product of matrix factors is not as good as possible to approximate the data matrix N, and the loss function C in equation (6) deep With non-convexity, the final result is not stable. Therefore, in the invention, sparse constraint is added to optimize the model, and a sparse constraint depth non-negative matrix factorization model with total variation is establishedType (2).
Before the next matrix decomposition, the following updates are made to each layer according to equations (9) and (10):
U=U.*(UVT)./(UVV T ) (9)
Figure BDA0002094564320000053
wherein, the liquid crystal display device comprises a liquid crystal display device,
Figure BDA0002094564320000054
is the conjugate matrix of the end member matrix U.
In this fine tuning stage, the update rule is as in formula (11) and formula (12):
ψ s-1 =U 1 U 2 ...U s-1 (11)
Figure BDA0002094564320000061
wherein, the liquid crystal display device comprises a liquid crystal display device,
Figure BDA0002094564320000062
is the reconstruction of the s-th layer of the abundance matrix; psi phi type s-1 Represents the result of the s-1 layer matrix decomposition, and when s=1, ψ s-1 Is an identity matrix. Based on this, the loss function C in the formula (6) deep Can be rewritten as formula (13):
Figure BDA0002094564320000063
in practical application, by selecting a proper step length, a fine tuning rule for the s-th layer in the adjustment process is obtained, as shown in formula (14). The step size is set in advance, for example, to about 0.1, according to the actual situation.
Figure BDA0002094564320000064
Wherein, the liquid crystal display device comprises a liquid crystal display device, s representing the number of layers of the current non-negative matrix factorization;
Figure BDA0002094564320000065
χ 1 representing balance constraint parameters of the abundance matrix as positive numbers; />
Figure BDA0002094564320000066
Representing the conjugate matrix of the data matrix N.
As demonstrated by Qian et al, L for non-negative matrix factorization 1/2 Is the best choice for measuring sparsity and computational complexity. So in the present invention the loss function C deep Addition of L 1/2 Constraint, obtaining a loss function C as in equation (15) deep I.e. a deep non-negative matrix factorization model under sparse constraint with total variation:
Figure BDA0002094564320000067
wherein the loss function C deep The result is an optimization of the loss function C, μ is a measuring device for measuring V 1/2 Coefficients that constrain the degree of contribution to the loss function.
As A > K, the abundance vector can be viewed as a projection of the data matrix N onto a K-dimensional space, where A represents the dimension of the data matrix N, K representing the dimensions of the sparse region. According to manifold learning theory, considering that hyperspectral image is high-dimensional data, the distance considering the manifold of data should be added properly during processing, so as to obtain better unmixing effect. In order to maintain the manifold structure of the high-dimensional space in the low-micro space, data points are taken as vertexes, nearest neighbor graphs are constructed by using high-order data, and a graph regularizer is combined into a model of non-negative matrix factorization so as to ensure consistency of the internal structure in the characteristic space and the abundance matrix.
The construction of the graph starts with finding the k nearest neighbors of each pixel in the high-dimensional data to obtain a nearest neighbor graph (corresponding to step S10), assuming that the nearest neighbor graph has a code number of 1, when pixel n j Is pixel n i When a k nearest neighbor point of (2), its weight W 1ij Assignment as (16):
Figure BDA0002094564320000071
however, using manifold information alone does not adequately describe the nature of the data. Consider that adjacent pixels may have similar spectral responses and have similar unmixed effects. Thus, a second graph is created in a similar way (corresponding to step S20). In this spatial map, vertices are pixels, and nearest neighbors can be found according to the spatial distance between pixels, specifically in a sparse region, by setting the sparse distance between pixels. If the code of the nearest neighbor graph is 1, the code of the graph is 2.
After constructing both graphs, the original loss function can be rewritten as equation (4), and the optimization results in equation (17):
Figure BDA0002094564320000072
in order to obtain the optimal values of the end member matrix U and the abundance matrix V, the end member matrix U and the abundance matrix V are respectively biased according to the formula (4) to obtain the formula (18):
Figure BDA0002094564320000073
wherein, ψ is the miscellaneous item of the bias derivative of the opposite end member matrix U,
Figure BDA0002094564320000077
for solving the bias-leading miscellaneous items of the abundance matrix V, according to the Karush-Kuhn-Tucker condition, the miscellaneous items ψ are always multiplied by the end member matrix U, and the product is 0; miscellaneous items->
Figure BDA0002094564320000078
And (3) always performing product operation with the abundance matrix V, wherein the product is 0, so that the formula (19) is obtained:
Figure BDA0002094564320000074
obtaining a double-graph regularized updating rule as shown in the formula (20) through transposition and division:
Figure BDA0002094564320000075
wherein W is r Representing the weight matrix of the graph, D r Representing a weight matrix W r Is a diagonal matrix of (a), and
Figure BDA0002094564320000076
r=1 or 2, representing the code numbers corresponding to the two figures; i represents the ith row of the weight matrix and j represents the jth column of the weight matrix.
Based on the above, in the hyperspectral image unmixing method provided by the invention, hyperspectral images (corresponding data matrix) to be unmixed are obtained N ) And constructing a nearest neighbor graph by searching k nearest neighbors of each pixel point in the high-dimensional data matrix, taking each pixel point in the nearest neighbor graph as a vector, and constructing another graph in a sparse area by setting a sparse distance between the pixel points to finish the construction of the double graph. In this process, the sparse distance is set according to the actual situation, for example, the value is 1/2.
After the construction of the double graph is completed, a loss function C based on a non-negative matrix factorization as shown in a formula (4) under the sparse constraint is established; then, according to the established loss function, carrying out layer-by-layer decomposition on the data matrix N to obtain a corresponding matrix factor, and in each decomposition process, adjusting the matrix factor by using a fine adjustment rule shown as a formula (14) after decomposing to the last layer, and carrying out iterative updating by using a rule shown as a formula (20) after adjusting; after iterating to the preset times (preset according to the requirement), outputting a final iterated result to obtain an end member matrix U and an abundance matrix V after unmixing the hyperspectral image, and completing unmixing of the hyperspectral image.
In the example, the hyperspectral image is a remote sensing image with a mixed graph area, after an image matrix of a specific area to be unmixed is determined, a nearest neighbor graph is formed according to k nearest neighbors, a double graph is further constructed by adopting the method of the invention, a corresponding loss function is built for decomposition, and the coincident image can be separated. Furthermore, the relevant applications can also be carried out in medical imaging.
It should be noted that the above embodiments can be freely combined as needed. The foregoing is merely a preferred embodiment of the invention, and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present invention, which are intended to be comprehended within the scope of the invention.

Claims (1)

1. A sparse dual constrained hyperspectral image unmixing method, comprising:
s10, obtaining a hyperspectral image to be unmixed, and constructing a nearest neighbor graph by searching k nearest neighbors of each pixel point in a high-dimensional data matrix;
s20, taking each pixel point in the nearest neighbor graph constructed in the step S10 as a vector, and constructing another graph in a sparse area by a method of setting sparse distances among the pixel points;
s30, according to the two graphs constructed in the step S10 and the step S20, establishing a loss function C based on nonnegative matrix factorization under sparse constraint:
Figure FDA0003956604660000011
wherein N represents a data matrix corresponding to the hyperspectral image, U represents an end member matrix, V represents an abundance matrix, and mu represents a measure of V 1/2 Coherence coefficient, L, of contribution to loss function C 1 And L 2 Laplacian matrixes corresponding to the two graphs, wherein gamma represents a graph constraint coefficient for measuring graph constraint, tr (·) represents a trace of the matrix, and beta represents a balance factor of a characteristic space and a sparse region;
s40, decomposing the data matrix N layer by layer to obtain corresponding matrix factors, adjusting the matrix factors by using a fine adjustment rule after decomposing to the last layer, and carrying out iterative updating after adjustment;
s50, outputting a final iteration result, obtaining an end member matrix U and an abundance matrix V after unmixing the hyperspectral image, and finishing unmixing the hyperspectral image;
in step S40, the fine tuning rule for the matrix factor in each layer decomposition is:
Figure FDA0003956604660000012
Figure FDA0003956604660000013
wherein s represents the number of layers of the current non-negative matrix factorization; psi phi type s-1 Represents the result of the s-1 layer matrix decomposition, and ψ s-1 =U 1 U 2 ...U s-1 ,U 1 、U 2 、...、U s-1 、U s Representing the end member matrix resulting from the decomposition of the corresponding number of layers,
Figure FDA0003956604660000014
χ 1 represents a balance constraint parameter on the abundance matrix, +.>
Figure FDA0003956604660000015
Representing reconstruction of the s-th layer of the abundance matrix V s Representing the abundance matrix obtained by decomposing the s layer, < >>
Figure FDA0003956604660000016
A conjugate matrix representing the data matrix N;
in step S40, the iterative updating rule of the end member matrix U and the abundance matrix V is:
U←U.*NV T ./UVV T
Figure FDA0003956604660000021
wherein W is r Representing the weight matrix of the graph, D r Representing a weight matrix W r And L is a diagonal matrix of r =D r -W r
Figure FDA0003956604660000022
r=1 or 2, representing the code numbers corresponding to the two figures; i represents the ith row of the weight matrix and j represents the jth column of the weight matrix.
CN201910514472.1A 2019-06-14 2019-06-14 Sparse dual constraint hyperspectral image unmixing method Active CN110363712B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910514472.1A CN110363712B (en) 2019-06-14 2019-06-14 Sparse dual constraint hyperspectral image unmixing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910514472.1A CN110363712B (en) 2019-06-14 2019-06-14 Sparse dual constraint hyperspectral image unmixing method

Publications (2)

Publication Number Publication Date
CN110363712A CN110363712A (en) 2019-10-22
CN110363712B true CN110363712B (en) 2023-06-23

Family

ID=68216100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910514472.1A Active CN110363712B (en) 2019-06-14 2019-06-14 Sparse dual constraint hyperspectral image unmixing method

Country Status (1)

Country Link
CN (1) CN110363712B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114841875B (en) * 2022-04-22 2023-08-11 哈尔滨师范大学 Hyperspectral image unmixing method based on graph learning and noise reduction

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014021996A1 (en) * 2012-08-03 2014-02-06 Raytheon Company System and method for reduced incremental spectral clustering
CN104392243A (en) * 2014-11-18 2015-03-04 西北工业大学 Nonlinear un-mixing method of hyperspectral images based on kernel sparse nonnegative matrix decomposition

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014021996A1 (en) * 2012-08-03 2014-02-06 Raytheon Company System and method for reduced incremental spectral clustering
CN104392243A (en) * 2014-11-18 2015-03-04 西北工业大学 Nonlinear un-mixing method of hyperspectral images based on kernel sparse nonnegative matrix decomposition

Also Published As

Publication number Publication date
CN110363712A (en) 2019-10-22

Similar Documents

Publication Publication Date Title
WO2020140421A1 (en) Computer-implemented method of training convolutional neural network, convolutional neural network, computer-implemented method using convolutional neural network, apparatus for training convolutional neural network, and computer-program product
US9317929B2 (en) Decomposition apparatus and method for refining composition of mixed pixels in remote sensing images
CN104952050B (en) High spectrum image adaptive de mixing method based on region segmentation
US10217018B2 (en) System and method for processing images using online tensor robust principal component analysis
CN108171279B (en) Multi-view video adaptive product Grassmann manifold subspace clustering method
US10268931B2 (en) Spatiotemporal method for anomaly detection in dictionary learning and sparse signal recognition
Li et al. Exploring compositional high order pattern potentials for structured output learning
Sigurdsson et al. Blind Hyperspectral Unmixing Using Total Variation and $\ell_q $ Sparse Regularization
CN108510013B (en) Background modeling method for improving robust tensor principal component analysis based on low-rank core matrix
CN115311187B (en) Hyperspectral fusion imaging method, system and medium based on internal and external prior
Sureau et al. Deep learning for a space-variant deconvolution in galaxy surveys
Griffith et al. Implementing Moran eigenvector spatial filtering for massively large georeferenced datasets
Borsoi et al. Kalman filtering and expectation maximization for multitemporal spectral unmixing
CN110689065A (en) Hyperspectral image classification method based on flat mixed convolution neural network
CN110363712B (en) Sparse dual constraint hyperspectral image unmixing method
Xiong et al. NMF-SAE: An interpretable sparse autoencoder for hyperspectral unmixing
CN109948462B (en) Hyperspectral image rapid classification method based on multi-GPU cooperative interaction data stream organization
Zhang et al. Covariance estimation for matrix-valued data
Carassou et al. Inferring the photometric and size evolution of galaxies from image simulations-I. Method
CN112504975B (en) Hyperspectral unmixing method based on constrained nonnegative matrix factorization
CN111062888A (en) Hyperspectral image denoising method based on multi-target low-rank sparsity and spatial-spectral total variation
Mesa et al. A distributed framework for the construction of transport maps
CN112949698B (en) Hyperspectral unmixing method for similarity constraint of non-local low-rank tensor
CN110992245B (en) Hyperspectral image dimension reduction method and device
Sigurdsson et al. Blind Nonlinear Hyperspectral Unmixing Using an $\ell_ {q} $ Regularizer

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant