CN103745487A - Bayes high-spectral unmixing compressive sensing method based on structured sparsity prior - Google Patents

Bayes high-spectral unmixing compressive sensing method based on structured sparsity prior

Info

Publication number
CN103745487A
Authority
CN
China
Prior art keywords
matrix
data
wavelet coefficient
abundances
distribution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310713709.1A
Other languages
Chinese (zh)
Other versions
CN103745487B (en)
Inventor
魏巍
张艳宁
张磊
李飞
孟庆洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201310713709.1A priority Critical patent/CN103745487B/en
Publication of CN103745487A publication Critical patent/CN103745487A/en
Application granted granted Critical
Publication of CN103745487B publication Critical patent/CN103745487B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Complex Calculations (AREA)

Abstract

The invention discloses a Bayesian hyperspectral unmixing compressive sensing method based on a structured sparsity prior, aiming to solve the low reconstruction accuracy of existing hyperspectral image compressive sensing methods combined with spectral unmixing. The technical scheme is as follows. In the compression stage, the inherent low-rank property of hyperspectral data is exploited through linear unmixing, a wavelet transform converts the abundances into a structured sparse signal, and compressive sensing then yields the compressed data. In the reconstruction stage, a suitable endmember matrix is selected from a spectral library, a structured sparsity prior is placed on the abundance wavelet coefficients, the abundance matrix is accurately estimated by Bayesian inference based on Gibbs sampling, and finally the original data are reconstructed with the linear mixing model. Compared with the compressed-sensing algorithms of the background art, the accuracy of the method is improved by about 10%.

Description

Bayesian hyperspectral unmixing compressive sensing method based on a structured sparsity prior
Technical field
The present invention relates to hyperspectral unmixing compressive sensing methods, and in particular to a Bayesian hyperspectral unmixing compressive sensing method based on a structured sparsity prior.
Background technology
Because the data volume of hyperspectral images is huge, compression algorithms with high compression ratios are of great significance for the acquisition, transmission and processing of hyperspectral images. Existing hyperspectral image compression algorithms fall mainly into two classes. The first class consists of traditional coding-based compression methods. These methods are extensions of ordinary image compression and focus on removing the redundancy within and between the bands of a hyperspectral image; they include cluster-based differential pulse-code modulation (CDPCM), the three-dimensional discrete wavelet transform (3D-DWT) and the three-dimensional discrete cosine transform (3D-DCT). However, such algorithms can only compress after the image has been acquired, so a large amount of hardware is still needed to acquire and temporarily store the data, and the compression ratio is usually low. The second class consists of methods based on compressive sensing. These methods can exactly reconstruct the original sparse signal from a small number of samples, which provides a theoretical basis for reducing the hardware cost of data acquisition, and their emphasis lies on the reconstruction process.
The document "A compressive sensing and unmixing scheme for hyperspectral data processing, IEEE Transactions on Image Processing, 2012, 21(3): 1200-1210" discloses a hyperspectral image compressive sensing method combined with spectral unmixing. The method decomposes the hyperspectral data into an abundance matrix and an endmember matrix by linear unmixing; the gradient of the abundance matrix along the spatial dimension is sparse, so it can be compressed and reconstructed in a compressive sensing framework. Specifically, a compressive sensing matrix is first used to obtain the compressed data; suitable endmembers are selected with the aid of a spectral library; the abundance matrix with sparse gradients is then reconstructed by compressive sensing; finally, the linear mixing model is used to reconstruct the hyperspectral data to be acquired. However, gradient sparsity cannot capture the deeper structured sparsity of the abundance matrix, and the augmented Lagrangian method used to reconstruct the abundances cannot adaptively select the weight of the gradient-sparsity regularization term. These two factors directly limit the reconstruction accuracy of this compression algorithm.
Summary of the invention
In order to overcome the low accuracy of the existing hyperspectral image compressive sensing method combined with spectral unmixing, the invention provides a Bayesian hyperspectral unmixing compressive sensing method based on a structured sparsity prior. In the compression stage, the method exploits the inherent low-rank property of hyperspectral data through linear unmixing, uses a wavelet transform to convert the abundances into a structured sparse signal, and then applies compressive sensing to obtain the compressed data. In the reconstruction stage, a suitable endmember matrix is selected from a spectral library, a structured sparsity prior is placed on the abundance wavelet coefficients, the abundance matrix is then accurately reconstructed by a Bayesian inference method based on Gibbs sampling, and finally the original data are reconstructed with the linear mixing model. On the Urban data acquired by the HYDICE sensor, the normalized mean squared error of the invention is below 0.4 at a compression ratio of 1:16; on the Indiana data acquired by AVIRIS, it is likewise below 0.4 at a compression ratio of 1:16, an improvement of about 10% over the compressed-sensing algorithms of the background art.
The technical solution adopted by the invention to solve the technical problem is a Bayesian hyperspectral unmixing compressive sensing method based on a structured sparsity prior, characterized by comprising the following steps.
Step 1. For a hyperspectral image X = [x_1, ..., x_{n_b}] ∈ R^{n_p×n_b}, n_p denotes the total number of pixels in the spatial domain, n_b denotes the number of bands, and x_i, i = 1, ..., n_b, is the column vector obtained by stacking the image of band i column by column. The n_b reflectance values of a pixel across the bands form the spectrum of that pixel, which is described by the linear mixing model. The linear mixing model regards any mixed spectrum as a linear combination of all endmembers, and the proportion of an endmember in a mixed spectrum is called its abundance. X is therefore the product of the abundance matrix and the endmember matrix:
X = HW   (1)
where H = [h_1, ..., h_{n_e}] ∈ R^{n_p×n_e} is the abundance matrix, h_i denotes the proportion of the i-th endmember in the mixed spectrum of every pixel, n_e is the number of endmembers, and W ∈ R^{n_e×n_b} is the endmember matrix whose rows w_i are the individual endmembers.
A discrete wavelet transform is applied to the abundances:
Θ = Ψ^T H   (2)
Y = Ψ^T X = Ψ^T HW = ΘW
where Ψ ∈ R^{n_p×n_p} is the wavelet basis, Θ = [θ_1, ..., θ_{n_e}] denotes the wavelet coefficients of the abundances, Y denotes the wavelet coefficients of the whole data set, and θ_i, the wavelet coefficients of h_i, exhibit structured sparsity.
Step 2. Reconstruct the abundance matrix from the wavelet coefficients and finally reconstruct the whole data set:
1. Generate an m × n_p matrix from Gaussian random variables and normalize each row to obtain the random sensing matrix Φ ∈ R^{m×n_p}. Sample the wavelet-transformed data with Φ to obtain the compressed data F ∈ R^{m×n_b}:
F = ΦY = ΦΨ^T X   (3)
where m is the length of the compressed measurement of a signal of length n_p; in general m < n_p.
2. For a scene of limited extent, the influence of environmental factors on the spectra is ignored and the number of endmembers is assumed to be finite. Using the ASTER spectral library, part of the spectra are selected from the library as endmembers for the specific scene, which finally determines the matrix W containing n_e endmembers.
3. Estimate the abundance matrix from the compressed data by Bayesian inference.
(1) From equations (2) and (3), least squares is used to separate out from the compressed data the quantity G ∈ R^{m×n_e} that depends only on the wavelet coefficients of the abundance matrix:
G = FW^T(WW^T)^{-1} = ΦΘ + N   (4)
where N ∈ R^{m×n_e} is the noise term introduced by the least-squares step.
(2) Write equation (4) column by column as g_i = Φθ_i + n_i. Assuming that the noise n_i follows a Gaussian distribution N(0, β_i^{-1} I) with zero mean and covariance β_i^{-1} I, and that the noise vectors are independent and identically distributed, the likelihood of θ_i, i = 1, 2, ..., n_e, is
p(g_i | θ_i, β_i^{-1}) = (2π β_i^{-1})^{-m/2} exp(-(β_i/2) ||g_i - Φθ_i||_2^2)   (5)
β_i is the precision of the noise vector, and its prior is specified as the gamma distribution
p(β_i | κ_0, υ_0) = Gamma(κ_0, υ_0)   (6)
where κ_0 = 10^{-6} and υ_0 = 10^{-6} are the shape and scale parameters of the gamma distribution.
(3) Establish the structured sparsity prior on each element of θ_i. The structured sparsity is mainly reflected in the tree structure of the wavelet coefficients: when a parent node is 0, its child nodes are very likely to be 0 as well, and vice versa:
p(θ_{s,ji} | β_{s,i}^{-1}) = (1 - π_{s,ji}) δ_0 + π_{s,ji} N(0, β_{s,i}^{-1})   (7)
where s = 1, ..., L indexes the scales of the wavelet transform, L is the total number of scales, and θ_{s,ji} is the j-th element of θ_i at scale s. δ_0 is a distribution concentrated entirely at 0. π_{s,ji} is a mixing proportion: when π_{s,ji} tends to 0, θ_{s,ji} is 0; conversely, when π_{s,ji} tends to 1, θ_{s,ji} is drawn from a Gaussian distribution. π_{s,ji} is determined by the tree structure of the wavelet transform:
π_{s,ji} = π_r if s = 1;  π_{s,ji} = π_s^0 if 2 ≤ s ≤ L and θ_{pa(s,ji)} = 0;  π_{s,ji} = π_s^1 if 2 ≤ s ≤ L and θ_{pa(s,ji)} ≠ 0   (8)
θ_{pa(s,ji)} is the parent node of θ_{s,ji}: when θ_{pa(s,ji)} = 0, π_{s,ji} is set to π_s^0, which makes θ_{s,ji} zero with high probability; when θ_{pa(s,ji)} ≠ 0, π_{s,ji} is set to π_s^1, which makes θ_{s,ji} nonzero with high probability. β_{s,i} is the precision of the Gaussian component in the prior of θ_{s,ji}. The prior distributions of π_r, π_s^0, π_s^1 and β_{s,i} are
π_r ~ Beta(ε_0^r, γ_0^r),  π_s^0 ~ Beta(ε_0^s, γ_0^s),  π_s^1 ~ Beta(ε_1^s, γ_1^s),  β_{s,i} ~ Gamma(κ_1, υ_1)   (9)
where κ_1 = 10^{-6}, υ_1 = 10^{-6}, [ε_0^r, γ_0^r] = [0.9, 0.1] × N_1, [ε_0^s, γ_0^s] = [1/n_p, 1/n_p] × N_s, and N_s denotes the number of wavelet coefficients at scale s = 1, ..., L of the wavelet transform.
(4) Sample the wavelet coefficients θ_{s,ji}. Steps (2) and (3) above provide the likelihood of θ_{s,ji}, the structured sparsity prior of θ_{s,ji}, and the prior distributions of the remaining unknown parameters. The wavelet coefficients Θ to be estimated are obtained by the Bayesian inference
p(Θ | G) ∝ ∫ p(G | Θ, β) p(Θ | Ω) p(Ω) p(β) dΩ   (10)
where Ω = {{β_{s,i}}_{i=1:n_e, s=2:L}, π_r, {π_s^0, π_s^1}_{s=2:L}} and β = [β_1, β_2, ..., β_{n_e}]^T collects all noise precisions. Assuming that the elements of β and Ω are independent and identically distributed, p(β) and p(Ω) are
p(β) = ∏_{i=1}^{n_e} Gamma(κ_0, υ_0),  p(Ω) = ∏_{i=1}^{n_e} ∏_{s=1}^{L} Gamma(κ_1, υ_1) × Beta(ε_0^r, γ_0^r) × ∏_{s=1}^{L} Beta(ε_0^s, γ_0^s) Beta(ε_1^s, γ_1^s)   (11)
A Markov chain Monte Carlo method based on Gibbs sampling is adopted: the posterior distributions of the quantities to be estimated are approximated by drawing samples from the corresponding conditional distributions. The Gibbs update of θ_{s,ji} is
p(θ_{s,ji} | ~) = (1 - π_{s,ji}) δ_0 + π_{s,ji} N(μ_{s,ji}, β_{s,ji}^{-1}),  β_{s,ji} = β_{s,i} + β_i Φ_j^T Φ_j,  μ_{s,ji} = β_{s,ji}^{-1} β_i Φ_j^T (g_i - Σ_{k≠j} Φ_k θ_{ki})   (12)
where ~ denotes the other parameters on which the conditional distribution depends and Φ_j is the j-th column of Φ.
(5) Sample the noise precision β_i. The conditional distribution is the gamma distribution
p(β_i | ~) = Gamma(κ_{β_i}, υ_{β_i}),  κ_{β_i} = κ_0 + m/2,  υ_{β_i} = υ_0 + (1/2)(g_i - Φθ_i)^T (g_i - Φθ_i)   (13)
(6) Sample the hyperparameters β_{s,i}, π_r, π_s^0 and π_s^1. The conditional distributions are
p(β_{s,i} | ~) = Gamma(κ_{β_{s,i}}, υ_{β_{s,i}}),  κ_{β_{s,i}} = κ_1 + (1/2) Σ_{j=1}^{N_s} 1(θ_{s,ji} ≠ 0),  υ_{β_{s,i}} = υ_1 + (1/2) Σ_{j=1}^{N_s} θ_{s,ji}^2   (14)
p(π_r | ~) = Beta(ε_{π_r}, γ_{π_r}),  ε_{π_r} = ε_0^r + Σ_{j=1}^{N_1} 1(θ_{1,ji} ≠ 0),  γ_{π_r} = γ_0^r + Σ_{j=1}^{N_1} 1(θ_{1,ji} = 0)   (15)
p(π_{s,i}^0 | ~) = Beta(ε_{π_{s,i}^0}, γ_{π_{s,i}^0}),  ε_{π_{s,i}^0} = ε_0^s + Σ_{j=1}^{N_s} 1(θ_{s,ji} ≠ 0, θ_{pa(s,ji)} = 0),  γ_{π_{s,i}^0} = γ_0^s + Σ_{j=1}^{N_s} 1(θ_{s,ji} = 0, θ_{pa(s,ji)} = 0)   (16)
p(π_{s,i}^1 | ~) = Beta(ε_{π_{s,i}^1}, γ_{π_{s,i}^1}),  ε_{π_{s,i}^1} = ε_1^s + Σ_{j=1}^{N_s} 1(θ_{s,ji} ≠ 0, θ_{pa(s,ji)} ≠ 0),  γ_{π_{s,i}^1} = γ_1^s + Σ_{j=1}^{N_s} 1(θ_{s,ji} = 0, θ_{pa(s,ji)} ≠ 0)   (17)
where 1(x) equals 1 when x is true and 0 otherwise.
(7) For every wavelet coefficient in Θ, steps (4), (5) and (6) are executed in turn and cyclically until the convergence or termination condition is met, which yields the final estimate of the wavelet coefficients
Θ̂ = (1/m_s) Σ_{t=1}^{m_s} Θ^{(t)}
where m_s is the total number of collected samples and Θ^{(t)} is the result of the t-th sampling sweep. The termination condition used is the number of iterations: the sampler runs 400 burn-in iterations followed by 200 sampling iterations.
4. According to the linear mixing model (1), the reconstructed hyperspectral data are obtained as X̂ = ΨΘ̂W.
The beneficial effects of the invention are as follows. In the compression stage, the method exploits the inherent low-rank property of hyperspectral data through linear unmixing, uses a wavelet transform to convert the abundances into a structured sparse signal, and then applies compressive sensing to obtain the compressed data. In the reconstruction stage, a suitable endmember matrix is selected from a spectral library, a structured sparsity prior is placed on the abundance wavelet coefficients, the abundance matrix is then accurately reconstructed by a Bayesian inference method based on Gibbs sampling, and finally the original data are reconstructed with the linear mixing model. On the Urban data acquired by the HYDICE sensor, the normalized mean squared error of the invention is below 0.4 at a compression ratio of 1:16; on the Indiana data acquired by AVIRIS, it is likewise below 0.4 at a compression ratio of 1:16, an improvement of about 10% over the compressed-sensing algorithms of the background art.
The present invention is described in detail below in conjunction with an embodiment.
Embodiment
The concrete steps of the Bayesian hyperspectral unmixing compressive sensing method based on a structured sparsity prior are as follows.
For a hyperspectral image X = [x_1, ..., x_{n_b}] ∈ R^{n_p×n_b}, n_p denotes the total number of pixels in the spatial domain, n_b denotes the number of bands, and x_i, i = 1, ..., n_b, is the column vector obtained by stacking the image of band i column by column. The n_b reflectance values of a pixel across the bands form the spectrum of that pixel. The unique spectrum of a pure material is usually called an endmember. Because of factors such as low spatial resolution, the spectrum of a pixel is often a mixture of the spectra of several pure materials. This spectral mixing phenomenon can be described with a linear or a nonlinear mixing model; only the linear mixing model is considered here. This model regards any mixed spectrum as a linear combination of all endmembers, and the proportion of an endmember in a mixed spectrum is called its abundance. X can therefore be expressed as the product of the abundance matrix and the endmember matrix:
X = HW   (1)
where H = [h_1, ..., h_{n_e}] ∈ R^{n_p×n_e} is the abundance matrix, h_i denotes the proportion of the i-th endmember in the mixed spectrum of every pixel, n_e is the number of endmembers, and W ∈ R^{n_e×n_b} is the endmember matrix whose rows w_i are the individual endmember spectra. Because neighbouring pixels have similar abundances for every endmember, the abundance matrix inherits the spatial similarity of neighbouring pixels. To extract a structured sparse signal from the abundance matrix, a discrete wavelet transform (DWT) is applied to the abundances:
Θ = Ψ^T H   (2)
Y = Ψ^T X = Ψ^T HW = ΘW
where Ψ ∈ R^{n_p×n_p} is the wavelet basis, Θ = [θ_1, ..., θ_{n_e}] denotes the wavelet coefficients of the abundances, Y denotes the wavelet coefficients of the whole data set, and θ_i, the wavelet coefficients of h_i, exhibit structured sparsity. The task is to reconstruct the sparse wavelet coefficients exactly from the compressed data by compressive sensing, then to reconstruct the abundance matrix from the wavelet coefficients, and finally to reconstruct the whole data set. A small numerical sketch of this setup is given below, and the concrete steps then follow.
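The following Python sketch is not part of the original patent text; it is a minimal illustration using NumPy, a synthetic scene, and an orthonormal Haar matrix standing in for the wavelet basis Ψ. It shows the linear mixing model of equation (1) and the abundance wavelet transform of equation (2).

    import numpy as np

    def haar_matrix(n):
        """Orthonormal Haar wavelet basis Psi of size n x n (n must be a power of 2)."""
        psi = np.array([[1.0]])
        while psi.shape[0] < n:
            k = psi.shape[0]
            top = np.kron(psi, [1.0, 1.0])          # coarse rows: previous-level rows repeated pairwise
            bot = np.kron(np.eye(k), [1.0, -1.0])   # new finest-scale detail rows
            psi = np.vstack([top, bot]) / np.sqrt(2.0)
        return psi.T                                # columns are basis vectors; Psi @ Psi.T = I

    # toy sizes: n_p pixels, n_b bands, n_e endmembers (illustrative values)
    n_p, n_b, n_e = 64, 32, 3
    rng = np.random.default_rng(0)

    W = rng.uniform(0.0, 1.0, size=(n_e, n_b))      # endmember matrix (n_e x n_b)
    H = rng.dirichlet(np.ones(n_e), size=n_p)       # abundance matrix (n_p x n_e), rows sum to 1
    X = H @ W                                       # linear mixing model, eq. (1): X = H W

    Psi = haar_matrix(n_p)                          # wavelet basis (n_p x n_p)
    Theta = Psi.T @ H                               # abundance wavelet coefficients, eq. (2)
    Y = Psi.T @ X                                   # wavelet coefficients of the whole data set
    assert np.allclose(Y, Theta @ W)                # Y = Theta W holds exactly, since Psi is orthonormal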
1. Acquisition of the compressed data.
A matrix of size m × n_p is generated from Gaussian random variables and each row is normalized, which gives the random sensing matrix Φ ∈ R^{m×n_p}. The wavelet-transformed data are sampled with Φ to obtain the compressed data F ∈ R^{m×n_b}:
F = ΦY = ΦΨ^T X   (3)
where m is the length of the compressed measurement of a signal of length n_p; in general m < n_p.
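Continuing the same toy example, a hedged sketch of the sampling step of equation (3); the choice m = n_p / 4 is an illustrative assumption.

    m = n_p // 4                                        # compressed length, m < n_p
    Phi = rng.normal(size=(m, n_p))                     # Gaussian random matrix
    Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)   # normalize each row, as described above

    F = Phi @ Y                                         # compressed data, eq. (3): F = Phi Psi^T X
    assert F.shape == (m, n_b)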
2. Endmember selection.
For a scene of limited extent, the influence of environmental factors on the spectra is ignored and the number of endmembers is assumed to be finite. The ASTER spectral library is used, and part of the spectra are selected from the library as endmembers for the specific scene, which finally determines the matrix W containing n_e endmembers. A fixed scene usually contains only a small number of endmembers, so linear unmixing exposes the intrinsic low-rank property of the hyperspectral data.
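A purely hypothetical sketch of the endmember-selection step; the library matrix and the material indices below are placeholders, since the patent only states that part of the spectra are extracted from the ASTER library for the specific scene.

    # `library` stands in for the ASTER spectral library resampled to the sensor's n_b bands,
    # and `scene_materials` are indices assumed to be known for the specific scene.
    n_lib = 20
    library = rng.uniform(0.0, 1.0, size=(n_lib, n_b))   # placeholder library spectra
    scene_materials = [2, 7, 11]                          # assumed indices of the n_e scene materials
    W_sel = library[scene_materials, :]                   # endmember matrix W (n_e x n_b)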
3. Reconstruction of the abundance matrix by Bayesian inference.
For an accurate reconstruction of the original data, the abundance matrix is estimated from the compressed data by Bayesian inference. The concrete procedure consists of the following steps.
(1) From equations (2) and (3), least squares is used to separate out from the compressed data the quantity G ∈ R^{m×n_e} that depends only on the wavelet coefficients of the abundance matrix:
G = FW^T(WW^T)^{-1} = ΦΘ + N   (4)
where N ∈ R^{m×n_e} is the noise term introduced by the least-squares step.
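The least-squares step of equation (4) in the same toy setting (noise-free here, so G equals ΦΘ up to rounding).

    # Removes W from the compressed data so that each column g_i of G depends only on theta_i.
    G = F @ W.T @ np.linalg.inv(W @ W.T)      # (m x n_e), equals Phi @ Theta plus the noise N
    print(np.linalg.norm(G - Phi @ Theta))    # essentially zero in this noise-free toy example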
(2) Construct the Gaussian likelihood of the wavelet coefficients. Writing equation (4) column by column as g_i = Φθ_i + n_i and assuming that the noise n_i follows a Gaussian distribution N(0, β_i^{-1} I) with zero mean and covariance β_i^{-1} I, with the noise vectors independent and identically distributed, the likelihood of θ_i, i = 1, 2, ..., n_e, is
p(g_i | θ_i, β_i^{-1}) = (2π β_i^{-1})^{-m/2} exp(-(β_i/2) ||g_i - Φθ_i||_2^2)   (5)
β_i is the precision of the noise vector, and its prior is specified as the gamma distribution
p(β_i | κ_0, υ_0) = Gamma(κ_0, υ_0)   (6)
where κ_0 = 10^{-6} and υ_0 = 10^{-6} are the shape and scale parameters of the gamma distribution.
(3) Establish the structured sparsity prior on each element of θ_i. The structured sparsity is mainly reflected in the tree structure of the wavelet coefficients: when a parent node is 0, its child nodes are very likely to be 0 as well, and vice versa:
p(θ_{s,ji} | β_{s,i}^{-1}) = (1 - π_{s,ji}) δ_0 + π_{s,ji} N(0, β_{s,i}^{-1})   (7)
where s = 1, ..., L indexes the scales of the wavelet transform, L is the total number of scales, and θ_{s,ji} is the j-th element of θ_i at scale s. δ_0 is a distribution concentrated entirely at 0. π_{s,ji} is a mixing proportion: when π_{s,ji} tends to 0, θ_{s,ji} is 0; conversely, when π_{s,ji} tends to 1, θ_{s,ji} is drawn from a Gaussian distribution. π_{s,ji} is determined by the tree structure of the wavelet transform:
π_{s,ji} = π_r if s = 1;  π_{s,ji} = π_s^0 if 2 ≤ s ≤ L and θ_{pa(s,ji)} = 0;  π_{s,ji} = π_s^1 if 2 ≤ s ≤ L and θ_{pa(s,ji)} ≠ 0   (8)
θ_{pa(s,ji)} is the parent node of θ_{s,ji}: when θ_{pa(s,ji)} = 0, π_{s,ji} is set to π_s^0, which makes θ_{s,ji} zero with high probability; when θ_{pa(s,ji)} ≠ 0, π_{s,ji} is set to π_s^1, which makes θ_{s,ji} nonzero with high probability. β_{s,i} is the precision of the Gaussian component in the prior of θ_{s,ji}. The prior distributions of π_r, π_s^0, π_s^1 and β_{s,i} are
π_r ~ Beta(ε_0^r, γ_0^r),  π_s^0 ~ Beta(ε_0^s, γ_0^s),  π_s^1 ~ Beta(ε_1^s, γ_1^s),  β_{s,i} ~ Gamma(κ_1, υ_1)   (9)
where κ_1 = 10^{-6}, υ_1 = 10^{-6}, [ε_0^r, γ_0^r] = [0.9, 0.1] × N_1, [ε_0^s, γ_0^s] = [1/n_p, 1/n_p] × N_s, and N_s denotes the number of wavelet coefficients at scale s = 1, ..., L of the wavelet transform.
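A sketch of the bookkeeping that the structured sparsity prior needs, under the assumption of the 1-D dyadic Haar ordering used in the earlier sketch: flat index 0 is the approximation coefficient, scale s holds 2^{s-1} detail coefficients, and the parent of index p ≥ 2 is p // 2. The values chosen for [ε_1^s, γ_1^s] are an assumption, as they are not legible in the published text.

    import math

    L = int(math.log2(n_p))                     # number of detail scales in the toy Haar transform

    def scale_of(p):                            # detail scale of flat index p >= 1
        return int(math.log2(p)) + 1

    def parent_of(p):                           # parent flat index pa(s, ji), or None at the root scale
        return p // 2 if p >= 2 else None

    # Hyperparameters from eqs. (6) and (9) and the settings stated below them.
    kappa0 = upsilon0 = 1e-6                    # gamma prior on the noise precisions beta_i
    kappa1 = upsilon1 = 1e-6                    # gamma prior on the signal precisions beta_{s,i}
    n_coeff = {s: 2 ** (s - 1) for s in range(1, L + 1)}       # N_s, coefficients per scale
    eps_r, gam_r = 0.9 * n_coeff[1], 0.1 * n_coeff[1]          # Beta prior of pi_r
    eps_s0 = {s: n_coeff[s] / n_p for s in range(2, L + 1)}    # Beta prior of pi_s^0
    gam_s0 = dict(eps_s0)
    eps_s1 = {s: 0.9 * n_coeff[s] for s in range(2, L + 1)}    # assumed [0.9, 0.1] x N_s for pi_s^1
    gam_s1 = {s: 0.1 * n_coeff[s] for s in range(2, L + 1)}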
(4) Sample the wavelet coefficients θ_{s,ji}. Steps (2) and (3) above provide the likelihood of θ_{s,ji}, the structured sparsity prior of θ_{s,ji}, and the prior distributions of the remaining unknown parameters. The wavelet coefficients Θ to be estimated can be obtained by the Bayesian inference
p(Θ | G) ∝ ∫ p(G | Θ, β) p(Θ | Ω) p(Ω) p(β) dΩ   (10)
where Ω = {{β_{s,i}}_{i=1:n_e, s=2:L}, π_r, {π_s^0, π_s^1}_{s=2:L}} and β = [β_1, β_2, ..., β_{n_e}]^T collects all noise precisions. Assuming that the elements of β and Ω are independent and identically distributed, p(β) and p(Ω) are
p(β) = ∏_{i=1}^{n_e} Gamma(κ_0, υ_0),  p(Ω) = ∏_{i=1}^{n_e} ∏_{s=1}^{L} Gamma(κ_1, υ_1) × Beta(ε_0^r, γ_0^r) × ∏_{s=1}^{L} Beta(ε_0^s, γ_0^s) Beta(ε_1^s, γ_1^s)   (11)
Because equation (10) is intractable, a Markov chain Monte Carlo method based on Gibbs sampling is adopted: the posterior distributions of the quantities to be estimated are approximated by drawing samples from the corresponding conditional distributions. The Gibbs update of θ_{s,ji} is
p(θ_{s,ji} | ~) = (1 - π_{s,ji}) δ_0 + π_{s,ji} N(μ_{s,ji}, β_{s,ji}^{-1}),  β_{s,ji} = β_{s,i} + β_i Φ_j^T Φ_j,  μ_{s,ji} = β_{s,ji}^{-1} β_i Φ_j^T (g_i - Σ_{k≠j} Φ_k θ_{ki})   (12)
where ~ denotes the other parameters on which the conditional distribution depends and Φ_j is the j-th column of Φ.
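A literal, simplified sketch of the Gibbs update of equation (12) for a single coefficient; the conjugate form of the posterior mean, including the factor β_i, is an assumption made here so that it matches the posterior precision β_{s,ji}.

    def sample_theta_p(p, theta, g, Phi, beta_i, beta_si, pi_sji, rng):
        """One spike-and-slab Gibbs draw for coefficient p of one column theta_i (toy 1-D ordering)."""
        phi_j = Phi[:, p]                                  # j-th column of Phi
        residual = g - Phi @ theta + phi_j * theta[p]      # g_i minus the contribution of all other coefficients
        beta_tilde = beta_si + beta_i * (phi_j @ phi_j)    # posterior precision beta_{s,ji}
        mu = beta_i * (phi_j @ residual) / beta_tilde      # posterior mean mu_{s,ji} (assumed conjugate form)
        if rng.uniform() < pi_sji:                         # slab: draw from N(mu, beta_tilde^-1)
            return rng.normal(mu, 1.0 / np.sqrt(beta_tilde))
        return 0.0                                         # spike: exact zero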
(5) Sample the noise precision β_i. The conditional distribution is the gamma distribution
p(β_i | ~) = Gamma(κ_{β_i}, υ_{β_i}),  κ_{β_i} = κ_0 + m/2,  υ_{β_i} = υ_0 + (1/2)(g_i - Φθ_i)^T (g_i - Φθ_i)   (13)
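A sketch of the noise-precision update of equation (13); note that NumPy's gamma sampler takes a scale parameter, i.e. the reciprocal of the rate.

    def sample_beta_i(g, Phi, theta, kappa0, upsilon0, m, rng):
        """Gamma-conditional draw of the noise precision beta_i, eq. (13)."""
        resid = g - Phi @ theta
        shape = kappa0 + m / 2.0
        rate = upsilon0 + 0.5 * (resid @ resid)
        return rng.gamma(shape, 1.0 / rate)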
(6) Sample the hyperparameters β_{s,i}, π_r, π_s^0 and π_s^1. The conditional distributions are
p(β_{s,i} | ~) = Gamma(κ_{β_{s,i}}, υ_{β_{s,i}}),  κ_{β_{s,i}} = κ_1 + (1/2) Σ_{j=1}^{N_s} 1(θ_{s,ji} ≠ 0),  υ_{β_{s,i}} = υ_1 + (1/2) Σ_{j=1}^{N_s} θ_{s,ji}^2   (14)
p(π_r | ~) = Beta(ε_{π_r}, γ_{π_r}),  ε_{π_r} = ε_0^r + Σ_{j=1}^{N_1} 1(θ_{1,ji} ≠ 0),  γ_{π_r} = γ_0^r + Σ_{j=1}^{N_1} 1(θ_{1,ji} = 0)   (15)
p(π_{s,i}^0 | ~) = Beta(ε_{π_{s,i}^0}, γ_{π_{s,i}^0}),  ε_{π_{s,i}^0} = ε_0^s + Σ_{j=1}^{N_s} 1(θ_{s,ji} ≠ 0, θ_{pa(s,ji)} = 0),  γ_{π_{s,i}^0} = γ_0^s + Σ_{j=1}^{N_s} 1(θ_{s,ji} = 0, θ_{pa(s,ji)} = 0)   (16)
p(π_{s,i}^1 | ~) = Beta(ε_{π_{s,i}^1}, γ_{π_{s,i}^1}),  ε_{π_{s,i}^1} = ε_1^s + Σ_{j=1}^{N_s} 1(θ_{s,ji} ≠ 0, θ_{pa(s,ji)} ≠ 0),  γ_{π_{s,i}^1} = γ_1^s + Σ_{j=1}^{N_s} 1(θ_{s,ji} = 0, θ_{pa(s,ji)} ≠ 0)   (17)
where 1(x) equals 1 when x is true and 0 otherwise.
(7) For every wavelet coefficient in Θ, steps (4), (5) and (6) are executed in turn and cyclically until the convergence or termination condition is met, which yields the final estimate of the wavelet coefficients
Θ̂ = (1/m_s) Σ_{t=1}^{m_s} Θ^{(t)}
where m_s is the total number of collected samples and Θ^{(t)} is the result of the t-th sampling sweep. The termination condition used here is the number of iterations: the sampler runs 400 burn-in iterations followed by 200 sampling iterations.
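A skeleton of the overall sweep of step (7), with the hyperparameter updates of equations (14)-(17) abbreviated to placeholders; it reuses sample_theta_p and sample_beta_i from the sketches above and is meant only to show the burn-in/collection schedule and the posterior-mean estimate, not a complete implementation.

    def run_sampler(G, Phi, n_burn=400, n_collect=200, rng=np.random.default_rng(1)):
        """Gibbs sweep over all coefficients: burn in, then average the collected samples."""
        m, n_e = G.shape
        n_p = Phi.shape[1]
        Theta = np.zeros((n_p, n_e))
        beta = np.ones(n_e)                       # noise precisions beta_i
        acc = np.zeros_like(Theta)
        for it in range(n_burn + n_collect):
            for i in range(n_e):
                for p in range(1, n_p):           # detail coefficients (index 0 = approximation)
                    pi_p = 0.5                    # placeholder for pi_{s,ji} from eq. (8)
                    beta_sp = 1.0                 # placeholder for beta_{s,i} from eq. (9)
                    Theta[p, i] = sample_theta_p(p, Theta[:, i], G[:, i],
                                                 Phi, beta[i], beta_sp, pi_p, rng)
                beta[i] = sample_beta_i(G[:, i], Phi, Theta[:, i], 1e-6, 1e-6, m, rng)
                # ... update beta_{s,i}, pi_r, pi_s^0, pi_s^1 from eqs. (14)-(17) here ...
            if it >= n_burn:
                acc += Theta
        return acc / n_collect                    # posterior-mean estimate of Theta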
4. Reconstruction of the hyperspectral data.
According to the linear mixing model (1), the reconstructed hyperspectral data are obtained as X̂ = ΨΘ̂W.
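Finally, a sketch of the reconstruction of step 4 together with the normalized mean squared error used to report accuracy; the NMSE definition ||X - X̂||_F^2 / ||X||_F^2 is an assumption, as the patent does not spell it out.

    Theta_hat = Theta                         # stand-in for the sampler's posterior-mean estimate
    H_hat = Psi @ Theta_hat                   # inverse DWT recovers the abundance matrix
    X_hat = H_hat @ W                         # reconstructed hyperspectral data, eq. (1)
    nmse = np.linalg.norm(X - X_hat) ** 2 / np.linalg.norm(X) ** 2
    print(f"NMSE = {nmse:.3e}")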

Claims (1)

1. A Bayesian hyperspectral unmixing compressive sensing method based on a structured sparsity prior, characterized by comprising the following steps:
Step 1: for a hyperspectral image X = [x_1, ..., x_{n_b}] ∈ R^{n_p×n_b}, n_p denotes the total number of pixels in the spatial domain, n_b denotes the number of bands, and x_i, i = 1, ..., n_b, is the column vector obtained by stacking the image of band i column by column; the n_b reflectance values of a pixel across the bands form the spectrum of that pixel; this spectrum is described by the linear mixing model; the linear mixing model regards any mixed spectrum as a linear combination of all endmembers, and the proportion of an endmember in a mixed spectrum is called its abundance; X is the product of the abundance matrix and the endmember matrix:
X = HW   (1)
wherein H = [h_1, ..., h_{n_e}] ∈ R^{n_p×n_e} is the abundance matrix, h_i denotes the proportion of the i-th endmember in the mixed spectrum of every pixel, n_e is the number of endmembers, and W ∈ R^{n_e×n_b} is the endmember matrix whose rows w_i are the individual endmembers;
a discrete wavelet transform is applied to the abundances:
Θ = Ψ^T H   (2)
Y = Ψ^T X = Ψ^T HW = ΘW
wherein Ψ ∈ R^{n_p×n_p} is the wavelet basis, Θ = [θ_1, ..., θ_{n_e}] denotes the wavelet coefficients of the abundances, Y denotes the wavelet coefficients of the whole data set, and θ_i, the wavelet coefficients of h_i, exhibit structured sparsity;
Step 2: reconstruct the abundance matrix from the wavelet coefficients and finally reconstruct the whole data set:
step 1: generate an m × n_p matrix from Gaussian random variables and normalize each row to obtain the random sensing matrix Φ ∈ R^{m×n_p}; sample the wavelet-transformed data with Φ to obtain the compressed data F ∈ R^{m×n_b}:
F = ΦY = ΦΨ^T X   (3)
wherein m is the length of the compressed measurement of a signal of length n_p, and in general m < n_p;
step 2: for a scene of limited extent, the influence of environmental factors on the spectra is ignored and the number of endmembers is assumed to be finite; using the ASTER spectral library, part of the spectra are selected from the library as endmembers for the specific scene, which finally determines the matrix W containing n_e endmembers;
step 3: estimate the abundance matrix from the compressed data by Bayesian inference;
(1) from equations (2) and (3), least squares is used to separate out from the compressed data the quantity G ∈ R^{m×n_e} that depends only on the wavelet coefficients of the abundance matrix:
G = FW^T(WW^T)^{-1} = ΦΘ + N   (4)
wherein N ∈ R^{m×n_e} is the noise term introduced by the least-squares step;
(2) write equation (4) column by column as g_i = Φθ_i + n_i; assuming that the noise n_i follows a Gaussian distribution N(0, β_i^{-1} I) with zero mean and covariance β_i^{-1} I, and that the noise vectors are independent and identically distributed, the likelihood of θ_i, i = 1, 2, ..., n_e, is
p(g_i | θ_i, β_i^{-1}) = (2π β_i^{-1})^{-m/2} exp(-(β_i/2) ||g_i - Φθ_i||_2^2)   (5)
β_i is the precision of the noise vector, and its prior is specified as the gamma distribution
p(β_i | κ_0, υ_0) = Gamma(κ_0, υ_0)   (6)
wherein κ_0 = 10^{-6} and υ_0 = 10^{-6} are the shape and scale parameters of the gamma distribution;
(3) establish the structured sparsity prior on each element of θ_i; the structured sparsity is mainly reflected in the tree structure of the wavelet coefficients: when a parent node is 0, its child nodes are very likely to be 0 as well, and vice versa:
p(θ_{s,ji} | β_{s,i}^{-1}) = (1 - π_{s,ji}) δ_0 + π_{s,ji} N(0, β_{s,i}^{-1})   (7)
wherein s = 1, ..., L indexes the scales of the wavelet transform, L is the total number of scales, and θ_{s,ji} is the j-th element of θ_i at scale s; δ_0 is a distribution concentrated entirely at 0; π_{s,ji} is a mixing proportion: when π_{s,ji} tends to 0, θ_{s,ji} is 0, and when π_{s,ji} tends to 1, θ_{s,ji} is drawn from a Gaussian distribution; π_{s,ji} is determined by the tree structure of the wavelet transform:
π_{s,ji} = π_r if s = 1;  π_{s,ji} = π_s^0 if 2 ≤ s ≤ L and θ_{pa(s,ji)} = 0;  π_{s,ji} = π_s^1 if 2 ≤ s ≤ L and θ_{pa(s,ji)} ≠ 0   (8)
θ_{pa(s,ji)} is the parent node of θ_{s,ji}: when θ_{pa(s,ji)} = 0, π_{s,ji} is set to π_s^0, which makes θ_{s,ji} zero with high probability; when θ_{pa(s,ji)} ≠ 0, π_{s,ji} is set to π_s^1, which makes θ_{s,ji} nonzero with high probability; β_{s,i} is the precision of the Gaussian component in the prior of θ_{s,ji}; the prior distributions of π_r, π_s^0, π_s^1 and β_{s,i} are
π_r ~ Beta(ε_0^r, γ_0^r),  π_s^0 ~ Beta(ε_0^s, γ_0^s),  π_s^1 ~ Beta(ε_1^s, γ_1^s),  β_{s,i} ~ Gamma(κ_1, υ_1)   (9)
wherein κ_1 = 10^{-6}, υ_1 = 10^{-6}, [ε_0^r, γ_0^r] = [0.9, 0.1] × N_1, [ε_0^s, γ_0^s] = [1/n_p, 1/n_p] × N_s, and N_s denotes the number of wavelet coefficients at scale s = 1, ..., L of the wavelet transform;
(4) sample the wavelet coefficients θ_{s,ji}; steps (2) and (3) above provide the likelihood of θ_{s,ji}, the structured sparsity prior of θ_{s,ji}, and the prior distributions of the remaining unknown parameters; the wavelet coefficients Θ to be estimated are obtained by the Bayesian inference
p(Θ | G) ∝ ∫ p(G | Θ, β) p(Θ | Ω) p(Ω) p(β) dΩ   (10)
wherein Ω = {{β_{s,i}}_{i=1:n_e, s=2:L}, π_r, {π_s^0, π_s^1}_{s=2:L}} and β = [β_1, β_2, ..., β_{n_e}]^T collects all noise precisions; assuming that the elements of β and Ω are independent and identically distributed, p(β) and p(Ω) are
p(β) = ∏_{i=1}^{n_e} Gamma(κ_0, υ_0),  p(Ω) = ∏_{i=1}^{n_e} ∏_{s=1}^{L} Gamma(κ_1, υ_1) × Beta(ε_0^r, γ_0^r) × ∏_{s=1}^{L} Beta(ε_0^s, γ_0^s) Beta(ε_1^s, γ_1^s)   (11)
a Markov chain Monte Carlo method based on Gibbs sampling is adopted, and the posterior distributions of the quantities to be estimated are approximated by drawing samples from the corresponding conditional distributions; the Gibbs update of θ_{s,ji} is
p(θ_{s,ji} | ~) = (1 - π_{s,ji}) δ_0 + π_{s,ji} N(μ_{s,ji}, β_{s,ji}^{-1}),  β_{s,ji} = β_{s,i} + β_i Φ_j^T Φ_j,  μ_{s,ji} = β_{s,ji}^{-1} β_i Φ_j^T (g_i - Σ_{k≠j} Φ_k θ_{ki})   (12)
wherein ~ denotes the other parameters on which the conditional distribution depends and Φ_j is the j-th column of Φ;
(5) sample the noise precision β_i; the conditional distribution is the gamma distribution
p(β_i | ~) = Gamma(κ_{β_i}, υ_{β_i}),  κ_{β_i} = κ_0 + m/2,  υ_{β_i} = υ_0 + (1/2)(g_i - Φθ_i)^T (g_i - Φθ_i)   (13)
(6) sample the hyperparameters β_{s,i}, π_r, π_s^0 and π_s^1; the conditional distributions are
p(β_{s,i} | ~) = Gamma(κ_{β_{s,i}}, υ_{β_{s,i}}),  κ_{β_{s,i}} = κ_1 + (1/2) Σ_{j=1}^{N_s} 1(θ_{s,ji} ≠ 0),  υ_{β_{s,i}} = υ_1 + (1/2) Σ_{j=1}^{N_s} θ_{s,ji}^2   (14)
p(π_r | ~) = Beta(ε_{π_r}, γ_{π_r}),  ε_{π_r} = ε_0^r + Σ_{j=1}^{N_1} 1(θ_{1,ji} ≠ 0),  γ_{π_r} = γ_0^r + Σ_{j=1}^{N_1} 1(θ_{1,ji} = 0)   (15)
p(π_{s,i}^0 | ~) = Beta(ε_{π_{s,i}^0}, γ_{π_{s,i}^0}),  ε_{π_{s,i}^0} = ε_0^s + Σ_{j=1}^{N_s} 1(θ_{s,ji} ≠ 0, θ_{pa(s,ji)} = 0),  γ_{π_{s,i}^0} = γ_0^s + Σ_{j=1}^{N_s} 1(θ_{s,ji} = 0, θ_{pa(s,ji)} = 0)   (16)
p(π_{s,i}^1 | ~) = Beta(ε_{π_{s,i}^1}, γ_{π_{s,i}^1}),  ε_{π_{s,i}^1} = ε_1^s + Σ_{j=1}^{N_s} 1(θ_{s,ji} ≠ 0, θ_{pa(s,ji)} ≠ 0),  γ_{π_{s,i}^1} = γ_1^s + Σ_{j=1}^{N_s} 1(θ_{s,ji} = 0, θ_{pa(s,ji)} ≠ 0)   (17)
wherein 1(x) equals 1 when x is true and 0 otherwise;
(7) for every wavelet coefficient in Θ, steps (4), (5) and (6) are executed in turn and cyclically until the convergence or termination condition is met, which yields the final estimate of the wavelet coefficients Θ̂ = (1/m_s) Σ_{t=1}^{m_s} Θ^{(t)}, wherein m_s is the total number of collected samples and Θ^{(t)} is the result of the t-th sampling sweep; the termination condition used is the number of iterations, with 400 burn-in iterations of the sampler followed by 200 sampling iterations;
Step 4: according to the linear mixing model (1), obtain the reconstructed hyperspectral data X̂ = ΨΘ̂W.
CN201310713709.1A 2013-12-20 2013-12-20 Bayesian hyperspectral unmixing compressive sensing method based on structured sparsity prior Active CN103745487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310713709.1A CN103745487B (en) 2013-12-20 2013-12-20 Bayesian hyperspectral unmixing compressive sensing method based on structured sparsity prior

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310713709.1A CN103745487B (en) 2013-12-20 2013-12-20 Bayesian hyperspectral unmixing compressive sensing method based on structured sparsity prior

Publications (2)

Publication Number Publication Date
CN103745487A true CN103745487A (en) 2014-04-23
CN103745487B CN103745487B (en) 2016-07-06

Family

ID=50502502

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310713709.1A Active CN103745487B (en) 2013-12-20 2013-12-20 Bayesian hyperspectral unmixing compressive sensing method based on structured sparsity prior

Country Status (1)

Country Link
CN (1) CN103745487B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103969634A (en) * 2014-04-29 2014-08-06 西安电子科技大学 Target attribute characteristic extraction method based on complete polarization attribute scattering center model
CN104463223A (en) * 2014-12-22 2015-03-25 西安电子科技大学 Hyperspectral image group sparse demixing method based on empty spectral information abundance restraint
CN104734724A (en) * 2015-03-16 2015-06-24 西北工业大学 Hyperspectral image compressed sensing method based on heavy weighting laplacian sparse prior
CN105427351A (en) * 2015-11-02 2016-03-23 西北工业大学 High spectral image compression sensing method based on manifold structuring sparse prior
CN106227705A (en) * 2016-09-20 2016-12-14 北京邮电大学 A kind of method and device of data collection
CN104091368B (en) * 2014-07-22 2017-01-11 西北工业大学 Hyperspectral demixing compressed sensing method based on spatial-spectral three-dimensional sparse prior
CN106504214A (en) * 2016-10-31 2017-03-15 西京学院 Wavelet transformation and the high spectrum image Banded improvement removing method of local interpolation fusion
CN106842172A (en) * 2016-12-22 2017-06-13 西北工业大学 A kind of submarine target structural sparse feature extracting method
CN107886113A (en) * 2017-10-27 2018-04-06 成都中星世通电子科技有限公司 A kind of extraction of electromagnetic spectrum noise and filtering algorithm based on Chi-square Test
CN112348123A (en) * 2020-12-08 2021-02-09 武汉卓尔数字传媒科技有限公司 User clustering method and device and electronic equipment
CN113435366A (en) * 2021-06-30 2021-09-24 南京理工大学 Multi-time hyperspectral image Bayesian unmixing method for wavelet domain

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101359049B (en) * 2007-08-01 2012-05-23 北京师范大学 Remote sensing image fusion method
CN103024398A (en) * 2013-01-15 2013-04-03 山东大学 Sparse matrix based compressed sensing processing method for hyperspectral remote sensing images
CN103247034B (en) * 2013-05-08 2016-01-20 中国科学院光电研究院 A kind of compressed sensing high spectrum image reconstructing method based on sparse spectrum dictionary
CN103400341B (en) * 2013-07-03 2016-06-29 西安电子科技大学 Method based on the empty spectral domain integrated restoration high-spectral data of compressed sensing

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103969634B (en) * 2014-04-29 2016-08-24 西安电子科技大学 Objective attribute target attribute feature extracting method based on complete polarization attribute scattering center model
CN103969634A (en) * 2014-04-29 2014-08-06 西安电子科技大学 Target attribute characteristic extraction method based on complete polarization attribute scattering center model
CN104091368B (en) * 2014-07-22 2017-01-11 西北工业大学 Hyperspectral demixing compressed sensing method based on spatial-spectral three-dimensional sparse prior
CN104463223A (en) * 2014-12-22 2015-03-25 西安电子科技大学 Hyperspectral image group sparse demixing method based on empty spectral information abundance restraint
CN104734724A (en) * 2015-03-16 2015-06-24 西北工业大学 Hyperspectral image compressed sensing method based on heavy weighting laplacian sparse prior
CN104734724B (en) * 2015-03-16 2017-11-24 西北工业大学 Based on the Compression of hyperspectral images cognitive method for weighting Laplce's sparse prior again
CN105427351B (en) * 2015-11-02 2018-12-14 西北工业大学 Compression of hyperspectral images cognitive method based on manifold structure sparse prior
CN105427351A (en) * 2015-11-02 2016-03-23 西北工业大学 High spectral image compression sensing method based on manifold structuring sparse prior
CN106227705A (en) * 2016-09-20 2016-12-14 北京邮电大学 A kind of method and device of data collection
CN106504214A (en) * 2016-10-31 2017-03-15 西京学院 Wavelet transformation and the high spectrum image Banded improvement removing method of local interpolation fusion
CN106504214B (en) * 2016-10-31 2019-03-05 西京学院 The high spectrum image Banded improvement removing method of wavelet transformation and local interpolation fusion
CN106842172A (en) * 2016-12-22 2017-06-13 西北工业大学 A kind of submarine target structural sparse feature extracting method
CN106842172B (en) * 2016-12-22 2019-02-26 西北工业大学 A kind of submarine target structural sparse feature extracting method
CN107886113A (en) * 2017-10-27 2018-04-06 成都中星世通电子科技有限公司 A kind of extraction of electromagnetic spectrum noise and filtering algorithm based on Chi-square Test
CN107886113B (en) * 2017-10-27 2021-05-11 成都中星世通电子科技有限公司 Electromagnetic spectrum noise extraction and filtering method based on chi-square test
CN112348123A (en) * 2020-12-08 2021-02-09 武汉卓尔数字传媒科技有限公司 User clustering method and device and electronic equipment
CN113435366A (en) * 2021-06-30 2021-09-24 南京理工大学 Multi-time hyperspectral image Bayesian unmixing method for wavelet domain

Also Published As

Publication number Publication date
CN103745487B (en) 2016-07-06

Similar Documents

Publication Publication Date Title
CN103745487A (en) Bayes high-spectral unmixing compressive sensing method based on structured sparsity prior
US10410377B2 (en) Compressing a signal that represents a physical attribute
CN102708576B (en) Method for reconstructing partitioned images by compressive sensing on the basis of structural dictionaries
Lindstrom et al. Reducing disk storage of full-3D seismic waveform tomography (F3DT) through lossy online compression
CN103413292B (en) Based on the hyperspectral image nonlinear abundance estimation method of constraint least square
CN111598786B (en) Hyperspectral image unmixing method based on depth denoising self-coding network
Poltoratski Spectral gaps for sets and measures
CN102722866A (en) Compressive sensing method based on principal component analysis
Wang et al. Hybrid image denoising method based on non‐subsampled contourlet transform and bandelet transform
CN107633476A (en) A kind of watermark insertion and extracting method based on LWT SVD DCT algorithms
Yao et al. Research of incoherence rotated chaotic measurement matrix in compressed sensing
CN116626765A (en) Multichannel seismic deconvolution method based on two-dimensional K-SVD and convolution sparse coding
Lan et al. Seismic data reconstruction based on low dimensional manifold model
CN104113346A (en) Method of constructing measurement matrix based on cascade chaotic sequence
CN108734672A (en) Based on high-spectral data solution mixing method library of spectra cutting and cooperate with sparse regression
CN103957011A (en) Restoration method for compressed sensing signal with noise based on threshold value shrinkage iteration
CN105701845A (en) Hyperspectral image compression perception reconstruction method cooperating sparse measurement and 3D TV model
Hou et al. SAR image Bayesian compressive sensing exploiting the interscale and intrascale dependencies in directional lifting wavelet transform domain
Douglas et al. The generalized DMPK equation revisited: towards a systematic derivation
Xian-chuan et al. Remote sensing image fusion based on integer wavelet transformation and ordered nonnegative independent component analysis
Melikyan et al. Integrable theories and generalized graded Maillet algebras
Zhang et al. Lossless and lossy remote sensing image encryption-compression algorithm based on DeepLabv3+ and 2D CS
Xu et al. Volumetric data reduction in a compressed sensing framework
Li et al. Compression of hyper-spectral images using an accelerated nonnegative tensor decomposition
CN105184832A (en) Image reconstruction design method improving noise variance estimation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant