CN106485014A - A robust one-bit Bayesian compressed sensing method - Google Patents

A robust one-bit Bayesian compressed sensing method

Info

Publication number
CN106485014A
CN106485014A (application CN201610914950.4A)
Authority
CN
China
Prior art keywords
expectation
exp
regard
sparse
error
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610914950.4A
Other languages
Chinese (zh)
Inventor
方俊
崔星星
万千
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201610914950.4A
Publication of CN106485014A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Complex Calculations (AREA)

Abstract

The invention belongs to the technical field of signal detection and estimation, and more particularly relates to a robust Bayesian compressed sensing method that uses a variational expectation-maximization (V-EM) algorithm to identify sign-flip errors and estimate a sparse signal simultaneously. The present invention provides a robust one-bit Bayesian compressed sensing method. The invention models the sign-flip errors as a perturbation of the unquantized observations by a sparse noise vector, and imposes a Gaussian-inverse-Gamma hierarchical prior on that sparse noise vector to promote sparsity; by applying Bayesian theory, the joint estimation of the sign-flip errors and the sparse signal can then be completed. Through this joint estimation, both the number and the positions of the sign-flip errors can be determined accurately.

Description

A robust one-bit Bayesian compressed sensing method
Technical field
The invention belongs to the technical field of signal detection and estimation, and in particular relates to a robust Bayesian compressed sensing method that uses a variational expectation-maximization (V-EM) algorithm to identify sign-flip errors and estimate a sparse signal simultaneously.
Background technology
Traditional one-bit compressed sensing algorithms all assume that the one-bit measurements are error-free. However, because noise is inevitably introduced during signal acquisition and transmission, some bits may be flipped to the opposite of their true state, which causes considerable performance loss for these traditional algorithms.
Some existing algorithms do account for the sign-flip problem, such as the adaptive outlier pursuit algorithm and the noise-adaptive renormalized fixed-point iteration algorithm; these algorithms can locate sign-flip errors automatically. However, they all require the number of flipped bits to be known, which cannot be known in advance, so their practicality is limited. In the present invention, the sign-flip errors are modeled as a perturbation of the unquantized observations by a sparse noise vector, so that Bayesian theory can be used to determine the flip errors and the sparse signal accurately.
Content of the invention
It is an object of the invention to provide a robust one-bit Bayesian compressed sensing method. The invention models the sign-flip errors as a perturbation of the unquantized observations by a sparse noise vector, and imposes a Gaussian-inverse-Gamma hierarchical prior on that vector to promote sparsity; using Bayesian theory, the joint estimation of the sign-flip errors and the sparse signal can then be completed. Through this joint estimation, both the number and the positions of the sign-flip errors can be determined accurately.
For convenience of description, the system model and terminology used by the present invention are introduced first:
In the one-bit quantization problem, let t = sign(y) = sign(Ax + w), where t ∈ {0,1}^m is the binary measurement vector, y ∈ R^m is the unquantized measurement vector, and w ∈ R^m is a sparse noise vector, i.e. it has only a few nonzero coefficients. Here sign(·) applies the sign function elementwise: if an element is greater than 0 it returns 1, otherwise it returns 0.
The K-sparse signal x ∈ R^n is generated randomly, with its support drawn uniformly at random. Each entry of the measurement matrix A ∈ R^{m×n} is drawn from a zero-mean unit-variance Gaussian distribution, and each of its columns is normalized. The L sign-flip errors are likewise placed uniformly at random; in the experiments, m = 200, n = 100, K = 10 and L = 10.
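As an illustration, the simulation setup just described (m = 200, n = 100, K = 10, L = 10, a Gaussian sensing matrix with normalized columns, uniformly random support and flip positions) can be sketched as follows; the function name and the use of NumPy are our own, not part of the patent:

```python
import numpy as np

def generate_one_bit_data(m=200, n=100, K=10, L=10, seed=0):
    rng = np.random.default_rng(seed)
    # Gaussian sensing matrix with unit-norm columns
    A = rng.standard_normal((m, n))
    A /= np.linalg.norm(A, axis=0)
    # K-sparse signal with a uniformly random support
    x = np.zeros(n)
    support = rng.choice(n, K, replace=False)
    x[support] = rng.standard_normal(K)
    # Noiseless 1-bit measurements: t_i = 1 if (Ax)_i > 0, else 0
    t = (A @ x > 0).astype(int)
    # Flip L uniformly chosen bits to model sign-inversion errors
    flips = rng.choice(m, L, replace=False)
    t[flips] = 1 - t[flips]
    return A, x, t, flips
```

The returned `flips` index set plays the role of the L sign-flip errors whose number and positions the algorithm must recover.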
Sensing matrix: used to sample the signal linearly and reduce its dimension, mapping an n-dimensional signal to an m-dimensional space, usually with m << n.
Sparsity: a signal can be expressed as a linear combination of a few elements of a basis or dictionary. When this representation is exact, the signal is said to be sparse. The information contained in most high-dimensional signals can be captured with far fewer degrees of freedom than their dimension; sparse signal models give this fact a mathematical formulation.
Sparse representation: if a signal can be expressed linearly by a few elements of a basis, that basis is called a sparse basis, and the signal's expansion in the sparse basis is its sparse representation.
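A toy illustration of these two definitions (a hypothetical example of ours, not from the patent): a signal that looks dense in its raw coordinates but is exactly 3-sparse in an orthonormal basis Psi, whose coefficients are recovered by projection.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
# A random orthonormal basis (the "sparse basis")
Psi, _ = np.linalg.qr(rng.standard_normal((n, n)))
# Sparse representation: only 3 nonzero coefficients
s = np.zeros(n)
s[[2, 7, 11]] = [1.0, -2.0, 0.5]
# The signal itself is a dense-looking linear combination of basis elements
f = Psi @ s
# Projecting back onto the basis recovers the sparse representation exactly
s_rec = Psi.T @ f
```

Because Psi is orthonormal, `Psi.T @ f` returns exactly the 3-sparse coefficient vector, which is the "sparse representation" of `f` in that basis.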
A robust one-bit Bayesian compressed sensing method comprises the following steps:
S1. Construct a sensing matrix A with the random-sampling property, sample the signal to obtain y, and set the error threshold ε;
S2. Construct the priors of the parameters and the posterior distribution:
The distribution of t given y is p(t|y) = Π_{i=1}^m σ(y_i)^{t_i}(1 - σ(y_i))^{1-t_i}, where σ(y) = 1/(1 + exp(-y)) is the logistic function, which is differentiable.
The Gaussian-inverse-Gamma priors of x and w are p(x|α) = Π_i N(x_i; 0, α_i^{-1}) with p(α) = Π_i Gamma(α_i; a, b), and p(w|β) = Π_i N(w_i; 0, β_i^{-1}) with p(β) = Π_i Gamma(β_i; a, b).
S3. Construct the objective function, specifically:
S31. Introduce intermediate variables δ; by the Jaakkola-Jordan inequality, σ(z) ≥ σ(δ)exp((z - δ)/2 - λ(δ)(z² - δ²)),
where z = (2t - 1)⊙y,
λ(δ) = tanh(δ/2)/(4δ), and tanh(δ) = (exp(δ) - exp(-δ))/(exp(δ) + exp(-δ));
S32. Construct the surrogate function F(t, x, w, δ) = Π_{i=1}^m σ(δ_i)exp((z_i - δ_i)/2 - λ(δ_i)(z_i² - δ_i²)), a lower bound on p(t|y);
S33. Let θ = {x, α, w, β} and build the objective function G(t, θ, δ) = F(t, x, w, δ)p(x|α)p(α)p(w|β)p(β);
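As a numerical sanity check of the Jaakkola-Jordan bound used in S31 (a sketch with our own function names, not the patent's code), the following verifies that the surrogate never exceeds the logistic function and is tight at δ = z:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lam(delta):
    # lambda(delta) = tanh(delta/2) / (4*delta), with its limit 1/8 at delta -> 0
    delta = np.asarray(delta, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        val = np.tanh(delta / 2.0) / (4.0 * delta)
    return np.where(np.abs(delta) < 1e-8, 0.125, val)

def jj_lower_bound(z, delta):
    # sigma(z) >= sigma(delta) * exp((z - delta)/2 - lambda(delta)*(z^2 - delta^2))
    return sigmoid(delta) * np.exp((z - delta) / 2.0 - lam(delta) * (z**2 - delta**2))
```

Setting δ = z makes the exponent vanish, so the bound touches σ(z) there; this is exactly what the δ update in S45 exploits.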
S4. Let q(θ) = q_x(x)q_α(α)q_w(w)q_β(β) and update each factor with the V-EM algorithm:
S41. Update q_x(x): q_x(x) = N(μ_x, Φ_x), where Λ_α = diag(α_1, ..., α_n) and
Λ_δ = diag(λ(δ_1), ..., λ(δ_m)); the mean and covariance of x are μ_x = Φ_x Aᵀ(t - (1/2)1 - 2Λ_δμ_w) and Φ_x = (Λ_<α> + 2AᵀΛ_δA)^{-1}, with Λ_<α> = diag(<α_1>, ..., <α_n>), where <α_i> is the expectation of α_i under the distribution q_α(α);
S42. Update q_w(w): q_w(w) = N(μ_w, Φ_w), where Λ_<β> = diag(<β_1>, ..., <β_m>) and <β_i> is the expectation of β_i under q_β(β); the mean and covariance of w are μ_w = Φ_w(t - (1/2)1 - 2Λ_δAμ_x) and Φ_w = (Λ_<β> + 2Λ_δ)^{-1};
S43. Update q_α(α): with <x_i²> = μ_{x,i}² + Φ_{x,ii} the expectation of x_i² under q_x(x), α obeys the Gamma distribution q_α(α) = Π_i Gamma(α_i; a + 1/2, b + <x_i²>/2); the expectation of α_i is then <α_i> = (a + 1/2)/(b + <x_i²>/2);
S44. Update q_β(β): with <w_i²> = μ_{w,i}² + Φ_{w,ii} the expectation of w_i² under q_w(w), β obeys the Gamma distribution q_β(β) = Π_i Gamma(β_i; a + 1/2, b + <w_i²>/2); the expectation of β_i is then <β_i> = (a + 1/2)/(b + <w_i²>/2);
S45. Differentiate G with respect to δ_i and set the derivative to 0, which gives δ_i² = <z_i²> = a_iᵀ<xxᵀ>a_i + 2a_iᵀμ_xμ_{w,i} + <w_i²>,
where <xxᵀ> = μ_xμ_xᵀ + Φ_x and a_iᵀ is the i-th row of A.
S6. If the above iterative process satisfies the termination condition ‖μ_x^(new) - μ_x^(old)‖ ≤ ε, stop the iteration; otherwise return to S4 for the next iteration.
The beneficial effects of the invention are as follows:
The present invention requires neither prior knowledge of the number of sign flips nor of the sparsity level of the signal, and needs no other design parameters. When the number of bit-flip errors is large, the present invention has a greater performance advantage over other algorithms.
Description of the drawings
Fig. 1 shows, for each algorithm, the relation between the number of bit-flip errors L and the NMSE and Hamming error.
Fig. 2 shows, for each algorithm, the relation between the number of measurements m and the NMSE and Hamming error.
Specific embodiment
The present invention is described in further detail below with reference to a specific embodiment.
S1. Construct a sensing matrix A with the random-sampling property, sample the signal to obtain y, and set the error threshold ε;
S2. Construct the priors of the parameters and the posterior distribution:
The distribution of t given y is p(t|y) = Π_{i=1}^m σ(y_i)^{t_i}(1 - σ(y_i))^{1-t_i}, where σ(y) = 1/(1 + exp(-y)) is the logistic function, which is differentiable.
The Gaussian-inverse-Gamma priors of x and w are p(x|α) = Π_i N(x_i; 0, α_i^{-1}) with p(α) = Π_i Gamma(α_i; a, b), and p(w|β) = Π_i N(w_i; 0, β_i^{-1}) with p(β) = Π_i Gamma(β_i; a, b).
S3. Construct the objective function, specifically:
S31. Introduce intermediate variables δ; by the Jaakkola-Jordan inequality, σ(z) ≥ σ(δ)exp((z - δ)/2 - λ(δ)(z² - δ²)), where z = (2t - 1)⊙y, λ(δ) = tanh(δ/2)/(4δ), and tanh(δ) = (exp(δ) - exp(-δ))/(exp(δ) + exp(-δ));
S32. Construct the surrogate function F(t, x, w, δ) = Π_{i=1}^m σ(δ_i)exp((z_i - δ_i)/2 - λ(δ_i)(z_i² - δ_i²)), a lower bound on p(t|y);
S33. Let θ = {x, α, w, β} and build the objective function G(t, θ, δ) = F(t, x, w, δ)p(x|α)p(α)p(w|β)p(β);
S4. Let q(θ) = q_x(x)q_α(α)q_w(w)q_β(β) and update each factor with the V-EM algorithm:
S41. Update q_x(x): q_x(x) = N(μ_x, Φ_x), where Λ_α = diag(α_1, ..., α_n) and Λ_δ = diag(λ(δ_1), ..., λ(δ_m)); the mean and covariance of x are μ_x = Φ_x Aᵀ(t - (1/2)1 - 2Λ_δμ_w) and Φ_x = (Λ_<α> + 2AᵀΛ_δA)^{-1}, with Λ_<α> = diag(<α_1>, ..., <α_n>), where <α_i> is the expectation of α_i under the distribution q_α(α);
S42. Update q_w(w): q_w(w) = N(μ_w, Φ_w), where Λ_<β> = diag(<β_1>, ..., <β_m>) and <β_i> is the expectation of β_i under q_β(β); the mean and covariance of w are μ_w = Φ_w(t - (1/2)1 - 2Λ_δAμ_x) and Φ_w = (Λ_<β> + 2Λ_δ)^{-1};
S43. Update q_α(α): with <x_i²> = μ_{x,i}² + Φ_{x,ii} the expectation of x_i² under q_x(x), α obeys the Gamma distribution q_α(α) = Π_i Gamma(α_i; a + 1/2, b + <x_i²>/2); the expectation of α_i is then <α_i> = (a + 1/2)/(b + <x_i²>/2);
S44. Update q_β(β): with <w_i²> = μ_{w,i}² + Φ_{w,ii} the expectation of w_i² under q_w(w), β obeys the Gamma distribution q_β(β) = Π_i Gamma(β_i; a + 1/2, b + <w_i²>/2); the expectation of β_i is then <β_i> = (a + 1/2)/(b + <w_i²>/2);
S45. Differentiate G with respect to δ_i and set the derivative to 0, which gives δ_i² = <z_i²> = a_iᵀ<xxᵀ>a_i + 2a_iᵀμ_xμ_{w,i} + <w_i²>, where <xxᵀ> = μ_xμ_xᵀ + Φ_x and a_iᵀ is the i-th row of A.
S6. If the above iterative process satisfies the termination condition ‖μ_x^(new) - μ_x^(old)‖ ≤ ε, stop the iteration; otherwise return to S4 for the next iteration.
Through the above operations, the joint estimation of the sparse signal and the bit-flip errors is completed.
The performance of the present invention is further verified below by comparing the algorithm performance of the inventive method with other related techniques.
Two metrics are used to measure algorithm performance.
One measures the recovery accuracy of the sparse signal and is called the normalized mean squared error (NMSE); with the signals scaled to unit norm (one-bit measurements lose amplitude information), NMSE = ‖x̂/‖x̂‖ - x/‖x‖‖².
The other measures the size of the bit-flip error and is called the Hamming error; it is the fraction of measurements whose re-quantized signs disagree with the observed bits, Hamming error = (1/m)Σ_{i=1}^m 1[sign(a_iᵀx̂) ≠ t_i].
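The two metrics can be sketched as follows; the unit-norm convention for the NMSE is an assumption on our part, since the exact normalization used for the figures is not preserved in the text:

```python
import numpy as np

def nmse(x_true, x_hat):
    # Normalized mean squared error; both signals are unit-normalized first,
    # because 1-bit measurements do not preserve the signal's scale.
    u = x_true / np.linalg.norm(x_true)
    v = x_hat / np.linalg.norm(x_hat)
    return np.linalg.norm(u - v)**2

def hamming_error(A, x_hat, t):
    # Fraction of measurements whose re-quantized sign disagrees with t
    t_hat = (A @ x_hat > 0).astype(int)
    return np.mean(t_hat != t)
```

Note that `nmse` is invariant to positive rescaling of the estimate, which matches the scale ambiguity inherent to sign-only measurements.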
In Fig. 1, m = 200, n = 100, K = 10. As can be seen from the figure, compared with the algorithms that ignore bit-flip errors (BIHT, 1-BCS), the present invention has a clear performance advantage; and compared with the algorithms that do account for bit-flip errors, this algorithm becomes more robust as the number of bit flips increases. In Fig. 2, n = 100, K = 10, L = 10, i.e. there are 10 bit-flip errors; the figure shows that, as the number of measurements m increases, the performance of the present algorithm improves more than that of the other algorithms.
In summary, the inventive method is a one-bit quantization error estimation method based on compressed sensing. It models the bit-flip errors as a sparse signal and, through a carefully designed iterative algorithm that updates all relevant parameters, jointly estimates the sparse signal and the bit-flip errors without prior knowledge of the number or positions of the flips. When the number of bit-flip errors is large, the present invention is more robust than other algorithms and shows a greater performance advantage.

Claims (1)

1. A robust one-bit Bayesian compressed sensing method, characterized in that it comprises the following steps:
S1. Construct a sensing matrix A with the random-sampling property, sample the signal to obtain y, and set the error threshold ε;
S2. Construct the priors of the parameters and the posterior distribution:
the distribution of t given y is p(t|y) = Π_{i=1}^m σ(y_i)^{t_i}(1 - σ(y_i))^{1-t_i}, where σ(y) = 1/(1 + exp(-y)) is the logistic function, which is differentiable;
the Gaussian-inverse-Gamma priors of x and w are p(x|α) = Π_i N(x_i; 0, α_i^{-1}) with p(α) = Π_i Gamma(α_i; a, b), and p(w|β) = Π_i N(w_i; 0, β_i^{-1}) with p(β) = Π_i Gamma(β_i; a, b);
S3. Construct the objective function, specifically:
S31. introduce intermediate variables δ; by the Jaakkola-Jordan inequality, σ(z) ≥ σ(δ)exp((z - δ)/2 - λ(δ)(z² - δ²)), where z = (2t - 1)⊙y, λ(δ) = tanh(δ/2)/(4δ), and tanh(δ) = (exp(δ) - exp(-δ))/(exp(δ) + exp(-δ));
S32. construct the surrogate function F(t, x, w, δ) = Π_{i=1}^m σ(δ_i)exp((z_i - δ_i)/2 - λ(δ_i)(z_i² - δ_i²));
S33. let θ = {x, α, w, β} and build the objective function G(t, θ, δ) = F(t, x, w, δ)p(x|α)p(α)p(w|β)p(β);
S4. Let q(θ) = q_x(x)q_α(α)q_w(w)q_β(β) and update each factor with the V-EM algorithm:
S41. update q_x(x): q_x(x) = N(μ_x, Φ_x), where Λ_α = diag(α_1, ..., α_n) and Λ_δ = diag(λ(δ_1), ..., λ(δ_m)); the mean and covariance of x are μ_x = Φ_x Aᵀ(t - (1/2)1 - 2Λ_δμ_w) and Φ_x = (Λ_<α> + 2AᵀΛ_δA)^{-1}, with Λ_<α> = diag(<α_1>, ..., <α_n>), where <α_i> is the expectation of α_i under the distribution q_α(α);
S42. update q_w(w): q_w(w) = N(μ_w, Φ_w), where Λ_<β> = diag(<β_1>, ..., <β_m>) and <β_i> is the expectation of β_i under q_β(β); the mean and covariance of w are μ_w = Φ_w(t - (1/2)1 - 2Λ_δAμ_x) and Φ_w = (Λ_<β> + 2Λ_δ)^{-1};
S43. update q_α(α): with <x_i²> = μ_{x,i}² + Φ_{x,ii} the expectation of x_i² under q_x(x), α obeys the Gamma distribution q_α(α) = Π_i Gamma(α_i; a + 1/2, b + <x_i²>/2); the expectation of α_i is then <α_i> = (a + 1/2)/(b + <x_i²>/2);
S44. update q_β(β): with <w_i²> = μ_{w,i}² + Φ_{w,ii} the expectation of w_i² under q_w(w), β obeys the Gamma distribution q_β(β) = Π_i Gamma(β_i; a + 1/2, b + <w_i²>/2); the expectation of β_i is then <β_i> = (a + 1/2)/(b + <w_i²>/2);
S45. differentiate G with respect to δ_i and set the derivative to 0, which gives δ_i² = <z_i²> = a_iᵀ<xxᵀ>a_i + 2a_iᵀμ_xμ_{w,i} + <w_i²>, where <xxᵀ> = μ_xμ_xᵀ + Φ_x and a_iᵀ is the i-th row of A;
S6. if the above iterative process satisfies the termination condition ‖μ_x^(new) - μ_x^(old)‖ ≤ ε, stop the iteration; otherwise return to S4 for the next iteration.
CN201610914950.4A 2016-10-20 2016-10-20 A robust one-bit Bayesian compressed sensing method Pending CN106485014A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610914950.4A CN106485014A (en) 2016-10-20 2016-10-20 A robust one-bit Bayesian compressed sensing method


Publications (1)

Publication Number Publication Date
CN106485014A true CN106485014A (en) 2017-03-08

Family

ID=58269788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610914950.4A Pending CN106485014A (en) 2016-10-20 2016-10-20 A kind of 1 bit compression Bayes's cognitive method of strong robustness

Country Status (1)

Country Link
CN (1) CN106485014A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109089123A (en) * 2018-08-23 2018-12-25 江苏大学 Compressed sensing multi-description coding-decoding method based on the quantization of 1 bit vectors
CN109089123B (en) * 2018-08-23 2021-08-03 江苏大学 Compressed sensing multi-description coding and decoding method based on 1-bit vector quantization
CN111697974A (en) * 2020-06-19 2020-09-22 广东工业大学 Compressed sensing reconstruction method and device
CN111697974B (en) * 2020-06-19 2021-04-16 广东工业大学 Compressed sensing reconstruction method and device
CN113762069A (en) * 2021-07-23 2021-12-07 西安交通大学 Long sequence robust enhancement rapid trend filtering method under any noise
CN113762069B (en) * 2021-07-23 2022-12-09 西安交通大学 Long sequence robust enhancement rapid trend filtering method under any noise


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20170308)