CN112528479A - Robust self-adaptive smoothing method based on Gibbs sampler - Google Patents
- Publication number: CN112528479A (application CN202011399115.4A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F2119/00—Details relating to the type or aim of the analysis or the optimisation
- G06F2119/10—Noise analysis or noise optimisation
Abstract
The invention belongs to the technical field of state estimation, and particularly relates to a robust self-adaptive smoothing method. The robust self-adaptive smoothing method based on a Gibbs sampler first models the process noise and observation noise of a linear state space model as Student's t distributions whose noise parameters are completely unknown. By introducing auxiliary parameters, the process noise and observation noise satisfying the Student's t distribution are decomposed into a combination of Gaussian and Gamma distributions. The unknown noise parameters are also treated as random variables, and their prior distributions are modeled as the conjugate priors corresponding to the parameters. Under the framework of a Gibbs sampler, the unknown noise parameters, the auxiliary parameters, and the system state are sampled iteratively at the same time. After multiple iterations, the average of the state samples from the iteration cycles after steady state is reached is taken as the final state estimate. The robust adaptive smoothing method provided by the invention still obtains good state estimation results when the model noise is thick-tailed and the initial noise parameters are set with large errors.
Description
Technical Field
The invention belongs to the technical field of state estimation, and particularly relates to a robust self-adaptive smoothing method.
Background
The Rauch-Tung-Striebel (RTS) smoother, also known as the Kalman smoother, is widely used in the field of linear state estimation. However, the Kalman smoother is optimal only when the process noise and observation noise of the state space model are Gaussian and the noise parameters are completely known. In practical estimator applications, model noise parameters are often difficult to obtain accurately, and sensors often produce observation outliers. For example, in a ranging-based underwater positioning system, the uncertainty parameters of underwater acoustic ranging are difficult to obtain accurately; affected by the harsh underwater communication environment, ranging outliers occur frequently, the performance of the Kalman smoother deteriorates severely, and the estimator may even diverge.
Some existing adaptive estimators can handle unknown noise parameters to some extent. The main adaptive estimation methods at present include the maximum likelihood method, the correlation method, the variance matching method, and the Bayesian method, but all of these methods assume Gaussian model noise. The appearance of outliers makes originally Gaussian noise exhibit thick-tailed characteristics, that is, the probability density of noise values far from the mean becomes large. The Student's t distribution models thick-tailed noise well and is therefore widely applied to state estimation problems with outliers. Current robust estimators designed on the Student's t distribution mainly adopt the Variational Bayes (VB) approximation technique. Related VB-approximation-based robust adaptive estimators can simultaneously handle the thick-tailedness of the noise and the unknown noise parameters. However, VB-approximation-based robust adaptive estimation methods usually need to perform a factorized approximation of the joint probability density and obtain only an approximate solution of the target probability density, so the estimation accuracy is usually difficult to guarantee. The Markov Chain Monte Carlo (MCMC) method is a stochastic approximation method widely used in Bayesian inference. MCMC can sample complex distributions by generating a Markov chain that is stationary with respect to the target distribution. When the number of samples is sufficiently large, the MCMC method can produce sufficiently accurate estimates. Applied to robust adaptive estimation, MCMC therefore has the potential to achieve better estimation accuracy than VB-approximation-based methods, and no related research has been found to date.
Because of its simplicity and ease of use, the Gibbs sampler is the most widely used method in the MCMC framework.
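As a toy illustration of the alternating-conditional-draw structure the Gibbs sampler relies on (unrelated to the smoother itself; the target distribution, the correlation 0.8, and the iteration counts are all illustrative choices), the sketch below samples a bivariate Gaussian by drawing each coordinate from its full conditional in turn and discards an initial burn-in period before using the samples:

```python
import numpy as np

# Toy Gibbs sampler for a bivariate standard Gaussian with correlation rho:
# each full conditional x1 | x2 and x2 | x1 is Gaussian with mean rho * other
# and variance 1 - rho^2, so we simply alternate the two conditional draws.
def gibbs_bivariate_normal(rho, n_iter=20000, burn_in=5000, seed=0):
    rng = np.random.default_rng(seed)
    x1, x2 = 0.0, 0.0
    samples = np.empty((n_iter, 2))
    cond_std = np.sqrt(1.0 - rho ** 2)          # std of each conditional
    for i in range(n_iter):
        x1 = rng.normal(rho * x2, cond_std)     # draw x1 | x2
        x2 = rng.normal(rho * x1, cond_std)     # draw x2 | x1
        samples[i] = (x1, x2)
    return samples[burn_in:]                    # discard the burn-in period

samples = gibbs_bivariate_normal(rho=0.8)
est_corr = np.corrcoef(samples.T)[0, 1]         # should approach 0.8
```

The chain reaches the target distribution without ever evaluating the joint density, which is what makes the scheme attractive when, as in the method below, only the full conditionals have tractable forms.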
Disclosure of Invention
The purpose of the invention is: aiming at the frequent occurrence of sensor observation outliers and unknown model noise parameters in practical linear state estimation applications, the model noise is modeled as Student's t distributions with unknown parameters, and a robust self-adaptive smoothing method based on a Gibbs sampler is designed that can handle the thick-tailedness of the model noise and the unknown noise parameters at the same time.
The technical scheme of the invention is as follows: the process noise and observation noise in the linear state space model are first modeled as Student's t distributions whose noise parameters are completely unknown. By introducing auxiliary parameters, the process noise and observation noise satisfying the Student's t distribution are decomposed into a combination of Gaussian and Gamma distributions. The unknown noise parameters are also treated as random variables, and their prior distributions are modeled as the corresponding conjugate priors. Under the framework of a Gibbs sampler, the unknown noise parameters, the auxiliary parameters, and the system state are sampled iteratively at the same time. Each iteration cycle proceeds as follows: 1) conditioned on the noise parameters and auxiliary parameters sampled in the previous iteration cycle, compute the state posterior distribution of the current cycle with a Kalman smoother, and sample the system state of the current cycle from that posterior; 2) conditioned on the system state, the observation variables, and the observation noise auxiliary parameters sampled in the current cycle, compute the posterior distribution of the observation noise mean and scale matrix, and sample them from that posterior; 3) conditioned on the system state and the process noise auxiliary parameters sampled in the current cycle, compute the posterior distribution of the process noise mean and scale matrix, and sample them from that posterior; 4) conditioned on the system state, the observation variables, and the observation noise parameters sampled in the current cycle, compute the posterior distribution of the observation noise auxiliary parameters, and sample them from that posterior; 5) conditioned on the system state and the process noise parameters sampled in the current cycle, compute the posterior distribution of the process noise auxiliary parameters, and sample them from that posterior; 6) conditioned on the observation noise auxiliary parameters sampled in the current cycle, compute the posterior distribution of the observation noise degrees of freedom, and sample them from that posterior; 7) conditioned on the process noise auxiliary parameters sampled in the current cycle, compute the posterior distribution of the process noise degrees of freedom, and sample them from that posterior. After multiple iterations, the average of the state samples from the iteration cycles after steady state is reached is taken as the final state estimate.
The invention comprises the following steps:
A. modeling the process noise and the observation noise of the linear state space model into Student's t distribution, wherein the parameters of the process noise and the observation noise are random variables, and the prior distribution of the parameters of the process noise and the observation noise is the corresponding conjugate prior distribution.
B. Introducing auxiliary parameters and, using a hierarchical Gaussian model, decomposing the process noise and observation noise satisfying the Student's t distribution into a combination of Gaussian and Gamma distributions.
C. And initially sampling process noise, observation noise parameters and auxiliary parameters of the linear state space model under the frame of a Gibbs sampler.
D. Setting the total number of iteration cycles N and the number of burn-in cycles N_b, which must satisfy N_b < N; then performing N iterations, where each iteration proceeds as follows:
D1. and calculating the state posterior distribution of the current iteration period based on a Kalman smoother under the condition of the noise parameter and the auxiliary parameter sampled in the previous iteration period, and sampling the system state of the current iteration period from the posterior distribution.
D2. And calculating posterior distribution of the observation noise mean value and the scale matrix under the condition of the system state, the observation variable and the observation noise auxiliary parameter sampled in the current iteration period, and sampling the observation noise mean value and the scale matrix in the current iteration period from the posterior distribution.
D3. And calculating posterior distribution of the process noise mean value and the scale matrix by taking the system state and the process noise auxiliary parameters sampled in the current iteration period as conditions, and sampling the process noise mean value and the scale matrix in the current iteration period from the posterior distribution.
D4. And calculating posterior distribution of the observation noise auxiliary parameters under the condition of the system state, the observation variable and the observation noise parameters sampled in the current iteration period, and sampling the observation noise auxiliary parameters in the current iteration period from the posterior distribution.
D5. And calculating posterior distribution of the process noise auxiliary parameters by taking the system state and the process noise parameters sampled in the current iteration period as conditions, and sampling the process noise auxiliary parameters in the current iteration period from the posterior distribution.
D6. And calculating posterior distribution of the observation noise freedom degree under the condition of the observation noise auxiliary parameters sampled in the current iteration period, and sampling the observation noise freedom degree of the current iteration period from the posterior distribution.
D7. And calculating posterior distribution of the process noise freedom degree by taking the process noise auxiliary parameters sampled in the current iteration period as conditions, and sampling the process noise freedom degree of the current iteration period from the posterior distribution.
E. After N iterations, taking the average of the state samples from the N − N_b iteration cycles after steady state is reached as the final state estimate.
On the basis of the above scheme, specifically, the method adopted in step a is as follows:
for a linear state space model:
x_k = F_{k-1} x_{k-1} + w_{k-1}
z_k = H_k x_k + v_k
wherein: x_k ∈ R^n is the system state vector, z_k ∈ R^m is the system observation vector, F_{k-1} is the state transition matrix, H_k is the system observation matrix, w_k is the process noise, and v_k is the observation noise; w_k and v_k are both modeled as Student's t distributions, as follows:
p(w_k) = St(w_k; ξ, Q, ω)
p(v_k) = St(v_k; μ, R, v)
wherein: st (x; mu, sigma, omega) represents a random vector x which satisfies Student's t distribution and takes mu as a mean vector, sigma as a scale matrix and omega as a degree of freedom; xi, Q, omega, mu, R and v are random variables, and the prior distribution of the random variables is modeled as corresponding conjugate prior; wherein, the prior distribution of mu and R is modeled as Gaussian-Inverse Wishart (GIW) distribution:
wherein: GIW (X, X; a, B, c, D) represents a Gaussian-inverse Wishart distribution with a, B, c, D as parameters; IW (X; a, B) represents that the random matrix X meets inverse Wisharp distribution with a as the degree of freedom and B as the inverse scale matrix; n (x; a, A) represents a Gaussian distribution in which the random vector x satisfies a as a mean and A as a variance; likewise, the prior distribution of ξ and Q is also modeled as a GIW distribution:
the degree of freedom parameters v and ω are both modeled as a Gamma distribution, as follows:
p(v) = Gamma(v; e_0, f_0)
p(ω) = Gamma(ω; g_0, h_0)
wherein: gamma (x; a, b) represents a random variable x satisfying the Gamma distribution with a as a shape parameter and b as a rate parameter; setting nominal values of mean vectors of process noise and observation noise to be xi respectively0And mu0(ii) a Order:
E(ξ) = ξ_0
E(μ) = μ_0
wherein: e (-) is the expectation of a random variable; to obtain:
α0and beta0Is a modulation parameter; setting nominal values of process noise and observation noise scale matrix to be Q respectively0And R0(ii) a Order:
E(Q) = Q_0
E(R) = R_0
to obtain:
t_0 = ρ_t + 2n + 2
T_0 = ρ_t Q_0
u_0 = ρ_u + 2m + 2
U_0 = ρ_u R_0
wherein: ρ_u and ρ_t are modulation parameters;
Setting the nominal process noise and observation noise degrees of freedom to ω_0 and v_0, respectively, let:
E(ω) = ω_0
E(v) = v_0
to obtain:
on the basis of the above scheme, specifically, the method adopted in step B is:
introducing an auxiliary parameter gammakAnd lambdakThe process noise and the observation noise are decomposed into:
on the basis of the above scheme, specifically, the method adopted in step C is:
According to the prior distribution parameters t_0, T_0, u_0, U_0 of the process noise and observation noise scale matrices obtained in step A, initial samples of the two scale matrices are drawn directly from IW(Q; t_0, T_0) and IW(R; u_0, U_0); namely:
Q^(1) ~ IW(Q; t_0, T_0)
R^(1) ~ IW(R; u_0, U_0)
wherein: A^(j) denotes the sample of the random variable A at the j-th iteration;
According to the prior distribution parameters α_0, β_0 of the process noise and observation noise mean vectors obtained in step A, together with the samples Q^(1) and R^(1) drawn from IW(Q; t_0, T_0) and IW(R; u_0, U_0), initial samples of the two mean vectors are drawn directly from the corresponding Gaussian priors; namely:
According to the prior distribution parameters e_0, f_0, g_0, h_0 of the process noise and observation noise degrees of freedom obtained in step A, initial samples of the two degrees of freedom are drawn directly from Gamma(ω; g_0, h_0) and Gamma(v; e_0, f_0); namely:
ω^(1) ~ Gamma(ω; g_0, h_0)
v^(1) ~ Gamma(v; e_0, f_0)
According to the prior distributions of the process noise and observation noise auxiliary parameters obtained in step B, initial samples of the two auxiliary parameters are drawn directly from the corresponding Gamma priors; namely:
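The initial draws of step C can be sketched as follows. All nominal values and hyperparameters here (Q_0, R_0, ρ_t, ρ_u, e_0, f_0, g_0, h_0) are illustrative assumptions; note also that scipy's inverse Wishart mean is scale/(df − p − 1), a different convention from the one implied by t_0 = ρ_t + 2n + 2 above, though the draws are valid inverse Wishart samples either way:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(2)
n, m = 2, 1                      # state and observation dimensions (assumed)
Q0 = 0.1 * np.eye(n)             # nominal process noise scale matrix
R0 = 1.0 * np.eye(m)             # nominal observation noise scale matrix
rho_t, rho_u = 5.0, 5.0          # modulation parameters

# Hyperparameters as defined in step A.
t0, T0 = rho_t + 2 * n + 2, rho_t * Q0
u0, U0 = rho_u + 2 * m + 2, rho_u * R0

# Initial scale-matrix samples Q^(1) ~ IW(t0, T0) and R^(1) ~ IW(u0, U0).
Q1 = invwishart.rvs(df=t0, scale=T0, random_state=rng)
R1 = np.atleast_2d(invwishart.rvs(df=u0, scale=U0, random_state=rng))

# Initial degree-of-freedom samples from the Gamma priors.
e0, f0, g0, h0 = 5.0, 1.0, 5.0, 1.0   # assumed hyperparameters
v1 = rng.gamma(e0, 1.0 / f0)          # numpy takes scale = 1/rate
omega1 = rng.gamma(g0, 1.0 / h0)
```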
on the basis of the above scheme, specifically, the method adopted in the step D1 is as follows:
with the process noise mean vector sample ξ^(i), scale matrix sample Q^(i), and auxiliary parameter samples γ_k^(i), together with the observation noise mean vector sample μ^(i), scale matrix sample R^(i), and auxiliary parameter samples λ_k^(i) from the previous iteration cycle known, the state posterior distribution parameters are computed with a Kalman smoother;
1) initializing system state and variance:
wherein: the subscript i|j denotes the estimate of the variable at time i conditioned on the system observations up to time j;
2) forward recursion:
Let T be the total number of time steps; for k = 1, 2, 3, ..., T, perform the following steps:
a) Prediction
Calculating the prior state and prior variance at time k from the posterior state and posterior variance at time k−1, as follows:
x_{k|k-1} = F_{k-1} x_{k-1|k-1} + ξ
P_{k|k-1} = F_{k-1} P_{k-1|k-1} F_{k-1}^T + Q/γ_k
b) updating
Calculating the posterior state and posterior variance at time k as follows:
K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + R/λ_k)^{-1}
x_{k|k} = x_{k|k-1} + K_k (z_k − H_k x_{k|k-1} − μ)
P_{k|k} = P_{k|k-1} − K_k H_k P_{k|k-1}
wherein: K_k is the Kalman gain;
3) recursion backwards
For k = T−1, T−2, ..., 0, perform the following steps:
G_k = P_{k|k} F_k^T P_{k+1|k}^{-1}
x_{k|T} = x_{k|k} + G_k (x_{k+1|T} − x_{k+1|k})
P_{k|T} = P_{k|k} + G_k (P_{k+1|T} − P_{k+1|k}) G_k^T
wherein: G_k is the smoothing gain;
According to the above Kalman smoother, the state posterior parameters conditioned on ξ^(i), Q^(i), γ_{1:T}^(i), μ^(i), R^(i), and λ_{1:T}^(i) are obtained; then, for k = 0, 1, 2, 3, ..., T, sampling is performed in order;
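The conditional smoother of step D1 can be sketched as below. With the sampled noise parameters held fixed, the model is conditionally Gaussian with per-step covariances Q/γ_k and R/λ_k, so a standard forward Kalman pass followed by a backward (RTS) pass applies. For brevity the noise means ξ and μ are taken as zero, the final draw of the state samples from the posterior is omitted, and all matrices and data are illustrative:

```python
import numpy as np

# Forward Kalman filter + backward RTS pass with per-step scaled noise
# covariances Q/gamma_k and R/lambda_k (zero noise means assumed).
def rts_smoother(z, F, H, Q, R, gam, lam, x0, P0):
    T, n = len(z), x0.shape[0]
    xf = np.zeros((T + 1, n)); Pf = np.zeros((T + 1, n, n))   # filtered
    xp = np.zeros((T + 1, n)); Pp = np.zeros((T + 1, n, n))   # predicted
    xf[0], Pf[0] = x0, P0
    for k in range(1, T + 1):                       # forward recursion
        xp[k] = F @ xf[k - 1]
        Pp[k] = F @ Pf[k - 1] @ F.T + Q / gam[k - 1]
        S = H @ Pp[k] @ H.T + R / lam[k - 1]
        K = Pp[k] @ H.T @ np.linalg.inv(S)          # Kalman gain (note the inverse)
        xf[k] = xp[k] + K @ (z[k - 1] - H @ xp[k])
        Pf[k] = Pp[k] - K @ H @ Pp[k]
    xs, Ps = xf.copy(), Pf.copy()                   # smoothed
    for k in range(T - 1, -1, -1):                  # backward recursion
        G = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])  # smoothing gain
        xs[k] = xf[k] + G @ (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + G @ (Ps[k + 1] - Pp[k + 1]) @ G.T
    return xs, Ps

# Illustrative run: a constant scalar state observed four times.
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])
z = np.ones((4, 1))                        # four observations of a constant signal
gam = np.ones(4); lam = np.ones(4)         # auxiliary parameters fixed at 1
xs, Ps = rts_smoother(z, F, H, Q, R, gam, lam, np.zeros(1), np.array([[10.0]]))
```

In the Gibbs iteration, the returned means and variances parameterize the Gaussian posterior from which the state trajectory sample of the current cycle would then be drawn.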
the method adopted in the step D2 is as follows:
according to the Bayes rule:
p(μ, R | z_{1:T}, x_{0:T}, λ_{1:T}) = c p(μ, R) p(z_{1:T} | μ, R, x_{0:T}, λ_{1:T})
its logarithmic form is:
log p(μ, R | z_{1:T}, x_{0:T}, λ_{1:T}) = log p(μ, R) + log p(z_{1:T} | μ, R, x_{0:T}, λ_{1:T}) + log c
As shown in step A, the prior distribution of the observation noise mean and scale matrix is modeled as a Gaussian-inverse-Wishart distribution, whose logarithmic form is:
the log form of the likelihood distribution can be written according to the observation model:
and calculating to obtain:
wherein:
The posterior distribution of the observation noise mean vector is a Gaussian distribution with the computed posterior mean and variance, and the posterior probability density of the scale matrix is an inverse Wishart distribution with the computed degrees of freedom and inverse scale matrix. In the actual sampling process, R^(i+1) is sampled first from the inverse Wishart posterior, and then, conditioned on it, μ^(i+1) is sampled from the Gaussian posterior, yielding the observation noise mean vector and scale matrix samples of the current iteration cycle;
the method adopted in the step D3 is as follows:
according to the Bayes rule:
p(ξ, Q | x_{0:T}) = c p(ξ, Q) p(x_{0:T} | ξ, Q)
its logarithmic form is:
log p(ξ, Q | x_{0:T}) = log p(ξ, Q) + log p(x_{0:T} | ξ, Q) + log c
As shown in step A, the prior distribution of the process noise mean and scale matrix is modeled as a Gaussian-inverse-Wishart distribution, whose logarithmic form is:
the log form of the likelihood distribution is written according to a kinematic model:
and calculating to obtain:
wherein:
The posterior distribution of the process noise mean vector is a Gaussian distribution with the computed posterior mean and variance, and the posterior probability density of the scale matrix is an inverse Wishart distribution with the computed degrees of freedom and inverse scale matrix. In the actual sampling process, Q^(i+1) is sampled first from the inverse Wishart posterior, and then, conditioned on it, ξ^(i+1) is sampled from the Gaussian posterior, yielding the process noise mean vector and scale matrix samples of the current iteration cycle;
the method adopted in the step D4 is as follows:
according to the Bayes rule:
p(λ_k | x_k, z_k, μ, R, v) = c p(λ_k | v) p(z_k | x_k, λ_k, μ, R)
its logarithmic form is:
log p(λ_k | x_k, z_k, μ, R, v) = log p(λ_k | v) + log p(z_k | x_k, λ_k, μ, R) + log c
defined by the auxiliary parameters:
according to the observation model, the following results are obtained:
and further:
wherein:
The posterior distribution of the observation noise auxiliary parameter is a Gamma distribution with the computed shape and rate parameters. In the actual sampling process, λ_k^(i+1) is sampled for k = 1, 2, ..., T, yielding the observation noise auxiliary parameter samples of the current iteration cycle;
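The Gamma posterior of step D4 follows the standard conjugate update for the Student's t auxiliary variable; since the display equations are missing from the text, the shape (v + m)/2 and rate (v + d_k)/2 below, with d_k the squared Mahalanobis residual, are a reconstruction under that standard result, and all numbers are illustrative:

```python
import numpy as np

# Conditional draw of the observation-noise auxiliary parameter lambda_k.
# With v_k = z_k - H_k x_k ~ N(mu, R/lambda_k) and prior
# lambda_k ~ Gamma(v/2, rate v/2), the conjugate posterior is
#   lambda_k | . ~ Gamma((v + m)/2, rate (v + d_k)/2),
#   d_k = (z_k - H_k x_k - mu)^T R^{-1} (z_k - H_k x_k - mu).
def sample_lambda_k(z_k, x_k, H, mu, R, v, rng):
    m = len(z_k)
    r = z_k - H @ x_k - mu                   # observation residual
    d = float(r @ np.linalg.solve(R, r))     # squared Mahalanobis distance
    return rng.gamma((v + m) / 2.0, 2.0 / (v + d))  # numpy scale = 1/rate

rng = np.random.default_rng(3)
H = np.eye(1); R = np.eye(1); mu = np.zeros(1); v = 5.0
lam_inlier = np.mean([sample_lambda_k(np.array([0.1]), np.zeros(1), H, mu, R, v, rng)
                      for _ in range(2000)])
lam_outlier = np.mean([sample_lambda_k(np.array([10.0]), np.zeros(1), H, mu, R, v, rng)
                       for _ in range(2000)])
```

A large residual drives lambda_k toward zero, inflating the effective covariance R/lambda_k, which is the mechanism by which the smoother automatically down-weights observation outliers.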
the method adopted in the step D5 is as follows:
according to the Bayes rule:
p(γ_{k-1} | x_k, x_{k-1}, ξ, Q, ω) = c p(γ_{k-1} | ω) p(x_k | x_{k-1}, γ_{k-1}, ξ, Q)
its logarithmic form is:
log p(γ_{k-1} | x_k, x_{k-1}, ξ, Q, ω) = log p(γ_{k-1} | ω) + log p(x_k | x_{k-1}, γ_{k-1}, ξ, Q) + log c
defined by the auxiliary parameters:
According to the kinematic model, the following results are obtained:
and further:
wherein:
The posterior distribution of the process noise auxiliary parameter is a Gamma distribution with the computed shape and rate parameters. In the actual sampling process, γ_{k-1}^(i+1) is sampled for k = 1, 2, ..., T, yielding the process noise auxiliary parameter samples of the current iteration cycle;
the method adopted in the step D6 is as follows:
according to the Bayes rule:
its logarithmic form is:
obtaining the following according to the prior distribution of the degree of freedom parameters:
log p(v) = (e_0 − 1) log v − f_0 v + c_1
according to the auxiliary parameter definition and the Stirling approximation:
and further:
wherein:
The posterior distribution of the observation noise degrees of freedom is a Gamma distribution with the computed shape and rate parameters. In the actual sampling process, v^(i+1) is sampled, yielding the observation noise degrees-of-freedom sample of the current iteration cycle;
the method adopted in the step D7 is as follows:
according to the Bayes rule:
its logarithmic form is:
obtaining the following according to the prior distribution of the degree of freedom parameters:
log p(ω) = (g_0 − 1) log ω − h_0 ω + c_1
according to the auxiliary parameter definition and the Stirling approximation:
and further:
wherein:
The posterior distribution of the process noise degrees of freedom is a Gamma distribution with the computed shape and rate parameters. In the actual sampling process, ω^(i+1) is sampled, yielding the process noise degrees-of-freedom sample of the current iteration cycle.
On the basis of the above scheme, specifically, the method adopted in step E is:
After N iterations of sampling, a set of state samples is obtained; the state samples of the N − N_b iteration cycles after steady state is reached are selected and their average is taken as the final state estimate, namely:
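Step E reduces to a mean over the post-burn-in iterations. A minimal sketch, with the Gibbs output replaced by synthetic samples of assumed shape (N, T+1, n):

```python
import numpy as np

rng = np.random.default_rng(4)
N, N_b, T, n = 100, 20, 10, 2                 # iterations, burn-in, horizon, state dim
samples = 1.0 + 0.1 * rng.standard_normal((N, T + 1, n))  # stand-in Gibbs state samples

x_hat = samples[N_b:].mean(axis=0)            # average the last N - N_b iteration cycles
```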
has the advantages that: the invention processes the observation outlier of the sensor by modeling the model noise of the linear state space model as Student's t distribution, and processes the problem that the model noise parameter is unknown by taking the noise parameter as a random variable and randomly sampling the noise parameter and the system state simultaneously under the frame of a Gibbs sampler. The method can still obtain a better state estimation result when the sensor observation field value exists and the initial noise parameter setting error is larger.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 compares the root mean square error of the position estimates of four methods: the proposed GIB-RSTS and the KS, GIB-KS, and VB-KS baselines;
FIG. 3 compares the root mean square error of the velocity estimates of the same four methods.
Detailed Description
Embodiment 1, referring to fig. 1, a robust adaptive smoothing method based on a Gibbs sampler includes the following steps:
A. modeling the process noise and the observation noise of the linear state space model into Student's t distribution, wherein the parameters of the process noise and the observation noise are random variables, and the prior distribution of the parameters of the process noise and the observation noise is the corresponding conjugate prior distribution.
For a linear state space model:
x_k = F_{k-1} x_{k-1} + w_{k-1}
z_k = H_k x_k + v_k
wherein: x_k ∈ R^n is the system state vector, z_k ∈ R^m is the system observation vector, F_{k-1} is the state transition matrix, H_k is the system observation matrix, w_k is the process noise, and v_k is the observation noise; w_k and v_k are both modeled as Student's t distributions, as follows:
p(w_k) = St(w_k; ξ, Q, ω)
p(v_k) = St(v_k; μ, R, v)
wherein: st (x; mu, sigma, omega) represents a random vector x which satisfies Student's t distribution and takes mu as a mean vector, sigma as a scale matrix and omega as a degree of freedom; xi, Q, omega, mu, R and v are random variables, and the prior distribution of the random variables is modeled as corresponding conjugate prior; wherein, the prior distribution of mu and R is modeled as Gaussian-Inverse Wishart (GIW) distribution:
wherein: GIW (X, X; a, B, c, D) represents a Gaussian-inverse Wishart distribution with a, B, c, D as parameters; IW (X; a, B) represents that the random matrix X meets inverse Wisharp distribution with a as the degree of freedom and B as the inverse scale matrix; n (x; a, A) represents a Gaussian distribution in which the random vector x satisfies a as a mean and A as a variance; likewise, the prior distribution of ξ and Q is also modeled as a GIW distribution:
the degree of freedom parameters v and ω are both modeled as a Gamma distribution, as follows:
p(v) = Gamma(v; e_0, f_0)
p(ω) = Gamma(ω; g_0, h_0)
wherein: gamma (x; a, b) represents a random variable x satisfying the Gamma distribution with a as a shape parameter and b as a rate parameter; setting nominal values of mean vectors of process noise and observation noise to be xi respectively0And mu0(ii) a Order:
E(ξ) = ξ_0
E(μ) = μ_0
wherein: e (-) is the expectation of a random variable; to obtain:
α0and beta0Is a modulation parameter; setting nominal values of process noise and observation noise scale matrix to be Q respectively0And R0(ii) a Order:
E(Q) = Q_0
E(R) = R_0
to obtain:
t_0 = ρ_t + 2n + 2
T_0 = ρ_t Q_0
u_0 = ρ_u + 2m + 2
U_0 = ρ_u R_0
wherein: ρ_u and ρ_t are modulation parameters;
Setting the nominal process noise and observation noise degrees of freedom to ω_0 and v_0, respectively, let:
E(ω) = ω_0
E(v) = v_0
to obtain:
B. Introducing auxiliary parameters and, using a hierarchical Gaussian model, decomposing the process noise and observation noise satisfying the Student's t distribution into a combination of Gaussian and Gamma distributions.
Introducing auxiliary parameters γ_k and λ_k, the process noise and observation noise are decomposed (in the standard Student's t scale-mixture form) as:
p(w_k | γ_k) = N(w_k; ξ, Q/γ_k),  p(γ_k | ω) = Gamma(γ_k; ω/2, ω/2)
p(v_k | λ_k) = N(v_k; μ, R/λ_k),  p(λ_k | v) = Gamma(λ_k; v/2, v/2)
C. and initially sampling process noise, observation noise parameters and auxiliary parameters of the linear state space model under the frame of a Gibbs sampler.
According to the prior distribution parameters t_0, T_0, u_0, U_0 of the process noise and observation noise scale matrices obtained in step A, initial samples of the two scale matrices are drawn directly from IW(Q; t_0, T_0) and IW(R; u_0, U_0); namely:
Q^(1) ~ IW(Q; t_0, T_0)
R^(1) ~ IW(R; u_0, U_0)
wherein: A^(j) denotes the sample of the random variable A at the j-th iteration;
According to the prior distribution parameters α_0, β_0 of the process noise and observation noise mean vectors obtained in step A, together with the samples Q^(1) and R^(1) drawn from IW(Q; t_0, T_0) and IW(R; u_0, U_0), initial samples of the two mean vectors are drawn directly from the corresponding Gaussian priors; namely:
According to the prior distribution parameters e_0, f_0, g_0, h_0 of the process noise and observation noise degrees of freedom obtained in step A, initial samples of the two degrees of freedom are drawn directly from Gamma(ω; g_0, h_0) and Gamma(v; e_0, f_0); namely:
ω^(1) ~ Gamma(ω; g_0, h_0)
v^(1) ~ Gamma(v; e_0, f_0)
According to the prior distributions of the process noise and observation noise auxiliary parameters obtained in step B, initial samples of the two auxiliary parameters are drawn directly from the corresponding Gamma priors; namely:
D. Setting the total number of iteration cycles N and the number of burn-in cycles N_b, which must satisfy N_b < N; then performing N iterations, where each iteration proceeds as follows:
D1. and calculating the state posterior distribution of the current iteration period based on a Kalman smoother under the condition of the noise parameter and the auxiliary parameter sampled in the previous iteration period, and sampling the system state of the current iteration period from the posterior distribution.
With the process noise mean vector sample ξ^(i), scale matrix sample Q^(i), and auxiliary parameter samples γ_k^(i), together with the observation noise mean vector sample μ^(i), scale matrix sample R^(i), and auxiliary parameter samples λ_k^(i) from the previous iteration cycle known, the state posterior distribution parameters are computed with a Kalman smoother;
1) initializing system state and variance:
wherein: the subscript i|j denotes the estimate of the variable at time i conditioned on the system observations up to time j;
2) forward recursion:
Let T be the total number of time steps; for k = 1, 2, 3, ..., T, perform the following steps:
a) Prediction
Calculating the prior state and prior variance at time k from the posterior state and posterior variance at time k−1, as follows:
x_{k|k-1} = F_{k-1} x_{k-1|k-1} + ξ
P_{k|k-1} = F_{k-1} P_{k-1|k-1} F_{k-1}^T + Q/γ_k
b) updating
Calculating the posterior state and posterior variance at time k as follows:
K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + R/λ_k)^{-1}
x_{k|k} = x_{k|k-1} + K_k (z_k − H_k x_{k|k-1} − μ)
P_{k|k} = P_{k|k-1} − K_k H_k P_{k|k-1}
wherein: K_k is the Kalman gain;
3) recursion backwards
For k = T−1, T−2, ..., 0, perform the following steps:
G_k = P_{k|k} F_k^T P_{k+1|k}^{-1}
x_{k|T} = x_{k|k} + G_k (x_{k+1|T} − x_{k+1|k})
P_{k|T} = P_{k|k} + G_k (P_{k+1|T} − P_{k+1|k}) G_k^T
wherein: G_k is the smoothing gain;
According to the above Kalman smoother, the state posterior parameters conditioned on ξ^(i), Q^(i), γ_{1:T}^(i), μ^(i), R^(i), and λ_{1:T}^(i) are obtained; then, for k = 0, 1, 2, 3, ..., T, sampling is performed in order.
D2. And calculating posterior distribution of the observation noise mean value and the scale matrix under the condition of the system state, the observation variable and the observation noise auxiliary parameter sampled in the current iteration period, and sampling the observation noise mean value and the scale matrix in the current iteration period from the posterior distribution.
According to the Bayes rule:
p(μ, R | z_{1:T}, x_{0:T}, λ_{1:T}) = c p(μ, R) p(z_{1:T} | μ, R, x_{0:T}, λ_{1:T})
its logarithmic form is:
log p(μ, R | z_{1:T}, x_{0:T}, λ_{1:T}) = log p(μ, R) + log p(z_{1:T} | μ, R, x_{0:T}, λ_{1:T}) + log c
As described in step A, the prior distribution of the observation noise mean and scale matrix is modeled as a Gaussian-inverse Wishart distribution, whose logarithmic form is:
the log form of the likelihood distribution can be written according to the observation model:
and calculating to obtain:
wherein:
The posterior distribution of the observation noise mean vector is a Gaussian distribution, and the posterior probability density of the scale matrix is an inverse Wishart distribution, with the posterior mean, variance, degrees of freedom and inverse scale matrix given by the expressions above. In the actual sampling process, the scale matrix sample R^(i+1) is drawn first from the inverse Wishart posterior, and the mean vector sample μ^(i+1) is then drawn from the Gaussian posterior, yielding the observation noise mean vector and scale matrix samples of the current iteration cycle.
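A sketch of this two-stage draw follows. The patent's exact posterior hyperparameters appear only as images, so the code uses the standard Gaussian-inverse-Wishart conjugate update; conditioning the R draw on the previous mean sample `mu_prev` is a simplification, and all names (`u0`, `U0`, `alpha0`, `mu0`) follow step A but are otherwise assumptions.

```python
import numpy as np

def sample_inverse_wishart(df, inv_scale, rng):
    """Draw from IW(df, inv_scale) for integer df >= dim: build a
    Wishart sample as a sum of df outer products of N(0, inv_scale^{-1})
    vectors, then invert it."""
    dim = inv_scale.shape[0]
    prec = np.linalg.inv(inv_scale)
    a = rng.multivariate_normal(np.zeros(dim), prec, size=int(df))
    return np.linalg.inv(a.T @ a)

def sample_obs_noise(e, lam, u0, U0, alpha0, mu0, mu_prev, rng):
    """One Gibbs draw of the observation noise scale matrix R and mean
    vector mu (step D2).  e[k] = z_k - H_k x_k are observation residuals,
    lam[k] the auxiliary parameters; this is a hedged sketch of the
    conjugate update, not the patent's exact formulas."""
    T, m = e.shape
    # inverse Wishart posterior: df grows by T, inverse scale accumulates
    # the lambda-weighted centered residual outer products
    S = U0 + sum(lam[k] * np.outer(e[k] - mu_prev, e[k] - mu_prev)
                 for k in range(T))
    R = sample_inverse_wishart(u0 + T, S, rng)
    # Gaussian posterior for mu: precision-weighted combination of the
    # prior mean and the lambda-weighted residual average
    prec = alpha0 + lam.sum()
    mean = (alpha0 * mu0 + (lam[:, None] * e).sum(axis=0)) / prec
    mu = rng.multivariate_normal(mean, R / prec)
    return R, mu

# illustration: residuals with true variance 4 per axis
rng = np.random.default_rng(1)
e = rng.normal(0.0, 2.0, size=(200, 2))
R, mu = sample_obs_noise(e, np.ones(200), 6, np.eye(2), 1.0,
                         np.zeros(2), np.zeros(2), rng)
```

The same pattern, with the kinematic residuals x_k − F_{k−1}x_{k−1} in place of the observation residuals, applies to the process noise draw of step D3.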
D3. Calculating the posterior distribution of the process noise mean and scale matrix conditioned on the system state and the process noise auxiliary parameters sampled in the current iteration cycle, and sampling the process noise mean and scale matrix of the current iteration cycle from this posterior distribution.
According to the Bayes rule:
p(ξ, Q | x_{0:T}) = c·p(ξ, Q)·p(x_{0:T} | ξ, Q)
its logarithmic form is:
log p(ξ, Q | x_{0:T}) = log p(ξ, Q) + log p(x_{0:T} | ξ, Q) + log c
As described in step A, the prior distribution of the process noise mean and scale matrix is modeled as a Gaussian-inverse Wishart distribution, whose logarithmic form is:
the log form of the likelihood distribution is written according to a kinematic model:
and calculating to obtain:
wherein:
The posterior distribution of the process noise mean is a Gaussian distribution, and the posterior probability density of the scale matrix is an inverse Wishart distribution, with the degrees of freedom and inverse scale matrix given by the expressions above. In the actual sampling process, the scale matrix sample Q^(i+1) is drawn first from the inverse Wishart posterior, and the mean vector sample ξ^(i+1) is then drawn from the Gaussian posterior, yielding the process noise mean vector and scale matrix samples of the current iteration cycle.
D4. Calculating the posterior distribution of the observation noise auxiliary parameters conditioned on the system state, the observation variables and the observation noise parameters sampled in the current iteration cycle, and sampling the observation noise auxiliary parameters of the current iteration cycle from this posterior distribution.
According to the Bayes rule:
p(λ_k | x_k, z_k, μ, R, v) = c·p(λ_k | v)·p(z_k | x_k, λ_k, μ, R)
its logarithmic form is:
log p(λ_k | x_k, z_k, μ, R, v) = log p(λ_k | v) + log p(z_k | x_k, λ_k, μ, R) + log c
defined by the auxiliary parameters:
according to the observation model, the following results are obtained:
and further:
wherein:
The posterior distribution of the observation noise auxiliary parameter is a Gamma distribution, with the shape and rate parameters given by the expressions above. In the actual sampling process, the samples λ_k^(i+1), k = 1, 2, ..., T, are drawn from this Gamma posterior, yielding the observation noise auxiliary parameter samples of the current iteration cycle.
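For a Student's t distribution built as a Gauss-Gamma mixture, the conditional of each auxiliary parameter is the standard Gamma form with shape (v + m)/2 and rate (v + e_k^T R^{-1} e_k)/2, where e_k is the centered observation residual; I assume this matches the patent's expressions, which are given only as images. A sketch:

```python
import numpy as np

def sample_lambda(e, R, v, rng):
    """One Gibbs draw of the observation noise auxiliary parameters
    lambda_k (step D4).  e has shape (T, m) and holds the centered
    residuals z_k - H_k x_k - mu; names are illustrative."""
    T, m = e.shape
    Rinv = np.linalg.inv(R)
    shape = (v + m) / 2.0
    rate = np.array([(v + e[k] @ Rinv @ e[k]) / 2.0 for k in range(T)])
    # numpy's Gamma is parameterized by shape and *scale* = 1/rate
    return rng.gamma(shape, 1.0 / rate)

# illustration: zero residuals, identity R, v = 5
rng = np.random.default_rng(2)
lam = sample_lambda(np.zeros((100, 2)), np.eye(2), 5.0, rng)
```

The process noise auxiliary parameters γ_{k−1} of step D5 follow the same form with the kinematic residuals and Q^{-1} in place of the observation residuals and R^{-1}.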
D5. Calculating the posterior distribution of the process noise auxiliary parameters conditioned on the system state and the process noise parameters sampled in the current iteration cycle, and sampling the process noise auxiliary parameters of the current iteration cycle from this posterior distribution.
According to the Bayes rule:
p(γ_{k-1} | x_k, x_{k-1}, ξ, Q, ω) = c·p(γ_{k-1} | ω)·p(x_k | x_{k-1}, γ_{k-1}, ξ, Q)
its logarithmic form is:
log p(γ_{k-1} | x_k, x_{k-1}, ξ, Q, ω) = log p(γ_{k-1} | ω) + log p(x_k | x_{k-1}, γ_{k-1}, ξ, Q) + log c
defined by the auxiliary parameters:
According to the kinematic model, the following is obtained:
and further:
wherein:
The posterior distribution of the process noise auxiliary parameter is a Gamma distribution, with the shape and rate parameters given by the expressions above. In the actual sampling process, the samples γ_{k-1}^(i+1), k = 1, 2, ..., T, are drawn from this Gamma posterior, yielding the process noise auxiliary parameter samples of the current iteration cycle.
D6. Calculating the posterior distribution of the observation noise degrees of freedom conditioned on the observation noise auxiliary parameters sampled in the current iteration cycle, and sampling the observation noise degrees of freedom of the current iteration cycle from this posterior distribution.
According to the Bayes rule:
its logarithmic form is:
obtaining the following according to the prior distribution of the degree of freedom parameters:
log p(v) = (e_0 − 1) log v − f_0·v + c_1
according to the auxiliary parameter definition and the Stirling approximation:
and further:
wherein:
The posterior distribution of the observation noise degrees of freedom is approximated by a Gamma distribution, with the shape and rate parameters given by the expressions above. In the actual sampling process, the sample v^(i+1) is drawn from this Gamma posterior, yielding the observation noise degrees-of-freedom sample of the current iteration cycle.
D7. Calculating the posterior distribution of the process noise degrees of freedom conditioned on the process noise auxiliary parameters sampled in the current iteration cycle, and sampling the process noise degrees of freedom of the current iteration cycle from this posterior distribution.
According to the Bayes rule:
its logarithmic form is:
obtaining the following according to the prior distribution of the degree of freedom parameters:
log p(ω) = (g_0 − 1) log ω − h_0·ω + c_1
according to the auxiliary parameter definition and the Stirling approximation:
and further:
wherein:
The posterior distribution of the process noise degrees of freedom is approximated by a Gamma distribution, with the shape and rate parameters given by the expressions above. In the actual sampling process, the sample ω^(i+1) is drawn from this Gamma posterior, yielding the process noise degrees-of-freedom sample of the current iteration cycle.
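Steps D6 and D7 both rely on the same trick: with a Gamma(e_0, f_0) prior and auxiliary parameters λ_k ~ Gamma(v/2, v/2), applying Stirling's approximation to log Γ(v/2) turns the conditional of the degrees of freedom into an approximate Gamma with shape e_0 + T/2 and rate f_0 + ½·Σ(λ_k − log λ_k − 1). The patent's symbols are given only as images, so this standard form is an assumption; a sketch:

```python
import numpy as np

def sample_dof(lam, e0, f0, rng):
    """One Gibbs draw of a Student's t degrees-of-freedom parameter from
    the Stirling-approximated Gamma conditional (steps D6/D7).
    Hyperparameter names follow step A; the form is assumed, not quoted."""
    T = len(lam)
    shape = e0 + T / 2.0
    # lam - log(lam) - 1 >= 0, so the rate never falls below f0
    rate = f0 + 0.5 * np.sum(lam - np.log(lam) - 1.0)
    return rng.gamma(shape, 1.0 / rate)

# illustration: auxiliary parameters generated with true dof v = 5
rng = np.random.default_rng(3)
lam = rng.gamma(2.5, 1.0 / 2.5, size=400)
v_draw = sample_dof(lam, 5.0, 1.0, rng)
```

With enough auxiliary-parameter samples the draw concentrates near the degrees of freedom that generated them, which is what lets the smoother adapt the tail weight of the noise online.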
E. After N iterations, the mean of the state samples over the N − N_b iteration cycles after the steady state is reached is taken as the final state estimate.
After N iterative sampling cycles, a set of state variable samples is obtained; the mean of the state samples over the N − N_b iteration cycles after the steady state is reached is taken as the final state estimate, namely:
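Step E reduces to discarding the burn-in iterations and averaging the rest; a minimal sketch, with illustrative names:

```python
import numpy as np

def posterior_mean(samples, n_burn):
    """Step E: discard the first n_burn Gibbs iterations as burn-in and
    average the remaining N - n_burn state sample trajectories to obtain
    the final smoothed state estimate."""
    samples = np.asarray(samples)        # shape (N, T+1, n)
    assert n_burn < samples.shape[0]
    return samples[n_burn:].mean(axis=0)

# illustration: 10 "iterations" of a (T+1=5, n=2) trajectory
samples = np.stack([np.full((5, 2), float(i)) for i in range(10)])
est = posterior_mean(samples, n_burn=4)
```

In the simulation of Embodiment 3 this corresponds to N = 5000 iterations with N_b = 1000 burn-in cycles.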
Embodiment 2: the pseudo code implementing the present invention is as follows:
Embodiment 3: the robust adaptive smoothing method described in Embodiment 1 is verified with simulated data.
The simulation case is two-dimensional target tracking. The state vector is defined as x_k = [x_k, y_k, v_{x,k}, v_{y,k}]^T, where x_k, y_k, v_{x,k} and v_{y,k} are the positions and velocities in the x and y directions, respectively. The observed variables are the positions in the x and y directions. The system kinematic model and the observation model are, respectively:
wherein: Δ t is a discrete time interval, chosen to be 1 second in this simulation. The total simulation time T was chosen to be 200 seconds. To simulate the observed outliers of the sensor, we use:
to generate true process noise and observation noise, wherein:
The nominal process noise scale matrix and observation noise scale matrix are set to Q_n = 5·Q_t and R_n = R_t/5, respectively. The nominal process noise and observation noise mean vectors are both set to zero vectors. The simulation modulation parameters are set as: ρ_t = 1, ρ_u = 1, α_0 = 0.1, β_0 = 0.1, e_0 = 5, f_0 = 1, g_0 = 5, h_0 = 1; the total number of iteration cycles is N = 5000 and the number of burn-in cycles is N_b = 1000. For comparison, this embodiment also shows the estimation performance of the Kalman smoother with nominal noise parameters (KS), the adaptive Kalman smoother based on the Gibbs sampler with Gaussian-distributed noise (GIB-KS), and the robust adaptive Kalman smoother based on variational Bayesian approximation (VB-RSTS). The method of the present invention is abbreviated GIB-RSTS.
The results of 500 independent Monte Carlo simulations are used to verify the proposed method. The root mean square error (RMSE) of position and velocity and the average root mean square error (ARMSE) are used to evaluate the performance of the different estimators. The evaluation indices are calculated as follows:
where x_k^i and y_k^i are the true coordinates of the aircraft at time k in the i-th Monte Carlo simulation; x̂_k^i and ŷ_k^i are the position coordinates estimated at time k in the i-th Monte Carlo simulation; v_{x,k}^i and v_{y,k}^i are the true velocities of the aircraft at time k in the i-th Monte Carlo simulation; and v̂_{x,k}^i and v̂_{y,k}^i are the velocities estimated at time k in the i-th Monte Carlo simulation. M = 500 is the total number of simulations.
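The RMSE and ARMSE indices above can be computed as follows; array names and shapes are illustrative:

```python
import numpy as np

def rmse_armse(truth, est):
    """Per-time-step RMSE across M Monte Carlo runs and its time average
    (ARMSE), for a two-component quantity such as (x, y) position or
    (v_x, v_y) velocity.  truth and est have shape (M, T, 2)."""
    sq = ((truth - est) ** 2).sum(axis=2)   # squared error per run and time
    rmse = np.sqrt(sq.mean(axis=0))         # average over the M runs
    armse = rmse.mean()                     # then average over time
    return rmse, armse

# illustration with a known answer: unit error in both components
truth = np.zeros((500, 200, 2))
est = np.ones((500, 200, 2))
rmse, armse = rmse_armse(truth, est)
```

With a constant unit error in each component, every per-step RMSE equals √2, and so does the ARMSE.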
FIG. 2 compares the position-estimation RMSEs of the four methods, and FIG. 3 compares their velocity RMSEs. The ARMSEs of the four methods are listed in Table 1.
TABLE 1
From FIG. 2, FIG. 3 and Table 1, it can be seen that the proposed GIB-RSTS achieves better state estimation accuracy, for both position and velocity, than KS, GIB-KS and VB-RSTS. KS does not account for noise parameter setting errors or observation outliers, so its state estimation error is larger. The proposed method outperforms GIB-KS mainly because GIB-KS considers only the unknown noise parameters and ignores the heavy tails of the noise. It outperforms VB-RSTS mainly because VB-RSTS relies on a free-factorization approximation of the posterior distribution, which introduces a certain approximation error. In general, the proposed GIB-RSTS still obtains satisfactory state estimates under large outliers and large noise parameter setting errors, and therefore has good potential for practical application.
Although the invention has been described in detail above with reference to a general description and specific examples, it will be apparent to one skilled in the art that modifications or improvements may be made thereto based on the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.
Claims (6)
1. A robust adaptive smoothing method based on a Gibbs sampler is characterized by comprising the following steps:
A. modeling process noise and observation noise of a linear state space model into Student's t distribution, wherein parameters of the process noise and the observation noise are random variables, and the prior distribution of the parameters of the process noise and the observation noise is the corresponding conjugate prior distribution;
B. introducing auxiliary parameters, and decomposing the process noise and observation noise satisfying the Student's t distribution into a combination of Gaussian and Gamma distributions by means of a hierarchical Gaussian model;
C. initially sampling process noise, observation noise parameters and auxiliary parameters of a linear state space model under the frame of a Gibbs sampler;
D. setting a total number of iteration cycles N and a number of burn-in cycles N_b satisfying N_b < N, and performing N iterations, each iteration proceeding as follows:
D1. calculating the state posterior distribution of the current iteration cycle based on a Kalman smoother under the condition of the noise parameter and the auxiliary parameter sampled in the previous iteration cycle, and sampling the system state of the current iteration cycle from the posterior distribution;
D2. calculating posterior distribution of an observation noise mean value and a scale matrix under the condition of a system state, an observation variable and an observation noise auxiliary parameter sampled in the current iteration period, and sampling the observation noise mean value and the scale matrix in the current iteration period from the posterior distribution;
D3. calculating posterior distribution of a process noise mean value and a scale matrix by taking a system state and a process noise auxiliary parameter sampled in a current iteration period as conditions, and sampling the process noise mean value and the scale matrix of the current iteration period from the posterior distribution;
D4. calculating posterior distribution of the observation noise auxiliary parameters under the condition of the system state, the observation variable and the observation noise parameters sampled in the current iteration period, and sampling the observation noise auxiliary parameters in the current iteration period from the posterior distribution;
D5. calculating posterior distribution of the process noise auxiliary parameters by taking the system state and the process noise parameters sampled in the current iteration period as conditions, and sampling the process noise auxiliary parameters in the current iteration period from the posterior distribution;
D6. calculating posterior distribution of the observation noise freedom degree under the condition of observation noise auxiliary parameters sampled in the current iteration period, and sampling the observation noise freedom degree of the current iteration period from the posterior distribution;
D7. calculating posterior distribution of the process noise freedom degree under the condition of the process noise auxiliary parameters sampled in the current iteration period, and sampling the process noise freedom degree of the current iteration period from the posterior distribution;
E. after N iterations, taking the mean of the state samples over the N − N_b iteration cycles after the steady state is reached as the final state estimate.
2. The method of claim 1, wherein the step a comprises the following steps:
for a linear state space model:
xk=Fk-1xk-1+wk-1
zk=Hkxk+vk
where x_k ∈ R^n is the system state vector, z_k ∈ R^m is the system observation vector, F_{k-1} is the state transition matrix, H_k is the system observation matrix, w_{k-1} is the process noise and v_k is the observation noise; w_k and v_k are both modeled as Student's t distributions, as follows:
p(wk)=St(wk;ξ,Q,ω)
p(vk)=St(vk;μ,R,v)
wherein: st (x; mu, sigma, omega) represents a random vector x which satisfies Student's t distribution and takes mu as a mean vector, sigma as a scale matrix and omega as a degree of freedom; xi, Q, omega, mu, R and v are random variables, and the prior distribution of the random variables is modeled as corresponding conjugate prior; wherein, the prior distribution of mu and R is modeled as Gaussian-Inverse Wishart (GIW) distribution:
wherein: GIW (X, X; a, B, c, D) represents a Gaussian-inverse Wishart distribution with a, B, c, D as parameters; IW (X; a, B) represents that the random matrix X meets inverse Wisharp distribution with a as the degree of freedom and B as the inverse scale matrix; n (x; a, A) represents a Gaussian distribution in which the random vector x satisfies a as a mean and A as a variance; likewise, the prior distribution of ξ and Q is also modeled as a GIW distribution:
the degree of freedom parameters v and ω are both modeled as a Gamma distribution, as follows:
p(v)=Gamma(v;e0,f0)
p(ω)=Gamma(ω;g0,h0)
wherein: gamma (x; a, b) represents a random variable x satisfying the Gamma distribution with a as a shape parameter and b as a rate parameter; setting nominal values of mean vectors of process noise and observation noise to be xi respectively0And mu0(ii) a Order:
E(ξ)=ξ0
E(μ)=μ0
wherein: e (-) is the expectation of a random variable; to obtain:
α0and beta0Is a modulation parameter; setting nominal values of process noise and observation noise scale matrix to be Q respectively0And R0(ii) a Order:
E(Q)=Q0
E(R)=R0
to obtain:
t_0 = ρ_t + 2n + 2
T_0 = ρ_t·Q_0
u_0 = ρ_u + 2m + 2
U_0 = ρ_u·R_0
wherein: rhouAnd rhotIs a modulation parameter;
setting nominal values of process noise and observation noise freedom as omega0And v0(ii) a Order:
E(ω)=ω0
E(v)=v0
to obtain:
4. The Gibbs sampler-based robust adaptive smoothing method as claimed in claim 3, wherein the step C comprises:
according to the prior distribution parameters t_0, T_0, u_0, U_0 of the process noise scale matrix and the observation noise scale matrix obtained in step A, the initial samples of the process noise scale matrix and the observation noise scale matrix are drawn directly from IW(Q; t_0, T_0) and IW(R; u_0, U_0), namely:
Q^(1) ~ IW(Q; t_0, T_0)
R^(1) ~ IW(R; u_0, U_0)
where A^(j) denotes the sample of the random variable A in the j-th iteration;
according to the prior distribution parameters α_0, β_0 of the process noise mean vector and the observation noise mean vector obtained in step A, together with the samples Q^(1) and R^(1) drawn from IW(Q; t_0, T_0) and IW(R; u_0, U_0), the initial samples of the process noise mean vector and the observation noise mean vector are drawn directly from the corresponding Gaussian distributions, namely:
according to the prior distribution parameters e_0, f_0, g_0, h_0 of the process noise and observation noise degrees of freedom obtained in step A, the initial samples of the process noise and observation noise degrees of freedom are drawn directly from Gamma(ω; g_0, h_0) and Gamma(v; e_0, f_0), namely:
ω^(1) ~ Gamma(ω; g_0, h_0)
v^(1) ~ Gamma(v; e_0, f_0)
according to the process noise auxiliary parameter and observation noise auxiliary parameter prior distributions obtained in step B, the initial samples of the process noise auxiliary parameters and observation noise auxiliary parameters are drawn directly from the corresponding Gamma distributions, namely:
5. the Gibbs sampler-based robust adaptive smoothing method as claimed in claim 4, wherein the step D1 is implemented by:
given the process noise mean vector sample ξ^(i), scale matrix sample Q^(i) and auxiliary parameter samples γ_k^(i), together with the observation noise mean vector sample μ^(i), scale matrix sample R^(i) and auxiliary parameter samples λ_k^(i) from the previous iteration cycle, the state posterior distribution parameters are calculated with a Kalman smoother:
1) Initializing the system state and variance, where the subscript i|j denotes the estimate of a variable at time i conditioned on the system observations up to time j;
2) Forward recursion:
Let T be the total number of time steps. For k = 1, 2, 3, ..., T, perform the following steps:
a) Prediction
The prior state and prior variance at time k are calculated from the posterior state and posterior variance at time k−1 as:
x̂_{k|k-1} = F_{k-1} x̂_{k-1|k-1} + ξ
P_{k|k-1} = F_{k-1} P_{k-1|k-1} F_{k-1}^T + Q/γ_k
b) Update
The posterior state and posterior variance at time k are calculated as:
K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + R/λ_k)^{-1}
x̂_{k|k} = x̂_{k|k-1} + K_k (z_k − H_k x̂_{k|k-1} − μ)
P_{k|k} = P_{k|k-1} − K_k H_k P_{k|k-1}
where K_k is the Kalman gain;
3) Backward recursion
For k = T−1, T−2, ..., 0, perform the following steps:
G_k = P_{k|k} F_k^T P_{k+1|k}^{-1}
x̂_{k|T} = x̂_{k|k} + G_k (x̂_{k+1|T} − x̂_{k+1|k})
P_{k|T} = P_{k|k} + G_k (P_{k+1|T} − P_{k+1|k}) G_k^T
where G_k is the smoothing gain;
from the above Kalman smoother, the state posterior parameters x̂_{k|T} and P_{k|T}, conditioned on ξ^(i), Q^(i), γ_{1:T}^(i), μ^(i), R^(i) and λ_{1:T}^(i), are obtained; then, for k = 0, 1, 2, 3, ..., T, the state samples x_k^(i+1) ~ N(x_k; x̂_{k|T}, P_{k|T}) are drawn in order;
the method adopted in the step D2 is as follows:
according to the Bayes rule:
p(μ, R | z_{1:T}, x_{0:T}, λ_{1:T}) = c·p(μ, R)·p(z_{1:T} | μ, R, x_{0:T}, λ_{1:T})
its logarithmic form is:
log p(μ, R | z_{1:T}, x_{0:T}, λ_{1:T}) = log p(μ, R) + log p(z_{1:T} | μ, R, x_{0:T}, λ_{1:T}) + log c
as described in step A, the prior distribution of the observation noise mean and scale matrix is modeled as a Gaussian-inverse Wishart distribution, whose logarithmic form is:
the log form of the likelihood distribution can be written according to the observation model:
and calculating to obtain:
wherein:
the posterior distribution of the observation noise mean vector is a Gaussian distribution, and the posterior probability density of the scale matrix is an inverse Wishart distribution, with the posterior mean, variance, degrees of freedom and inverse scale matrix given by the expressions above; in the actual sampling process, the scale matrix sample R^(i+1) is drawn first from the inverse Wishart posterior, and the mean vector sample μ^(i+1) is then drawn from the Gaussian posterior, yielding the observation noise mean vector and scale matrix samples of the current iteration cycle;
the method adopted in the step D3 is as follows:
according to the Bayes rule:
p(ξ, Q | x_{0:T}) = c·p(ξ, Q)·p(x_{0:T} | ξ, Q)
its logarithmic form is:
log p(ξ, Q | x_{0:T}) = log p(ξ, Q) + log p(x_{0:T} | ξ, Q) + log c
as described in step A, the prior distribution of the process noise mean and scale matrix is modeled as a Gaussian-inverse Wishart distribution, whose logarithmic form is:
the log form of the likelihood distribution is written according to a kinematic model:
and calculating to obtain:
wherein:
the posterior distribution of the process noise mean is a Gaussian distribution, and the posterior probability density of the scale matrix is an inverse Wishart distribution, with the degrees of freedom and inverse scale matrix given by the expressions above; in the actual sampling process, the scale matrix sample Q^(i+1) is drawn first from the inverse Wishart posterior, and the mean vector sample ξ^(i+1) is then drawn from the Gaussian posterior, yielding the process noise mean vector and scale matrix samples of the current iteration cycle;
the method adopted in the step D4 is as follows:
according to the Bayes rule:
p(λ_k | x_k, z_k, μ, R, v) = c·p(λ_k | v)·p(z_k | x_k, λ_k, μ, R)
its logarithmic form is:
log p(λ_k | x_k, z_k, μ, R, v) = log p(λ_k | v) + log p(z_k | x_k, λ_k, μ, R) + log c
defined by the auxiliary parameters:
according to the observation model, the following results are obtained:
and further:
wherein:
the posterior distribution of the observation noise auxiliary parameter is a Gamma distribution, with the shape and rate parameters given by the expressions above; in the actual sampling process, the samples λ_k^(i+1), k = 1, 2, ..., T, are drawn from this Gamma posterior, yielding the observation noise auxiliary parameter samples of the current iteration cycle;
the method adopted in the step D5 is as follows:
according to the Bayes rule:
p(γ_{k-1} | x_k, x_{k-1}, ξ, Q, ω) = c·p(γ_{k-1} | ω)·p(x_k | x_{k-1}, γ_{k-1}, ξ, Q)
its logarithmic form is:
log p(γ_{k-1} | x_k, x_{k-1}, ξ, Q, ω) = log p(γ_{k-1} | ω) + log p(x_k | x_{k-1}, γ_{k-1}, ξ, Q) + log c
defined by the auxiliary parameters:
according to the kinematic model, the following is obtained:
and further:
wherein:
the posterior distribution of the process noise auxiliary parameter is a Gamma distribution, with the shape and rate parameters given by the expressions above; in the actual sampling process, the samples γ_{k-1}^(i+1), k = 1, 2, ..., T, are drawn from this Gamma posterior, yielding the process noise auxiliary parameter samples of the current iteration cycle;
the method adopted in the step D6 is as follows:
according to the Bayes rule:
its logarithmic form is:
obtaining the following according to the prior distribution of the degree of freedom parameters:
log p(v) = (e_0 − 1) log v − f_0·v + c_1
according to the auxiliary parameter definition and the Stirling approximation:
and further:
wherein:
the posterior distribution of the observation noise degrees of freedom is approximated by a Gamma distribution, with the shape and rate parameters given by the expressions above; in the actual sampling process, the sample v^(i+1) is drawn from this Gamma posterior, yielding the observation noise degrees-of-freedom sample of the current iteration cycle;
the method adopted in the step D7 is as follows:
according to the Bayes rule:
its logarithmic form is:
obtaining the following according to the prior distribution of the degree of freedom parameters:
log p(ω) = (g_0 − 1) log ω − h_0·ω + c_1
according to the auxiliary parameter definition and the Stirling approximation:
and further:
wherein:
the posterior distribution of the process noise degrees of freedom is approximated by a Gamma distribution, with the shape and rate parameters given by the expressions above; in the actual sampling process, the sample ω^(i+1) is drawn from this Gamma posterior, yielding the process noise degrees-of-freedom sample of the current iteration cycle.
6. The method of claim 5, wherein the step E comprises the following steps:
after N iterative sampling cycles, a set of state variable samples is obtained; the mean of the state samples over the N − N_b iteration cycles after the steady state is reached is taken as the final state estimate, namely:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011399115.4A CN112528479A (en) | 2020-12-01 | 2020-12-01 | Robust self-adaptive smoothing method based on Gibbs sampler |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112528479A true CN112528479A (en) | 2021-03-19 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113326618A (en) * | 2021-06-02 | 2021-08-31 | 江南大学 | Method for estimating initial conditions of culture medium in continuous fermentation process |
CN114252797A (en) * | 2021-12-17 | 2022-03-29 | 华中科技大学 | Uncertainty estimation-based lithium battery remaining service life prediction method |
CN117951653A (en) * | 2024-01-31 | 2024-04-30 | 兰州理工大学 | Smooth tracking method based on Student's t process regression |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20210319