CN108319570B - Asynchronous multi-sensor space-time deviation joint estimation and compensation method and device - Google Patents
- Publication number: CN108319570B
- Application number: CN201810093126.6A
- Authority: CN (China)
- Prior art keywords: state, fusion, estimation, covariance matrix, predicted
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
Abstract
The invention discloses an asynchronous multi-sensor space-time deviation joint estimation and compensation method and device, wherein the method comprises the following steps: step a, calculating the state sampling points and corresponding weights at the k-1 fusion time; step b, calculating the predicted state sampling points at the k-th fusion time, together with the predicted state and the predicted state error covariance matrix; step c, calculating the predicted measurement sampling points and the predicted measurement vector; step d, calculating the innovation covariance matrix and the cross covariance matrix between the state and the observations; step e, determining the estimated state and the estimated state error covariance matrix at the k-th fusion time; step f, reading out the target state estimate, the spatial deviation estimate and the time deviation estimate; step g, setting k = k+1 and repeating the above steps, forming a closed-loop cycle. The device corresponds to the method. Thus, even when the sensor data rates differ, the dimension-extended state vector is estimated and, through iteration, effective estimation and compensation of the space-time deviations are achieved at the same time as the target state estimate is obtained.
Description
Technical Field
The invention relates to the technical field of target tracking and positioning, in particular to a method and a device for asynchronous multi-sensor space-time deviation joint estimation and compensation.
Background
Eliminating systematic errors in the sensor measurement data and achieving time synchronization of the sensor information are prerequisites for correct fusion of multi-sensor information. Most current research focuses on the problems of time alignment and spatial bias registration.
Registration: a prerequisite for multiple sensor tracking [M]// Multitarget-Multisensor Tracking: Advanced Applications, 1990, 1:155-185, proposed a generalized least squares method for the spatial bias registration problem. The method uses the measurement data and the spatial biases at each time instant to derive a corresponding measurement equation in the two-dimensional plane; after N measurements have been received, the N resulting measurement equations are assembled into a linear system, and the estimate of the sensor spatial bias vector is obtained under the least squares criterion. Zhou Y, Leung H, Blanchette M. Sensor alignment with Earth-centered Earth-fixed coordinate system [J]. IEEE Transactions on Aerospace and Electronic Systems, 1999, 35(2):410-418, proposed a maximum likelihood method that estimates the sensor systematic biases and the target state simultaneously and performs spatial bias registration using stereographic projection. These methods address only spatial bias estimation and compensation; they do not consider time delays in multi-sensor fusion, and they can achieve spatial bias registration only when the sensor time delays and measurement times are known exactly.
The prior art thus lacks a method and device that can effectively estimate the space-time deviations and the target state of asynchronous sensors.
In view of the above shortcomings, the inventors, after long-term research and practice directed at the case where the sensor data rates differ, arrived at the present invention.
Disclosure of Invention
In order to solve the technical defects, the technical scheme adopted by the invention is that firstly, an asynchronous multi-sensor space-time deviation joint estimation and compensation method is provided, which comprises the following steps:
step a, estimating a target state by using a filtering algorithm, determining an estimated state at the k-1 fusion moment and an estimated state error covariance matrix, and calculating a state sampling point and a corresponding weight at the k-1 fusion moment;
b, predicting a predicted state sampling point, a predicted state and a predicted state error covariance matrix of the k fusion moment according to the state sampling point and the corresponding weight of the k-1 fusion moment;
step c, calculating a predicted measurement sampling point and a predicted measurement vector of the k fusion moment according to the predicted state sampling point of the k fusion moment;
step d, calculating an innovation covariance matrix and a cross covariance matrix between states and observations at the k fusion moment according to the predicted measurement sampling points and the predicted measurement vectors at the k fusion moment;
step e, determining an estimation state and an estimation state error covariance matrix at the k fusion moment according to the measurement data set at the k fusion moment, the prediction measurement vector, the innovation covariance matrix, the cross covariance matrix, the prediction state and the prediction state error covariance matrix;
step f, reading target state estimation, space deviation estimation and time deviation estimation from the estimation state at the k fusion moment, wherein the space deviation estimation and the time deviation estimation are the space deviation estimation of the sensor in the k fusion period and the relative time deviation estimation between the sensors; the target state estimation is the target state estimation after the k fusion period compensates the space deviation and the time deviation;
step g, setting k = k+1 and repeating the above steps, forming a closed-loop cycle that iterates the estimated state.
Preferably, in the step a, the target state is estimated by using a UKF filtering algorithm.
Preferably, in the step a, the state sampling points and the corresponding weights at the k-1 fusion time are calculated through the unscented transform.
Preferably, in the step a, the state sampling points and the corresponding weights at the k-1 fusion time are calculated as follows (reconstructed here in the standard symmetric 2m-point unscented-transform form, consistent with the variable definitions below):

ξj(k-1|k-1) = Â(k-1|k-1) + [√((m+λ)P(k-1|k-1))]j, j = 1, …, m
ξ(m+j)(k-1|k-1) = Â(k-1|k-1) − [√((m+λ)P(k-1|k-1))]j, j = 1, …, m
Gj = 1 / (2(m+λ)), j = 1, …, 2m

In the formula, ξj(k-1|k-1) and Gj are the j-th state sampling point and weight at the k-1 fusion time, k is the index of the current fusion time, m is the dimension of the state vector, and λ is a scale parameter that determines how the state sampling points ξ are distributed around the estimate at the k-1 fusion time, satisfying (m+λ) ≠ 0; [√((m+λ)P(k-1|k-1))]j is the j-th row or column of that matrix square root; Â(k-1|k-1) and P(k-1|k-1) are, respectively, the estimated state and the estimated state error covariance matrix of the dimension-extended state vector and the dimension-extended state error covariance matrix at the (k-1)-th fusion time.
Preferably, in the step b, the predicted state sampling points, the predicted state and the predicted state error covariance matrix are calculated as follows (the mean and covariance sums reconstructed from the variable definitions below):

ξj(k|k-1) = Γ(k) ξj(k-1|k-1)
Â(k|k-1) = Σ(j=1..2m) Gj ξj(k|k-1)
ΔAj(k|k-1) = ξj(k|k-1) − Â(k|k-1)
P(k|k-1) = Σ(j=1..2m) Gj ΔAj(k|k-1) ΔAj(k|k-1)′ + Q(k)

where Â(k|k-1) is the predicted state, P(k|k-1) is the predicted state error covariance matrix, ξj(k|k-1) is the predicted state sampling point at the k-th fusion time, ξj(k-1|k-1) is the state sampling point at the (k-1)-th fusion time, Γ(k) is the state transition matrix, Gj is the weight of the predicted state sampling point, ΔAj(k|k-1) is the predicted state error, and Q(k) is the process noise covariance matrix.
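The prediction of step b admits a compact sketch (illustrative names; it is assumed the sigma points are stacked row-wise in a NumPy array):

```python
import numpy as np

def predict(pts_prev, weights, Gamma, Q):
    """Step b: propagate sigma points through Gamma(k) and form the
    predicted state, predicted state errors and predicted covariance.

    pts_prev : sigma points at time k-1, stacked row-wise, shape (2m, m)
    """
    pts_pred = pts_prev @ Gamma.T        # xi_j(k|k-1) = Gamma(k) xi_j(k-1|k-1)
    A_pred = weights @ pts_pred          # predicted state
    dA = pts_pred - A_pred               # predicted state errors Delta A_j
    P_pred = dA.T @ (dA * weights[:, None]) + Q   # weighted outer products + Q(k)
    return pts_pred, A_pred, dA, P_pred
```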
Preferably, the state transition matrix Γ(k) has a block-diagonal form (reconstructed; the deviation components are modeled as constant over a fusion period):

Γ(k) = diag(Γxx(k), I2, I2, 1)

where I2 represents the two-dimensional identity matrix, Γxx(k) represents the state transition matrix corresponding to the target state part, and T1 is the period of sensor 1.
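A possible construction of the block-diagonal transition matrix for the s = 2 case follows (the state ordering [x, ẋ, y, ẏ, Δρ1, Δθ1, Δρ2, Δθ2, Δt] and the constant-velocity Γxx block are assumptions consistent with the description, not stated verbatim in the patent):

```python
import numpy as np

def transition_matrix(T1, n_bias=5):
    """Block-diagonal Gamma(k): a constant-velocity block for the target
    state [x, x_dot, y, y_dot], identity for the deviation components,
    which are modeled as constant over a fusion period.

    n_bias = 5 covers [d_rho1, d_theta1, d_rho2, d_theta2, d_t] for s = 2.
    """
    Gamma = np.eye(4 + n_bias)
    Gamma[0, 1] = T1   # x <- x + T1 * x_dot
    Gamma[2, 3] = T1   # y <- y + T1 * y_dot
    return Gamma
```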
Preferably, in the step c, the calculation formula of the predicted measurement sampling points is as follows:

ηj(k|k-1)=h(k,ξj(k|k-1))

where h(·) is the measurement function and ηj(k|k-1) is the predicted measurement sampling point.
Preferably, in the step c, the predicted measurement vector is calculated as (the weighted sum reconstructed from the variable definitions below):

Ẑ(k|k-1) = Σ(j=1..2m) Gj ηj(k|k-1)

where Ẑ(k|k-1) is the predicted measurement vector at the k-th fusion time, Gj is the weight of the predicted state sampling point, and m is the dimension of the dimension-extended state vector.
Preferably, in the step d, the innovation covariance matrix S(k) and the cross covariance matrix Pxz(k) between the state and the observations are calculated as (reconstructed from the variable definitions below):

ΔZj(k|k-1) = ηj(k|k-1) − Ẑ(k|k-1)
S(k) = Σ(j=1..2m) Gj ΔZj(k|k-1) ΔZj(k|k-1)′ + R(k)
Pxz(k) = Σ(j=1..2m) Gj ΔAj(k|k-1) ΔZj(k|k-1)′

where S(k) is the innovation covariance matrix, Pxz(k) is the cross covariance matrix between the state and the observations, R(k) is the measurement noise covariance matrix, ΔZj(k|k-1) is the predicted measurement error, ηj(k|k-1) is the predicted measurement sampling point, Ẑ(k|k-1) is the predicted measurement vector at the k-th fusion time, and ΔAj(k|k-1) is the predicted state error.
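Steps c and d can be sketched together as follows (illustrative names; the measurement function h is passed in, since its concrete form depends on the sensor geometry):

```python
import numpy as np

def measurement_update_terms(pts_pred, A_pred, weights, h, R):
    """Steps c-d: predicted measurement sampling points and vector, the
    innovation covariance S(k) and the cross covariance P_xz(k).

    h : measurement function applied to each predicted sigma point
    """
    etas = np.array([h(p) for p in pts_pred])   # eta_j(k|k-1) = h(xi_j(k|k-1))
    z_pred = weights @ etas                     # predicted measurement vector
    dZ = etas - z_pred                          # predicted measurement errors Delta Z_j
    dA = pts_pred - A_pred                      # predicted state errors Delta A_j
    S = dZ.T @ (dZ * weights[:, None]) + R      # innovation covariance
    Pxz = dA.T @ (dZ * weights[:, None])        # state-measurement cross covariance
    return z_pred, S, Pxz
```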
Secondly, an asynchronous sensor space-time deviation joint estimation and compensation device corresponding to the asynchronous multi-sensor space-time deviation joint estimation and compensation method is provided, which comprises:
the first calculation unit is used for estimating a target state through a filtering algorithm, determining an estimated state at the k-1 fusion moment and an estimated state error covariance matrix, and calculating a state sampling point and a corresponding weight at the k-1 fusion moment;
the second calculation unit is used for predicting a predicted state sampling point, a predicted state and a predicted state error covariance matrix at the k fusion moment according to the state sampling point and the corresponding weight at the k-1 fusion moment;
the third calculation unit is used for calculating a predicted measurement sampling point and a predicted measurement vector of the k fusion moment according to the predicted state sampling point of the k fusion moment;
a fourth calculation unit, configured to calculate an innovation covariance matrix at the k-fusion time, and a cross covariance matrix between states and observations according to the predicted metrology sampling points and the predicted metrology vectors at the k-fusion time;
a fifth calculation unit, configured to determine an estimated state and an estimated state error covariance matrix at a k-fusion time according to the metrology data set at the k-fusion time, the predicted metrology vector, the innovation covariance matrix, the cross covariance matrix, the predicted state, and the predicted state error covariance matrix;
a reading unit for reading a target state estimate, a spatial bias estimate and a temporal bias estimate from the estimated states at k-fusion instants;
and an iteration control unit which iterates the estimation state by repeating the above units for corresponding data by making k equal to k +1 to form a closed loop operation.
Compared with the prior art, the invention has the following beneficial effects: when the sensor data rates differ, the relation between the sensor observations, the target state and the space-time deviations is analyzed, the target state is dimension-extended with the space-time deviations, and the extended state vector is estimated, so that effective estimation and compensation of the space-time deviations are achieved while the target state estimate is obtained.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below.
FIG. 1 is a flow chart of a space-time offset joint estimation and compensation method of an asynchronous sensor according to the present invention;
FIG. 2 is a block diagram of the space-time offset joint estimation and compensation device of the asynchronous sensor according to the present invention;
FIG. 3 is a comparison graph of time offset estimates in a single simulation of the present invention;
FIG. 4 is a comparison graph of range bias estimates in a single simulation of the present invention;
FIG. 5 is a comparison graph of angular deviation estimates in a single simulation of the present invention;
FIG. 6 is a partial enlarged view of a target tracking trajectory in a single simulation of the present invention;
FIG. 7 is a complete graph of target tracking trajectory in a single simulation of the present invention;
FIG. 8 is a comparison graph of the root mean square error of the time deviation estimates in 100 simulations of the present invention;
FIG. 9 is a comparison graph of the root mean square error of the distance deviation estimates in 100 simulations of the present invention;
FIG. 10 is a comparison graph of the root mean square error of the angle deviation estimates in 100 simulations of the present invention;
FIG. 11 is a comparison graph of the root mean square error of the target position estimates in 100 simulations of the present invention;
FIG. 12 is a comparison graph of the root mean square error of the target velocity estimates in 100 simulations of the present invention.
Detailed Description
The above and further features and advantages of the present invention are described in more detail below with reference to the accompanying drawings.
Winston Li, Henry Leung. Simultaneous Registration and Fusion of Multiple Dissimilar Sensors for Cooperative Driving [J]. IEEE Transactions on Intelligent Transportation Systems, 2004, 5(2):84-98, proposed a UKF-based space-time bias registration method that completes the registration of the space-time biases of multiple heterogeneous sensors and the fusion estimation of the target state at the same time.
In consideration of the limitation of the method, aiming at the situation that the sensor data rates are different, the application provides an asynchronous multi-sensor space-time deviation joint estimation and compensation method and device, and the simultaneous estimation of the target state and the space-time deviation is realized on line by using a batch processing method.
Example 1
As shown in fig. 1, the asynchronous sensor space-time offset joint estimation and compensation method includes:
step a, estimating a target state by using a filtering algorithm, determining an estimated state and an estimated state error covariance matrix at the k-1 fusion moment, and calculating a state sampling point and a corresponding weight at the k-1 fusion moment;
in the step, a UKF filtering algorithm is adopted to estimate the target state, so that the accuracy is high and the stability is good.
The UKF (Unscented Kalman Filter), rendered in Chinese as lossless Kalman filtering or unscented Kalman filtering, combines the unscented transform (UT) with the standard Kalman filtering framework: through the unscented transform, the nonlinear system equation is made suitable for the standard Kalman filter derived under the linear assumption.
The state sampling points and the corresponding weights at the k-1 fusion time are calculated with the unscented transform, giving high precision and good stability.
step b, predicting a predicted state sampling point at the k-th fusion time, a predicted state at the k-th fusion time and a predicted state error covariance matrix according to the state sampling points at the k-1 fusion time and the corresponding weights;
step c, calculating a predicted measurement sampling point at the k fusion moment and a predicted measurement vector at the k fusion moment according to the predicted state sampling point at the k fusion moment;
step d, calculating an innovation covariance matrix at the k fusion moment and a cross covariance matrix between states and observations according to the predicted measurement sampling point at the k fusion moment and the predicted measurement vector at the k fusion moment;
step e, determining an estimation state and an estimation state error covariance matrix at the k fusion moment according to the measurement data set, the prediction measurement vector, the innovation covariance matrix, the cross covariance matrix, the prediction state and the prediction state error covariance matrix at the k fusion moment;
step f, reading target state estimation, space deviation estimation and time deviation estimation from the estimation state at the k fusion moment, wherein the space deviation estimation and the time deviation estimation are the space deviation estimation of the sensor in the k fusion period and the relative time deviation estimation between the sensors; and the target state estimation is the target state estimation after the k fusion period compensates the space deviation and the time deviation.
And step g, setting k = k+1 and repeating the above steps, forming a closed-loop cycle that iterates the estimated state.
Therefore, through iteration, the target state estimate moves closer to the true value; with continued iteration it approaches the true value, so that accuracy is greatly improved.
Therefore, under the condition that the data rates of the sensors are different, the relation between the sensor observation and the target state and the space-time deviation is analyzed, the space-time deviation is used for expanding the dimension of the target state, the state vector after the dimension expansion is estimated, and the effective estimation and compensation of the space-time deviation are realized while the target state estimation is obtained.
Example 2
This embodiment differs from the asynchronous multi-sensor space-time offset joint estimation and compensation method described above in that, in step a, the state sampling points and the corresponding weights at the k-1 fusion time are calculated as follows (reconstructed in the standard symmetric unscented-transform form), the number of state sampling points being 2m:

ξj(k-1|k-1) = Â(k-1|k-1) + [√((m+λ)P(k-1|k-1))]j, j = 1, …, m
ξ(m+j)(k-1|k-1) = Â(k-1|k-1) − [√((m+λ)P(k-1|k-1))]j, j = 1, …, m
Gj = 1 / (2(m+λ)), j = 1, …, 2m

In the formula, ξ is a state sampling point and G a weight; ξj(k-1|k-1) and Gj are the j-th state sampling point and weight at the k-1 fusion time; k is the index of the current fusion time; m is the dimension of the state vector; λ is a scale parameter that determines how the state sampling points ξ are distributed around the estimate at the k-1 fusion time and satisfies (m+λ) ≠ 0; [√((m+λ)P(k-1|k-1))]j is the j-th row or column of that matrix square root; Â(k-1|k-1) and P(k-1|k-1) are, respectively, the estimated state and the estimated state error covariance matrix of the dimension-extended state vector A(k-1) and the dimension-extended state error covariance matrix P(k-1) at the (k-1)-th fusion time.
The dimension-extended state vector and the dimension-extended state error covariance matrix at the k-th fusion time are, respectively (layout reconstructed from the variable definitions below):

A(k) = [x(k), ẋ(k), y(k), ẏ(k), Δρ1(k), Δθ1(k), Δρ2(k), Δθ2(k), Δt(k)]′
P(k) = diag(Pxx, Pb1, Pb2, Pt)

where (x(k), y(k)) is the position of the target at the k-th fusion time and (ẋ(k), ẏ(k)) is its velocity; Δρ1(k) and Δθ1(k) are the distance and angle deviations of sensor 1 at the k-th fusion time, Δρ2(k) and Δθ2(k) are the distance and angle deviations of sensor 2 at the k-th fusion time, and Δt(k) is the relative time deviation of the two sensors; Pxx is the target state error covariance matrix, Pb1 and Pb2 are the spatial deviation covariance matrices of sensor 1 and sensor 2 respectively, and Pt is the time deviation covariance matrix.
The specific forms of the dimension expansion state vector A (k-1) and the dimension expansion state error covariance matrix P (k-1) at the k-1 th fusion moment can be obtained according to a similar method.
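Assembling the dimension-extended state vector and its block-diagonal covariance for two sensors can be sketched as follows (a hypothetical helper; the block ordering follows the layout above, and the names are illustrative):

```python
import numpy as np

def augment(x, Pxx, bias1, Pb1, bias2, Pb2, dt, Pt):
    """Assemble the dimension-extended state A(k) = [target; biases; dt]
    and the block-diagonal covariance P(k) = diag(Pxx, Pb1, Pb2, Pt)."""
    A = np.concatenate([x, bias1, bias2, [dt]])
    blocks = [np.atleast_2d(Pxx), np.atleast_2d(Pb1),
              np.atleast_2d(Pb2), np.atleast_2d(Pt)]
    m = sum(b.shape[0] for b in blocks)
    P = np.zeros((m, m))
    i = 0
    for b in blocks:
        n = b.shape[0]
        P[i:i + n, i:i + n] = b   # place each covariance block on the diagonal
        i += n
    return A, P
```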
If the number of sensors s ≥ 3, one of the sensors must be taken as the reference sensor and the relative time deviations of the remaining sensors with respect to it are calculated. Without loss of generality, sensor 1 is taken as the reference sensor. The resulting dimension-extended state vector A(k) and state error covariance matrix P(k) are then represented, respectively, as (layout reconstructed):

A(k) = [x(k), ẋ(k), y(k), ẏ(k), Δρ1(k), Δθ1(k), …, Δρs(k), Δθs(k), Δt2(k), …, Δts(k)]′
P(k) = diag(Pxx, Pb1, …, Pbs, Pt2, …, Pts)

where Δρs(k) and Δθs(k) are respectively the distance and angle deviations of sensor s at the k-th fusion time, Δts(k) is the time deviation of sensor s relative to sensor 1, Pbs is the spatial deviation covariance matrix of sensor s, and Pts is the time deviation covariance matrix of sensor s relative to sensor 1.
Without loss of generality and for ease of description, the processing steps for s = 2 are still described in this application.
example 3
This embodiment differs from the asynchronous multi-sensor space-time offset joint estimation and compensation method described above in that, in the step b, the predicted state sampling points ξj(k|k-1), the predicted state Â(k|k-1) and the predicted state error covariance matrix P(k|k-1) are computed. The predicted state sampling point ξj(k|k-1) denotes the prediction of the state sampling point at the k-th fusion time based on the state sampling point ξj(k-1|k-1) at the (k-1)-th fusion time; the predicted state Â(k|k-1) denotes the prediction of the state at the k-th fusion time based on the state estimate Â(k-1|k-1) at the (k-1)-th fusion time; the predicted state error covariance matrix P(k|k-1) denotes the prediction of the state error covariance matrix at the k-th fusion time based on the covariance matrix estimate P(k-1|k-1) at the (k-1)-th fusion time. In this notation, the quantity to the left of the vertical bar is the fusion time being predicted, and the quantity to the right is the fusion time whose estimate the prediction is conditioned on.
The specific formulas are (the mean and covariance sums reconstructed from the variable definitions below):

ξj(k|k-1) = Γ(k) ξj(k-1|k-1)
Â(k|k-1) = Σ(j=1..2m) Gj ξj(k|k-1)
ΔAj(k|k-1) = ξj(k|k-1) − Â(k|k-1)
P(k|k-1) = Σ(j=1..2m) Gj ΔAj(k|k-1) ΔAj(k|k-1)′ + Q(k)

where ξj(k|k-1) is the predicted state sampling point at the k-th fusion time, ξj(k-1|k-1) is the state sampling point at the (k-1)-th fusion time, Γ(k) is the state transition matrix, Gj is the weight of the predicted state sampling point, ΔAj(k|k-1) is the predicted state error, and Q(k) is the process noise covariance matrix.
The specific formula of the state transition matrix Γ(k) is (block-diagonal form reconstructed; the deviation components are modeled as constant over a fusion period):

Γ(k) = diag(Γxx(k), I2, I2, 1)

where I2 represents a 2-dimensional identity matrix and Γxx(k) represents the state transition matrix corresponding to the target state part; under a constant-velocity model,

Γxx(k) = [1 T1 0 0; 0 1 0 0; 0 0 1 T1; 0 0 0 1]

where T1 is the period of sensor 1.
Example 4
This embodiment differs from the asynchronous multi-sensor space-time offset joint estimation and compensation method described above in that, in step c, the predicted measurement sampling points η and the predicted measurement vector Ẑ(k|k-1) of the k-th fusion period are calculated from the predicted state sampling points; the specific formulas are (the weighted sum reconstructed from the variable definitions below):

ηj(k|k-1) = h(k, ξj(k|k-1))
Ẑ(k|k-1) = Σ(j=1..2m) Gj ηj(k|k-1)

where h(·) is the measurement function and ηj(k|k-1) is the predicted measurement sampling point; the predicted measurement vector Ẑ(k|k-1) at the k-th fusion time is the prediction of the measurement data set Z(k) at the k-th fusion time; Gj is the weight of the predicted state sampling point, and m is the dimension of the dimension-extended state vector.
Example 5
This embodiment differs from the asynchronous multi-sensor space-time offset joint estimation and compensation method described above in that, in step d, the innovation covariance matrix S(k) and the cross covariance matrix Pxz(k) between the state and the observations are calculated from the predicted measurement sampling points η; the specific formulas are (reconstructed from the variable definitions below):

ΔZj(k|k-1) = ηj(k|k-1) − Ẑ(k|k-1)
S(k) = Σ(j=1..2m) Gj ΔZj(k|k-1) ΔZj(k|k-1)′ + R(k)
Pxz(k) = Σ(j=1..2m) Gj ΔAj(k|k-1) ΔZj(k|k-1)′

where R(k) is the measurement noise covariance matrix, ΔZj(k|k-1) is the predicted measurement error, ηj(k|k-1) is the predicted measurement sampling point, Ẑ(k|k-1) is the predicted measurement vector at the k-th fusion time, and ΔAj(k|k-1) is the predicted state error.
Example 6
This embodiment differs from the asynchronous multi-sensor space-time offset joint estimation and compensation method described above in that, in step e, the specific formulas are (the state update reconstructed from the variable definitions below):

Â(k|k) = Â(k|k-1) + K(k) (Z(k) − Ẑ(k|k-1))
P(k|k) = P(k|k-1) − K(k) S(k) K′(k)

where Â(k|k) is the estimated state, i.e. the state vector estimated at the k-th fusion time; P(k|k) is the estimated state error covariance matrix estimated at the k-th fusion time; K(k) is the Kalman gain; and Z(k) is the measurement data set at the k-th fusion time.
The estimated state Â(k|k) comprises the target state and the estimates of the sensor spatial deviations and relative time deviation, thereby realizing simultaneous estimation of the target state and the space-time deviations.
The calculation formula of the Kalman gain K(k) is:

K(k) = Pxz(k) S⁻¹(k)
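The update of step e, with the gain K(k) = Pxz(k)S⁻¹(k), can be sketched as follows (illustrative names; np.linalg.inv is used for clarity, though solving the linear system would be preferred numerically):

```python
import numpy as np

def correct(A_pred, P_pred, z, z_pred, S, Pxz):
    """Step e: Kalman gain, estimated state and estimated covariance."""
    K = Pxz @ np.linalg.inv(S)          # K(k) = P_xz(k) S^{-1}(k)
    A_est = A_pred + K @ (z - z_pred)   # A^(k|k) = A^(k|k-1) + K * innovation
    P_est = P_pred - K @ S @ K.T        # P(k|k) = P(k|k-1) - K S K'
    return A_est, P_est, K
```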
the measurement equation of the k-th fusion time measurement data set z (k) is:
in the formula, in the measurement matrix h (A (k)),n is the m-th fusion cycle of the sensor 2iTime difference m between the actual measurement time corresponding to the individual observation data and the kth fusion time of sensor 1iIndicating that the current measurement data is the m-th data received by the sensor 2iMeasurement data; w (k) is white gaussian observation noise with a mean of zero and a covariance matrix of r (k);
where T1 and T2 are the periods of sensor 1 and sensor 2, respectively;
The measurement data set Z(k) consists of the measurement data of the two sensors in the k-th fusion period; specifically,

Z(k) = {Z1(k), Z2(k)}

Z1(k) is the measurement datum of sensor 1 in the k-th fusion period, and Z2(k) is the set of sensor-2 measurement data falling into the k-th fusion period of sensor 1, specifically:

Z2(k) = {Z2(k, mi), i = 1, …, n}

where Z2(k, mi), i = 1, …, n, denotes that the mi-th measurement datum of sensor 2 falls into the k-th fusion period of sensor 1; n is the number of measurement data falling into the k-th fusion period, its value being determined from the timestamp of the k-th measurement datum of sensor 1, the length of the fusion period, and the timestamps of the sensor-2 measurement data; mi indicates that the current measurement datum is the mi-th measurement received by sensor 2.
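Determining which sensor-2 measurements fall into the k-th fusion period from their timestamps can be sketched as follows (the half-open interval convention [t1_k − T1, t1_k) is an assumption; the patent only states that n is determined from the sensor-1 timestamp, the fusion period length and the sensor-2 timestamps):

```python
def gather_fusion_set(t1_k, T1, t2_stamps):
    """Return the 1-based indices m_i of sensor-2 measurements whose
    timestamps fall inside sensor 1's k-th fusion period, taken here as
    the half-open interval [t1_k - T1, t1_k)."""
    return [i + 1 for i, t in enumerate(t2_stamps) if t1_k - T1 <= t < t1_k]
```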
Example 7
As shown in fig. 2, the asynchronous sensor space-time offset joint estimation and compensation apparatus in this embodiment is an asynchronous sensor space-time offset joint estimation and compensation apparatus corresponding to the asynchronous sensor space-time offset joint estimation and compensation method, and includes:
the first calculation unit 1 is used for estimating a target state through a filtering algorithm, determining an estimated state and an estimated state error covariance matrix at the k-1 fusion moment, and calculating a state sampling point and a corresponding weight at the k-1 fusion moment;
in the unit, a UKF filtering algorithm is adopted to estimate the target state, so that the accuracy is high and the stability is good.
The UKF (Unscented Kalman Filter), rendered in Chinese as lossless Kalman filtering or unscented Kalman filtering, combines the unscented transform (UT) with the standard Kalman filtering framework: through the unscented transform, the nonlinear system equation is made suitable for the standard Kalman filter derived under the linear assumption.
The state sampling points and the corresponding weights at the k-1 fusion time are calculated with the unscented transform, giving high precision and good stability.
The second calculation unit 2 is used for predicting the predicted state sampling point at the k fusion moment, the predicted state at the k fusion moment and a predicted state error covariance matrix according to the state sampling point at the k-1 fusion moment and the corresponding weight;
a third calculating unit 3, configured to calculate a predicted measurement sampling point at the k-fusion time and a predicted measurement vector at the k-fusion time according to the predicted state sampling point at the k-fusion time;
a fourth calculating unit 4, configured to calculate an innovation covariance matrix at the k-fusion time, and a cross covariance matrix between states and observations according to the predicted metrology sampling points at the k-fusion time and the predicted metrology vectors at the k-fusion time;
a fifth calculating unit 5, configured to determine an estimated state and an estimated state error covariance matrix at the k-fusion time according to the metrology data set at the k-fusion time, the predicted metrology vector, the innovation covariance matrix, the cross covariance matrix, the predicted state, and the predicted state error covariance matrix;
a reading unit 6, configured to read a target state estimation, a spatial bias estimation and a time bias estimation from the estimation state at the k-fusion time, where the spatial bias estimation and the time bias estimation are a spatial bias estimation and a relative time bias estimation between sensors of a k-th fusion cycle sensor; and the target state estimation is the target state estimation after the k fusion period compensates the space deviation and the time deviation.
And an iteration control unit 7, which sets k = k+1 and repeats the above units on the corresponding data, forming a closed-loop operation that iterates the estimated state.
Therefore, through iteration, the target state estimate moves closer to the true value; with continued iteration it approaches the true value, so that accuracy is greatly improved.
Therefore, under the condition that the data rates of the sensors are different, the relation between the sensor observation and the target state and the space-time deviation is analyzed, the space-time deviation is used for expanding the dimension of the target state, the state vector after the dimension expansion is estimated, and the effective estimation and compensation of the space-time deviation are realized while the target state estimation is obtained.
Example 8
This embodiment differs from the asynchronous multi-sensor space-time offset joint estimation and compensation device described above in that, in the first calculation unit 1, the state sampling points and the corresponding weights at the k-1 fusion time are calculated as follows (reconstructed in the standard symmetric unscented-transform form), the number of state sampling points being 2m:

ξj(k-1|k-1) = Â(k-1|k-1) + [√((m+λ)P(k-1|k-1))]j, j = 1, …, m
ξ(m+j)(k-1|k-1) = Â(k-1|k-1) − [√((m+λ)P(k-1|k-1))]j, j = 1, …, m
Gj = 1 / (2(m+λ)), j = 1, …, 2m

In the formula, ξ is a state sampling point and G a weight; ξj(k-1|k-1) and Gj are the j-th state sampling point and weight at the k-1 fusion time; k is the index of the current fusion time; m is the dimension of the state vector; λ is a scale parameter that determines how the state sampling points ξ are distributed around the estimate at the k-1 fusion time and satisfies (m+λ) ≠ 0; [√((m+λ)P(k-1|k-1))]j is the j-th row or column of that matrix square root; Â(k-1|k-1) and P(k-1|k-1) are, respectively, the estimated state and the estimated state error covariance matrix of the dimension-extended state vector A(k-1) and the dimension-extended state error covariance matrix P(k-1) at the (k-1)-th fusion time.
The extended-dimension state vector and the extended-dimension state error covariance matrix at the k-th fusion time are, respectively:

A(k) = [x(k), y(k), ẋ(k), ẏ(k), Δρ₁(k), Δθ₁(k), Δρ₂(k), Δθ₂(k), Δt(k)]′
P(k) = diag(P_xx, P_b1, P_b2, P_t)

where A(k) is the state vector of the target at the k-th fusion time, (x(k), y(k)) is the position of the target at the k-th fusion time and (ẋ(k), ẏ(k)) is its velocity; Δρ₁(k) and Δθ₁(k) are the distance and angle deviations of sensor 1 at the k-th fusion time, Δρ₂(k) and Δθ₂(k) are the distance and angle deviations of sensor 2 at the k-th fusion time, and Δt(k) is the relative time deviation between the two sensors; P_xx is the target state error covariance matrix, P_b1 and P_b2 are the spatial deviation covariance matrices of sensor 1 and sensor 2, respectively, and P_t is the time deviation covariance matrix.
The specific forms of the extended-dimension state vector A(k-1) and the extended-dimension state error covariance matrix P(k-1) at the (k-1)-th fusion time are obtained in the same way.
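As an illustration, the extended-dimension quantities can be assembled as below. All numeric values are placeholder assumptions (two sensors, a 4-dimensional target state), and `block_diag` is a hypothetical helper written out for self-containment:

```python
import numpy as np

# Illustrative dimensions: target state (x, y, vx, vy), two sensors with
# (range, angle) deviations, and one relative time deviation -> m = 9.
Pxx = np.eye(4) * 100.0   # assumed target state covariance
Pb1 = np.eye(2) * 1.0     # assumed sensor-1 spatial deviation covariance
Pb2 = np.eye(2) * 1.0     # assumed sensor-2 spatial deviation covariance
Pt = np.eye(1) * 0.01     # assumed time deviation covariance

def block_diag(*blocks):
    """Stack square blocks along the diagonal (zeros elsewhere)."""
    n = sum(b.shape[0] for b in blocks)
    out = np.zeros((n, n))
    i = 0
    for b in blocks:
        r = b.shape[0]
        out[i:i + r, i:i + r] = b
        i += r
    return out

A = np.array([0.0, 0.0, 1.0, 1.0,   # x, y, vx, vy
              0.0, 0.0,             # sensor-1 range/angle deviation
              0.0, 0.0,             # sensor-2 range/angle deviation
              0.0])                 # relative time deviation Δt
P = block_diag(Pxx, Pb1, Pb2, Pt)
```

The block-diagonal form reflects that the target state, the spatial deviations of each sensor, and the time deviation start out mutually uncorrelated.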
If the number of sensors s is greater than or equal to 3, one of the sensors must be taken as the reference sensor, and the relative time deviations between the remaining sensors and the reference sensor are calculated. Without loss of generality, sensor 1 is taken as the reference sensor. The resulting extended-dimension state vector A(k) and state error covariance matrix P(k) are then expressed, respectively, as
where Δρ_s(k) and Δθ_s(k) are, respectively, the distance and angular deviations of sensor s at the k-th fusion time, Δt_s(k) is the time deviation of sensor s relative to sensor 1, P_bs is the spatial deviation covariance matrix of sensor s, and P_ts is the time deviation covariance matrix of sensor s relative to sensor 1.
Without loss of generality and for ease of description, this application continues to describe the case s = 2.
example 9
The asynchronous multi-sensor space-time deviation joint estimation and compensation device of this embodiment differs from the device described above in that the second calculation unit 2 computes the predicted state sampling points ξ_j(k|k-1), the predicted state Â(k|k-1) and the predicted state error covariance matrix P(k|k-1). Here, the predicted state sampling point ξ_j(k|k-1) denotes the prediction of the j-th state sampling point at the k-th fusion time based on the state sampling point ξ_j(k-1|k-1) at the (k-1)-th fusion time; the predicted state Â(k|k-1) denotes the prediction of the state at the k-th fusion time based on the state estimate Â(k-1|k-1) at the (k-1)-th fusion time; and the predicted state error covariance matrix P(k|k-1) denotes the prediction of the state error covariance matrix at the k-th fusion time based on the covariance matrix estimate P(k-1|k-1) at the (k-1)-th fusion time. In the notation (k|k-1), the index to the right of the vertical bar is the (k-1)-th fusion time on which the prediction is conditioned, and the index to the left is the k-th fusion time whose quantity is predicted.
The specific formulas are as follows:

ξ_j(k|k-1) = Γ(k)ξ_j(k-1|k-1)
Â(k|k-1) = Σ_{j=1}^{2m} G_j ξ_j(k|k-1)
ΔA_j(k|k-1) = ξ_j(k|k-1) − Â(k|k-1)
P(k|k-1) = Σ_{j=1}^{2m} G_j ΔA_j(k|k-1)ΔA_j(k|k-1)′ + Q(k)

where ξ_j(k|k-1) is the predicted state sampling point at the k-th fusion time, ξ_j(k-1|k-1) is the state sampling point at the (k-1)-th fusion time, Γ(k) is the state transition matrix, G_j is the weight of the predicted state sampling point, ΔA_j(k|k-1) is the predicted state error, and Q(k) is the process noise covariance matrix.
The state transition matrix Γ(k) has the specific form:

Γ(k) = diag(Γxx(k), I₂, I₂, 1)

where I₂ denotes the 2-dimensional identity matrix and Γxx(k) denotes the state transition matrix corresponding to the target state part, given by:

Γxx(k) = [ I₂  T₁I₂ ; 0₂  I₂ ]

in which T₁ is the period of sensor 1.
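A minimal sketch of this prediction step follows. It assumes the state ordering [x, y, ẋ, ẏ, deviations, Δt] and that the deviation and time-offset states propagate as constants; `transition` and `predict` are hypothetical helper names:

```python
import numpy as np

def transition(T1):
    """Gamma(k): constant-velocity block for the target, identity for deviations."""
    I2, Z2 = np.eye(2), np.zeros((2, 2))
    Gxx = np.block([[I2, T1 * I2], [Z2, I2]])  # assumed ordering [x, y, vx, vy]
    Gamma = np.eye(9)                          # deviation states stay constant
    Gamma[:4, :4] = Gxx
    return Gamma

def predict(xi, G, Gamma, Q):
    """Propagate sigma points; form predicted mean and covariance."""
    xi_pred = xi @ Gamma.T                 # xi_j(k|k-1) = Gamma(k) xi_j(k-1|k-1)
    A_pred = G @ xi_pred                   # weighted mean of predicted points
    dA = xi_pred - A_pred                  # predicted state errors
    P_pred = (dA * G[:, None]).T @ dA + Q  # weighted outer products + Q(k)
    return xi_pred, A_pred, P_pred
```

For example, with T₁ = 2 a unit x-velocity advances the x position by 2 over one fusion period.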
Example 10
The asynchronous multi-sensor space-time deviation joint estimation and compensation device of this embodiment differs from the device described above in that the third calculation unit 3 uses the predicted state sampling points to calculate the predicted measurement sampling points η and the predicted measurement vector Ẑ(k|k-1) of the k-th fusion cycle. The specific formulas are as follows:
ηj(k|k-1)=h(k,ξj(k|k-1))
Ẑ(k|k-1) = Σ_{j=1}^{2m} G_j η_j(k|k-1)

where h(·) is the measurement function, η_j(k|k-1) is the predicted measurement sampling point, and the predicted measurement vector Ẑ(k|k-1) at the k-th fusion time is the prediction of the measurement data set Z(k) at the k-th fusion time; G_j is the weight of the predicted state sampling point and m is the dimension of the extended-dimension state vector.
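This step can be sketched as below. The helper `predict_measurement` is hypothetical, and `h_demo` is only an assumed placeholder measurement function (range and bearing of the position, offset by the sensor-1 deviation states), not the patent's h:

```python
import numpy as np

def predict_measurement(xi_pred, G, h):
    """eta_j(k|k-1) = h(xi_j(k|k-1)); Z_hat(k|k-1) = sum_j G_j eta_j(k|k-1)."""
    eta = np.array([h(x) for x in xi_pred])  # one predicted measurement per point
    Z_hat = G @ eta                          # weighted mean of the eta_j
    return eta, Z_hat

def h_demo(x):
    """Assumed demo: range/bearing of (x, y) plus the sensor-1 deviations."""
    rng = np.hypot(x[0], x[1])
    ang = np.arctan2(x[1], x[0])
    return np.array([rng + x[4], ang + x[5]])
```

Because h is generally nonlinear, the weighted average is taken over the transformed points rather than transforming the averaged state.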
Example 11
The asynchronous multi-sensor space-time deviation joint estimation and compensation device of this embodiment differs from the device described above in that the fourth calculation unit 4 uses the predicted measurement sampling points η and the predicted measurement vector to calculate the innovation covariance matrix S(k) and the cross-covariance matrix P_xz(k) between state and observation. The specific formulas are as follows:
ΔZ_j(k|k-1) = η_j(k|k-1) − Ẑ(k|k-1)
S(k) = Σ_{j=1}^{2m} G_j ΔZ_j(k|k-1)ΔZ_j(k|k-1)′ + R(k)
P_xz(k) = Σ_{j=1}^{2m} G_j ΔA_j(k|k-1)ΔZ_j(k|k-1)′

where R(k) is the measurement noise covariance matrix, ΔZ_j(k|k-1) is the predicted measurement error, η_j(k|k-1) is the predicted measurement sampling point, Ẑ(k|k-1) is the predicted measurement vector at the k-th fusion time, and ΔA_j(k|k-1) is the predicted state error.
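These two covariances can be sketched with the same weighted-outer-product pattern as the prediction step; `innovation_covariances` is a hypothetical helper name:

```python
import numpy as np

def innovation_covariances(eta, Z_hat, xi_pred, A_pred, G, R):
    """S(k) and Pxz(k) from predicted measurement and state sigma points."""
    dZ = eta - Z_hat                  # predicted measurement errors dZ_j(k|k-1)
    dA = xi_pred - A_pred             # predicted state errors dA_j(k|k-1)
    S = (dZ * G[:, None]).T @ dZ + R  # innovation covariance
    Pxz = (dA * G[:, None]).T @ dZ    # state-observation cross covariance
    return S, Pxz
```

Note that R(k) enters only the innovation covariance: the measurement noise is uncorrelated with the predicted state errors, so it does not appear in P_xz(k).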
Example 12
The asynchronous multi-sensor space-time deviation joint estimation and compensation device of this embodiment differs from the device described above in that, in the fifth calculation unit 5, the specific formulas are as follows:

Â(k|k) = Â(k|k-1) + K(k)(Z(k) − Ẑ(k|k-1))
P(k|k)=P(k|k-1)-K(k)S(k)K′(k)
where Â(k|k) is the estimated state, i.e. the estimated state vector at the k-th fusion time; P(k|k) is the estimated state error covariance matrix at the k-th fusion time; K(k) is the Kalman gain; and Z(k) is the measurement data set at the k-th fusion time.
The estimated state Â(k|k) contains the target state as well as the sensor spatial deviations and the relative time deviation, so the target state and the space-time deviations are estimated simultaneously.
The calculation formula of the Kalman gain K (k) is as follows:
K(k)=Pxz(k)S⁻¹(k)
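The update can be sketched as follows; note that the gain uses the inverse of the innovation covariance, K(k) = P_xz(k)S⁻¹(k). The helper name `update` is hypothetical:

```python
import numpy as np

def update(A_pred, P_pred, Z, Z_hat, S, Pxz):
    """Fifth-unit update: K(k) = Pxz(k) S^-1(k), then correct state/covariance."""
    K = Pxz @ np.linalg.inv(S)
    A_est = A_pred + K @ (Z - Z_hat)  # A(k|k): correct by the innovation
    P_est = P_pred - K @ S @ K.T      # P(k|k): reduce predicted uncertainty
    return A_est, P_est, K
```

In the scalar case with P_xz = S, the gain is 1 and the estimate moves fully onto the measurement, which is a quick sanity check on the formulas.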
the measurement equation of the k-th fusion time measurement data set z (k) is:
Z(k) = h(A(k)) + W(k)

In the formula, within the measurement function h(A(k)), the time difference between the actual measurement time corresponding to the m_i-th observation of sensor 2 and the k-th fusion time of sensor 1 is taken into account, where m_i indicates that the current measurement datum is the m_i-th measurement datum received by sensor 2; W(k) is zero-mean white Gaussian observation noise with covariance matrix R(k);
in the formula, T₁ and T₂ are the periods of sensor 1 and sensor 2, respectively;
The measurement data set Z(k) consists of the measurement data of the two sensors in the k-th fusion cycle, specifically:

Z(k) = {Z₁(k), Z₂(k)}

where Z₁(k) is the measurement datum of sensor 1 in the k-th fusion cycle and Z₂(k) is the measurement data set of sensor 2 within the k-th fusion period of sensor 1, specifically:

Z₂(k) = {Z₂(k, m_i), i = 1, …, n}

where Z₂(k, m_i), i = 1, …, n, denotes the m_i-th measurement datum of sensor 2 that falls into the k-th fusion period of sensor 1; n is the number of measurement data falling into the k-th fusion period, determined from the timestamp of the k-th measurement datum of sensor 1, the length of the fusion period, and the timestamps of the measurement data of sensor 2; m_i indicates that the current measurement datum is the m_i-th measurement datum received by sensor 2.
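Under the assumption of strictly periodic sampling, the selection of n and the indices m_i reduces to timestamp binning, sketched below. The half-open interval convention and the numeric periods are illustrative assumptions only:

```python
import numpy as np

T1, T2 = 1.0, 0.25  # assumed periods of sensor 1 and sensor 2
k = 3               # current fusion cycle of sensor 1

# Sensor-2 timestamps are m * T2 (indexing from 1 to mirror m_i in the text).
# Measurements with timestamps in ((k-1)*T1, k*T1] fall into fusion period k.
m_all = np.arange(1, 20)
stamps = m_all * T2
in_period = (stamps > (k - 1) * T1) & (stamps <= k * T1)
m_i = m_all[in_period]  # indices m_i of sensor-2 data used at fusion time k
n = m_i.size            # number of sensor-2 measurements in this fusion cycle
```

With T₂ = 0.25 and T₁ = 1, each fusion cycle collects four sensor-2 measurements; with irregular timestamps the same mask works unchanged on a timestamp array.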
Example 13
A simulation is performed according to the implementation process of the asynchronous multi-sensor space-time deviation joint estimation and compensation method and device described above, in order to verify their specific effect.
First, the space-time deviation estimates at each fusion time and the target tracking track in a single simulation are given; the simulation results are shown in figs. 3-7.
As can be seen from figs. 3 to 5, as the filtering process proceeds, the time deviation and spatial deviation estimates obtained with the batch-processing-based space-time deviation joint estimation and compensation method (BP-ST-BR) adopted in the present invention gradually converge to the vicinity of the true values, indicating that the algorithm achieves effective estimation of the space-time deviations. As can be seen from figs. 4-5, the conventional method (R-BR) does not consider the presence of time deviation in the observed data during processing, and its distance and angle deviation estimates perform poorly, illustrating the necessity of considering the time deviation as the method of the present invention does. As can be seen from figs. 6 and 7, after the measured track containing the space-time deviations is processed by the present invention, the offset between the filtered track and the real track is greatly reduced, indicating that the present invention estimates the target state well.
Monte Carlo simulation is performed to verify the estimation performance of the algorithm: the performance of the system is verified through a large number of computer simulation runs whose results are statistically summarized, which avoids the influence on the performance evaluation of estimation degradation caused by small-probability events in a single simulation. To measure the performance of the algorithm, a suitable performance index must be selected; in filtering estimation problems, the difference between the estimated value and the true value is usually measured by the root mean square error (RMSE). Taking the target state as an example, the calculation formula is:

RMSE(k) = √( (1/N) Σ_{i=1}^{N} ‖x(k) − x̂_i(k|k)‖² )

where N is the number of simulation runs and x̂_i(k|k) is the estimate obtained in the i-th run. N Monte Carlo simulations are carried out on the data at each fusion time, and the root mean square error is obtained from the sum of squared differences between the true values and the estimated values. In theory, if the algorithm has good estimation performance, the estimated value gradually approaches the true value as the filtering proceeds, and the root mean square error curve shows a convergence trend. Simulation results for the root mean square error of the space-time deviation estimates and the target state estimate over 100 Monte Carlo simulations are given below.
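The RMSE evaluation described above can be sketched as follows; `rmse_curve` is a hypothetical helper name, and the array shapes are assumptions for illustration:

```python
import numpy as np

def rmse_curve(truth, estimates):
    """Per-fusion-time RMSE over N Monte Carlo runs.

    truth     : shape (K, d), true state at each of K fusion times
    estimates : shape (N, K, d), estimate from each of N runs
    returns   : shape (K,), RMSE at each fusion time
    """
    err = estimates - truth[None, :, :]                # per-run errors
    return np.sqrt((err ** 2).sum(axis=2).mean(axis=0))  # norm^2, then mean over runs
```

A converging filter shows this curve decreasing toward a floor set by the noise covariances.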
As can be seen from figs. 8 to 12, the root mean square errors of the target state estimate and the space-time deviation estimates obtained with the batch-processing-based space-time deviation joint estimation and compensation method (BP-ST-BR) used in the present invention both exhibit a convergence trend, and a stable filtering state is reached as the filtering process proceeds, which illustrates the effectiveness of the method; in contrast, the conventional method (R-BR) in figs. 9-12 does not consider the presence of time deviation in the observed data during processing, and the poor root mean square error performance of its target state estimate and spatial deviation estimates illustrates the necessity of taking the time deviation into account, as the method of the present invention does.
The foregoing is merely a preferred embodiment of the invention, which is intended to be illustrative and not limiting. It will be understood by those skilled in the art that various changes, modifications and equivalents may be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (10)
1. An asynchronous multi-sensor space-time offset joint estimation and compensation method is characterized by comprising the following steps:
step a, estimating a target state by using a filtering algorithm, determining an estimated state at the k-1 fusion moment and an estimated state error covariance matrix, and calculating a state sampling point and a corresponding weight at the k-1 fusion moment;
b, predicting a predicted state sampling point, a predicted state and a predicted state error covariance matrix of the k fusion moment according to the state sampling point and the corresponding weight of the k-1 fusion moment;
step c, calculating a predicted measurement sampling point and a predicted measurement vector of the k fusion moment according to the predicted state sampling point of the k fusion moment;
step d, calculating an innovation covariance matrix and a cross covariance matrix between states and observations at the k fusion moment according to the predicted measurement sampling points and the predicted measurement vectors at the k fusion moment;
step e, determining an estimation state and an estimation state error covariance matrix at the k fusion moment according to the measurement data set at the k fusion moment, the prediction measurement vector, the innovation covariance matrix, the cross covariance matrix, the prediction state and the prediction state error covariance matrix;
step f, reading target state estimation, space deviation estimation and time deviation estimation from the estimation state at the k fusion moment, wherein the space deviation estimation and the time deviation estimation are the space deviation estimation of the sensor in the k fusion period and the relative time deviation estimation between the sensors; the target state estimation is the target state estimation after the k fusion period compensates the space deviation and the time deviation;
and step g, setting k = k + 1 and repeating the above steps to form a closed-loop cycle that iterates the estimated state.
2. The asynchronous multi-sensor space-time offset joint estimation and compensation method according to claim 1, wherein in step a, a target state is estimated using a UKF filtering algorithm.
3. The asynchronous multi-sensor space-time deviation joint estimation and compensation method according to claim 1, wherein in step a, the state sampling points and the corresponding weights at the (k-1)-th fusion time are calculated by means of the unscented transform.
4. The asynchronous multi-sensor space-time offset joint estimation and compensation method according to claim 1, 2 or 3, wherein in step a, the specific calculation formula of the state sampling points and the corresponding weights at the k-1 fusion time is:
ξ_j(k-1|k-1) = Â(k-1|k-1) + (√((m+λ)P(k-1|k-1)))_j, ξ_{m+j}(k-1|k-1) = Â(k-1|k-1) − (√((m+λ)P(k-1|k-1)))_j, G_j = 1/(2(m+λ)), j = 1, …, m

in the formula, ξ_j(k-1|k-1) and G_j are the j-th state sampling point and weight at the (k-1)-th fusion time; k represents the serial number of the current fusion time; m is the dimension of the state vector; λ is a scale parameter that determines how the state sampling points ξ are distributed around the estimate Â(k-1|k-1) at the (k-1)-th fusion time, satisfying (m+λ) ≠ 0; (√((m+λ)P(k-1|k-1)))_j is the j-th row or column of the matrix square root of (m+λ)P(k-1|k-1); Â(k-1|k-1) is the estimated state of the extended-dimension state vector A(k-1) at the (k-1)-th fusion time; P(k-1|k-1) is the estimated state error covariance matrix of the extended-dimension state error covariance matrix P(k-1) at the (k-1)-th fusion time.
5. The asynchronous multi-sensor space-time deviation joint estimation and compensation method according to claim 4, wherein in step b, the predicted state sampling points, the predicted state and the predicted state error covariance matrix are calculated as:

ξ_j(k|k-1) = Γ(k)ξ_j(k-1|k-1)
Â(k|k-1) = Σ_{j=1}^{2m} G_j ξ_j(k|k-1)
P(k|k-1) = Σ_{j=1}^{2m} G_j ΔA_j(k|k-1)ΔA_j(k|k-1)′ + Q(k)

wherein Â(k|k-1) is the predicted state, P(k|k-1) is the predicted state error covariance matrix, ξ_j(k|k-1) is the predicted state sampling point at the k-th fusion time, ξ_j(k-1|k-1) is the state sampling point at the (k-1)-th fusion time, Γ(k) is the state transition matrix, G_j is the weight of the predicted state sampling point, ΔA_j(k|k-1) = ξ_j(k|k-1) − Â(k|k-1) is the predicted state error, and Q(k) is the process noise covariance matrix.
6. The asynchronous multi-sensor space-time deviation joint estimation and compensation method of claim 5, wherein the state transition matrix Γ(k) has the specific form:

Γ(k) = diag(Γxx(k), I₂, I₂, 1)
Γxx(k) = [ I₂  T₁I₂ ; 0₂  I₂ ]

wherein I₂ represents the two-dimensional identity matrix, Γxx(k) represents the state transition matrix corresponding to the target state part, and T₁ is the period of sensor 1.
7. The asynchronous multi-sensor space-time offset joint estimation and compensation method according to claim 6, wherein in step c, the calculation formula of the predicted measurement sampling points is as follows:
ηj(k|k-1)=h(k,ξj(k|k-1))
wherein h(·) is the measurement function and η_j(k|k-1) is the predicted measurement sampling point.
8. The asynchronous multi-sensor space-time deviation joint estimation and compensation method according to claim 7, wherein in step c, the predicted measurement vector is calculated as: Ẑ(k|k-1) = Σ_{j=1}^{2m} G_j η_j(k|k-1).
9. The asynchronous multi-sensor space-time deviation joint estimation and compensation method of claim 8, wherein in step d, the innovation covariance matrix S(k) and the cross-covariance matrix P_xz(k) between states and observations are calculated by the specific formulas:

ΔZ_j(k|k-1) = η_j(k|k-1) − Ẑ(k|k-1)
S(k) = Σ_{j=1}^{2m} G_j ΔZ_j(k|k-1)ΔZ_j(k|k-1)′ + R(k)
P_xz(k) = Σ_{j=1}^{2m} G_j ΔA_j(k|k-1)ΔZ_j(k|k-1)′

wherein S(k) is the innovation covariance matrix, P_xz(k) is the cross-covariance matrix between the states and the observations, R(k) is the measurement noise covariance matrix, ΔZ_j(k|k-1) is the predicted measurement error, η_j(k|k-1) is the predicted measurement sampling point, Ẑ(k|k-1) is the predicted measurement vector at the k-th fusion time, and ΔA_j(k|k-1) is the predicted state error.
10. An asynchronous multi-sensor space-time offset joint estimation and compensation apparatus corresponding to the asynchronous multi-sensor space-time offset joint estimation and compensation method of any one of claims 1 to 9, comprising:
the first calculation unit is used for estimating a target state through a filtering algorithm, determining an estimated state at the k-1 fusion moment and an estimated state error covariance matrix, and calculating a state sampling point and a corresponding weight at the k-1 fusion moment;
the second calculation unit is used for predicting a predicted state sampling point, a predicted state and a predicted state error covariance matrix at the k fusion moment according to the state sampling point and the corresponding weight at the k-1 fusion moment;
the third calculation unit is used for calculating a predicted measurement sampling point and a predicted measurement vector of the k fusion moment according to the predicted state sampling point of the k fusion moment;
a fourth calculation unit, configured to calculate the innovation covariance matrix at the k-th fusion time and the cross-covariance matrix between states and observations according to the predicted measurement sampling points and the predicted measurement vector at the k-th fusion time;
a fifth calculation unit, configured to determine the estimated state and the estimated state error covariance matrix at the k-th fusion time according to the measurement data set at the k-th fusion time, the predicted measurement vector, the innovation covariance matrix, the cross-covariance matrix, the predicted state, and the predicted state error covariance matrix;
a reading unit, configured to read the target state estimate, the spatial deviation estimate and the time deviation estimate from the estimated state at the k-th fusion time;
and an iteration control unit which iterates the estimation state by repeating the above units for corresponding data by making k equal to k +1 to form a closed loop operation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810093126.6A CN108319570B (en) | 2018-01-31 | 2018-01-31 | Asynchronous multi-sensor space-time deviation joint estimation and compensation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108319570A CN108319570A (en) | 2018-07-24 |
CN108319570B true CN108319570B (en) | 2021-06-08 |
Family
ID=62890242
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810093126.6A Active CN108319570B (en) | 2018-01-31 | 2018-01-31 | Asynchronous multi-sensor space-time deviation joint estimation and compensation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108319570B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109800819A (en) * | 2019-01-28 | 2019-05-24 | 哈尔滨工业大学 | Deviation compensation method and device when period arbitrary multisensor sky |
CN110221263B (en) * | 2019-07-03 | 2021-12-14 | 北京电子工程总体研究所 | Error estimation method and system for multi-sensor system |
CN111308124B (en) * | 2020-04-02 | 2021-09-24 | 中国航空工业集团公司北京长城计量测试技术研究所 | Method for determining time difference of speed measuring sensor of shock tube |
CN112146648B (en) * | 2020-09-23 | 2022-08-19 | 河北工业大学 | Multi-target tracking method based on multi-sensor data fusion |
CN112285697B (en) * | 2020-10-20 | 2023-09-26 | 哈尔滨工业大学 | Multi-sensor multi-target space-time deviation calibration and fusion method |
CN113433850B (en) * | 2021-06-04 | 2022-06-03 | 电子科技大学 | Method for repairing abnormal logic of FPGA (field programmable Gate array) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102783013A (en) * | 2010-03-16 | 2012-11-14 | 里特电子有限公司 | Methods and devices for the determination of slip frequency and for the automatic control of an asynchronous motor |
CN103714045A (en) * | 2014-01-09 | 2014-04-09 | 北京理工大学 | Information fusion estimation method for asynchronous multi-rate non-uniform sampled observation data |
CN107181438A (en) * | 2017-06-06 | 2017-09-19 | 哈尔滨工业大学深圳研究生院 | Speed Sensorless Control Method of Asynchronous Motor based on modified Q MRAS |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8565909B2 (en) * | 2010-02-24 | 2013-10-22 | Disney Enterprises, Inc. | Fabrication of materials with desired characteristics from base materials having determined characteristics |
US10453018B2 (en) * | 2014-01-14 | 2019-10-22 | Deere & Company | Agricultural information sensing and retrieval |
2018-01-31: application CN201810093126.6A filed in China; patent CN108319570B granted and active.
Non-Patent Citations (2)
Title |
---|
"AUV assisted Asynchronous Localization for Underwater Sensor Networks";Jing Yan 等;《第35届中国控制会议论文集(E)》;20160727;第559-564页 * |
"超视距探测信息时空对齐误差链模型与补偿技术";任雪飞;《中国优秀硕士学位论文全文数据库 信息科技辑》;20120515(第5期);第I140-176页 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108319570B (en) | Asynchronous multi-sensor space-time deviation joint estimation and compensation method and device | |
CN111985093B (en) | Adaptive unscented Kalman filtering state estimation method with noise estimator | |
CN106950562B (en) | State fusion target tracking method based on predicted value measurement conversion | |
CN103278813B (en) | State estimation method based on high-order unscented Kalman filtering | |
CN104713560B (en) | Multi-source distance measuring sensor spatial registration method based on expectation maximization | |
EP2098882A2 (en) | Location measurement method based on predictive filter | |
CN110208792B (en) | Arbitrary straight line constraint tracking method for simultaneously estimating target state and track parameters | |
CN110501696B (en) | Radar target tracking method based on Doppler measurement adaptive processing | |
CN109186601A (en) | A kind of laser SLAM algorithm based on adaptive Unscented kalman filtering | |
CN113074739A (en) | UWB/INS fusion positioning method based on dynamic robust volume Kalman | |
CN108318856A (en) | The target positioning of fast accurate and tracking under a kind of heterogeneous network | |
CN105043388A (en) | Vector search iterative matching method based on inertia/gravity matching integrated navigation | |
CN108896986A (en) | A kind of measurement conversion Sequential filter maneuvering target tracking method based on predicted value | |
CN106597498B (en) | Space-time deviation joint calibration method for multi-sensor fusion system | |
CN109507706B (en) | GPS signal loss prediction positioning method | |
CN111965618B (en) | Conversion measurement tracking method and system integrating Doppler measurement | |
CN110231620A (en) | A kind of noise correlation system tracking filter method | |
CN108871365B (en) | State estimation method and system under course constraint | |
CN111711432B (en) | Target tracking algorithm based on UKF and PF hybrid filtering | |
Su et al. | Underwater angle-only tracking with propagation delay and time-offset between observers | |
CN110657806A (en) | Position resolving method based on CKF, chan resolving and Savitzky-Golay smooth filtering | |
CN104021285B (en) | A kind of interactive multi-model method for tracking target with optimal motion pattern switching parameter | |
CN110677140B (en) | Random system filter containing unknown input and non-Gaussian measurement noise | |
CN101299271A (en) | Polynomial forecast model of maneuvering target state equation and tracking method | |
CN111340853B (en) | Multi-sensor GMPHD self-adaptive fusion method based on OSPA iteration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||