CN109246637B - Distributed sensor network collaborative registration method and system - Google Patents

Distributed sensor network collaborative registration method and system

Info

Publication number
CN109246637B
Authority
CN
China
Prior art keywords
sensor
Kalman filtering
module
node
representing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810950034.5A
Other languages
Chinese (zh)
Other versions
CN109246637A (en)
Inventor
敬忠良
沈楷
董鹏
孙印帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201810950034.5A priority Critical patent/CN109246637B/en
Publication of CN109246637A publication Critical patent/CN109246637A/en
Application granted granted Critical
Publication of CN109246637B publication Critical patent/CN109246637B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30: Services specially adapted for particular environments, situations or purposes
    • H04W4/38: Services specially adapted for particular environments, situations or purposes for collecting sensor information
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W24/00: Supervisory, monitoring or testing arrangements
    • H04W24/02: Arrangements for optimising operational condition

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Navigation (AREA)

Abstract

The invention provides a distributed sensor network collaborative registration method and system. Initial sensor registration error parameters and initial target parameters are formed at each sensor node; the EM iterative computation is started and forward Kalman filtering is carried out on the target parameters; inverse Kalman filtering is then performed on the target parameters subjected to the forward Kalman filtering; each sensor node smooths the target state estimate using the results of the forward and inverse Kalman filtering steps; each sensor node then solves for its own sensor registration error estimate using the smoothed target state estimate; if the EM iteration is not finished, the method returns to the forward Kalman filtering step, and if the EM iteration is finished, each sensor node outputs its own sensor registration error estimate. The registration process of the invention requires neither a central node nor a fully connected structure between nodes, and the method is simple, effective and easy to implement.

Description

Distributed sensor network collaborative registration method and system
Technical Field
The invention relates to the technical field of communication, in particular to a distributed sensor network collaborative registration method and system.
Background
A distributed sensor network is a sensor network without a central node and without full connection between nodes. In a distributed sensor network, each node can communicate only with the nodes within its communication range. This characteristic gives the distributed sensor network a flexible and changeable topological structure, strong adaptability to the environment and robustness to node failures, and it therefore has wide application value and broad application prospects. However, while this characteristic provides the aforementioned benefits, it also poses significant challenges for the information processing of the sensor network: a structure without a central node and without full connection requires a completely distributed information processing method. To solve this problem, many scholars, represented by R. Olfati-Saber and G. Battistelli among others, have proposed distributed sensor network information processing methods based on a consensus (consistency) strategy. In such methods, local communication between adjacent nodes in the distributed sensor network is exploited and consensus iterations are carried out between adjacent nodes, so that all nodes in the distributed sensor network obtain globally consistent estimates.
The core value of a sensor network lies in sensing the external environment with multi-source sensors: through the fusion of multi-source information the system obtains higher precision than a single sensor, and the performance of the whole sensor network system is improved. However, in practical system applications, the presence of sensor registration errors can significantly degrade the performance of the fusion and even lead to fusion failure. For this reason, sensor registration is an essential and important link in the application of sensor networks. In centralized systems, the sensor registration problem has been studied intensively, and many algorithms, including the least squares method and the maximum likelihood method and their improvements, have been proposed. However, these methods all require the information processing to be performed centrally at a central node, which limits their application in distributed sensor networks.
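For illustration (an assumption added here, not wording from the original text), a registration error is commonly modelled as a constant bias η^i added to an otherwise ideal measurement, e.g. for a range/bearing sensor i observing a target with state x_k,

z_k^i = h^i(x_k) + η^i + v_k^i,

where h^i(·) is the ideal measurement equation, η^i is the unknown registration error to be estimated, and v_k^i is the measurement noise; if η^i is ignored, every measurement of sensor i is shifted by the same offset, which biases the fused estimate.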
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a distributed sensor network collaborative registration method and system.
The invention provides a distributed sensor network collaborative registration method, which comprises the following steps:
an initialization step: forming initial sensor registration error parameters according to prior information and forming initial target parameters according to a given initial target state and an error covariance matrix at each sensor node;
a forward Kalman filtering step: starting EM iterative computation, and carrying out forward Kalman filtering on the target parameters;
an inverse Kalman filtering step: performing inverse Kalman filtering on the target parameters subjected to the forward Kalman filtering;
a target state estimation smoothing step: smoothing the target state estimate at each sensor node by using the results of the forward Kalman filtering step and the inverse Kalman filtering step;
a sensor registration error estimate solving step: each sensor node uses the smoothed target state estimate obtained in the target state estimation smoothing step to solve its own sensor registration error estimate;
a judging step: if the EM iteration is not finished, returning to the forward Kalman filtering step; and if the EM iteration is finished, outputting respective sensor registration error estimated values by each sensor node.
Preferably, the forward Kalman filtering in the forward Kalman filtering step includes:
forward initial step filtering: performing forward consistency Kalman filtering by using the initial sensor registration error parameters and the initial target parameters in the initialization step as initial values;
forward non-initial step filtering: performing forward consistency Kalman filtering by using the sensor registration error estimate obtained in the sensor registration error estimate solving step and the initial target parameters in the initialization step as initial values.
Preferably, the inverse Kalman filtering in the inverse Kalman filtering step includes:
inverse initial step filtering: performing inverse consistency Kalman filtering by using the target parameters obtained after the forward Kalman filtering in the forward Kalman filtering step and the initial sensor registration error parameters in the initialization step as initial values;
inverse non-initial step filtering: performing inverse consistency Kalman filtering by using the target parameters obtained after the forward Kalman filtering in the forward Kalman filtering step and the sensor registration error parameter estimate obtained in the sensor registration error estimate solving step as initial values.
Preferably, the EM iteration adopts an M-step EM iteration, the forward Kalman filtering adopts N-step forward Kalman filtering, and the inverse Kalman filtering adopts N-step inverse Kalman filtering;
wherein M and N are positive integers.
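To make the overall structure concrete, the following sketch (not part of the patent; Python is used only for illustration, and the helper routines are hypothetical placeholders for the steps defined above) shows how the M-step EM iteration wraps the N-step forward and inverse consistency Kalman filters at a single node:

```python
def distributed_em_registration(node, measurements, M, N, L):
    """Structural sketch of the M-step EM loop at one sensor node.

    `node` is assumed to hold the initial parameters of the initialization step
    (x0, P0, eta0) and the hypothetical per-step routines used below; none of
    these names come from the patent itself.
    """
    eta = node.eta0                                   # initial registration error parameter
    for m in range(M):                                # EM iterations m = 1..M
        # N-step forward consistency Kalman filtering on the target parameters
        fwd = node.forward_consensus_kf(measurements, eta, node.x0, node.P0, N, L)
        # N-step inverse (backward) consistency Kalman filtering
        bwd = node.backward_consensus_kf(measurements, eta, N, L)
        # smooth the target state estimates from the two passes
        smoothed = node.smooth(fwd, bwd)
        # solve this node's registration error estimate from the smoothed states
        eta = node.solve_registration_error(measurements, smoothed, eta)
    return eta                                        # the final estimate eta^(i,M)
```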
The invention provides a distributed sensor network collaborative registration system, which comprises the following modules:
an initialization module: forming initial sensor registration error parameters according to prior information and forming initial target parameters according to a given initial target state and an error covariance matrix at each sensor node;
a forward Kalman filtering module: starting EM iterative computation, and carrying out forward Kalman filtering on the target parameters;
an inverse Kalman filtering module: performing reverse Kalman filtering on the target parameters subjected to forward Kalman filtering;
a target state estimation smoothing module: each sensor node smoothes the target state estimation by using the results of the forward Kalman filtering module and the reverse Kalman filtering module;
a sensor registration error estimation value solving module: each sensor node uses the smoothed target state estimation obtained by the target state estimation smoothing module to solve the respective sensor registration error estimation value;
a judging module: if the EM iteration is not finished, returning to the forward Kalman filtering module; and if the EM iteration is finished, outputting respective sensor registration error estimated values by each sensor node.
Preferably, the forward Kalman filtering in the forward Kalman filtering module includes:
forward initial step filtering: performing forward consistency Kalman filtering by using the initial sensor registration error parameters and the initial target parameters in the initialization module as initial values;
forward non-initial step filtering: performing forward consistency Kalman filtering by using the sensor registration error estimate obtained by the sensor registration error estimation value solving module and the initial target parameters in the initialization module as initial values.
Preferably, the inverse Kalman filtering in the inverse Kalman filtering module includes:
inverse initial step filtering: performing inverse consistency Kalman filtering by using the target parameters obtained after the forward Kalman filtering in the forward Kalman filtering module and the initial sensor registration error parameters in the initialization module as initial values;
inverse non-initial step filtering: performing inverse consistency Kalman filtering by using the target parameters obtained after the forward Kalman filtering in the forward Kalman filtering module and the sensor registration error parameter estimate obtained by the sensor registration error estimation value solving module as initial values.
Preferably, the EM iteration adopts an M-step EM iteration, the forward Kalman filtering adopts N-step forward Kalman filtering, and the inverse Kalman filtering adopts N-step inverse Kalman filtering;
wherein M and N are positive integers.
Compared with the prior art, the invention has the following beneficial effects:
the invention can register the sensor network nodes in the redundant information of the distributed sensor network multi-sensor. The registration process does not need a central node and a full connection structure between nodes, each node is subjected to iterative calculation through local communication between adjacent nodes, the method is simple, effective and easy to implement, is particularly suitable for the application of a distributed sensor network without the central node, and can be widely applied to various fields such as robots, intelligent traffic, air traffic control, aerospace, aviation, navigation and the like.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flow chart of the operation of the present invention;
FIG. 2 is a schematic diagram of a topology of a sensor network and a motion trajectory of a target according to an embodiment of the present invention;
FIG. 3 is a plot of angular registration error estimate as a function of EM iterations for an embodiment of the present invention;
fig. 4 is a plot of range registration error estimate versus number of EM iterations for an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, and all of them fall within the scope of the present invention.
As shown in fig. 1, a distributed sensor network collaborative registration method provided by the present invention includes:
step S1: forming initial sensor registration error parameters according to prior information and forming initial target parameters according to a given initial target state and an error covariance matrix at each sensor node;
step S2: and starting M steps of EM iterative calculation. Each sensor node performs N-step forward consistency Kalman filtering on target parameters according to the measurement of N time points acquired by the sensor node and the information of adjacent nodes, wherein the N-step forward consistency Kalman filtering comprises forward initial step filtering and forward non-initial step filtering, and M, N is a positive integer; wherein:
forward initial step filtering: performing forward consistency Kalman filtering by using the initial sensor registration error parameter and the initial target parameter in the step S1 as initial values;
forward non-initial step filtering: and performing forward consistency Kalman filtering by using the sensor registration error estimated value obtained in the step S5 and the initial target parameter in the step S1 as initial values.
Step S3: each sensor node performs N-step inverse consistency Kalman filtering on the target parameters subjected to the N-step forward Kalman filtering, according to the measurements it has acquired at the N time points and the information of its adjacent nodes, where the N-step inverse consistency Kalman filtering comprises inverse initial step filtering and inverse non-initial step filtering; wherein:
inverse initial step filtering: performing inverse consistency Kalman filtering by using the target parameters obtained after the forward Kalman filtering in step S2 and the initial sensor registration error parameters in step S1 as initial values;
inverse non-initial step filtering: performing inverse consistency Kalman filtering by using the target parameters obtained after the forward Kalman filtering in step S2 and the sensor registration error parameter estimate obtained in step S5 as initial values.
Step S4: each sensor node smoothes the target state estimate using the results of S2 and S3.
Step S5: each sensor node solves for a respective sensor registration error estimate using the smoothed target state estimate obtained at S4.
Step S6: if the EM iteration is not finished, return to S2; if the EM iteration is finished, each sensor node outputs its own sensor registration error estimate (i.e. if m < M, return to S2; if m = M, each sensor node outputs its own sensor registration error estimate).
In step S1, each sensor node i is given an initial target state x̂_0^i, a corresponding error covariance matrix P_0^i, and its own a priori registration error η^{i(0)}, where i ∈ S denotes a sensor node and S is the set of all sensor nodes.
In step S2, N recursive forward Kalman filtering calculations are performed (k = 1, 2, ..., N), each recursion comprising:
Step S2.1: each sensor node i uses the target state x̂^i(k-1|k-1) and error covariance matrix P^i(k-1|k-1) at time k-1 to perform one-step prediction of the target state, obtaining the predicted target state x̂^i(k|k-1) and predicted error covariance matrix P^i(k|k-1) at time k. From the predicted values, the predicted information matrix Ω^i(k|k-1) and information vector q^i(k|k-1) are calculated as:
Ω^i(k|k-1) = [P^i(k|k-1)]^(-1),  q^i(k|k-1) = Ω^i(k|k-1) x̂^i(k|k-1)
Step S2.2: each sensor node i uses its measurement z_k^i at time k to calculate the new information vector δq_k^i and new information matrix δΩ_k^i:
δq_k^i = (H_k^i)^T (R_k^i)^(-1) [ z_k^i - h_k^i(x̂^i(k|k-1), η^{i(m-1)}) + H_k^i x̂^i(k|k-1) ]
δΩ_k^i = (H_k^i)^T (R_k^i)^(-1) H_k^i
In the formulas: the superscript i denotes sensor node i, h_k^i(·) is the sensor measurement equation at time k, H_k^i is its partial derivative with respect to the target state evaluated at x̂^i(k|k-1) and η^{i(m-1)}, R_k^i is the measurement noise variance matrix of sensor i at time k, z_k^i is the measurement of sensor node i at time k, m is the current EM iteration number, and η^{i(m-1)} is the registration error estimate of sensor node i from the previous EM iteration.
Step S2.3: each sensor node i performs an L-step consensus iteration on the information matrix Ω^i(k|k-1), information vector q^i(k|k-1), new information matrix δΩ_k^i and new information vector δq_k^i obtained in steps S2.1 and S2.2. With these quantities as the initial values (l = 0), each consensus step l = 1, 2, ..., L at node i is computed as:
Ω^{i,l} = Σ_{j∈N_i} π_{i,j} Ω^{j,l-1},  q^{i,l} = Σ_{j∈N_i} π_{i,j} q^{j,l-1}
δΩ^{i,l} = Σ_{j∈N_i} π_{i,j} δΩ^{j,l-1},  δq^{i,l} = Σ_{j∈N_i} π_{i,j} δq^{j,l-1}
In the formulas: N_i denotes the set of all nodes that can communicate directly with node i, including node i itself, j indexes the nodes in N_i, l is the current consensus step (l = 1, 2, ..., L), and π_{i,j} are the consensus weights, satisfying π_{i,j} ≥ 0 and Σ_{j∈N_i} π_{i,j} = 1.
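The patent constrains the consensus weights only by the conditions above; one common choice satisfying them, added here purely as an illustration and not taken from the original text, is the Metropolis weighting:

```python
import numpy as np

def metropolis_weights(adjacency: np.ndarray) -> np.ndarray:
    """Consensus weights pi[i, j] with pi[i, j] >= 0 and sum_j pi[i, j] = 1.

    `adjacency` is assumed to be a symmetric 0/1 matrix with zero diagonal
    (an assumption of this sketch, not a requirement stated in the patent).
    """
    n = adjacency.shape[0]
    deg = adjacency.sum(axis=1)              # node degrees, neighbours excluding self
    pi = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adjacency[i, j]:
                pi[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        pi[i, i] = 1.0 - pi[i].sum()         # remaining weight stays on node i
    return pi
```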
Step S2.4: each sensor node i performs the measurement update:
Ω^i(k|k) = Ω^{i,L} + |S| δΩ^{i,L},  q^i(k|k) = q^{i,L} + |S| δq^{i,L}
In the formulas: the superscript i denotes sensor node i and |S| denotes the number of sensor nodes in the sensor network.
Step S2.5: the target state estimate at the current time k is extracted:
x̂^i(k|k) = [Ω^i(k|k)]^(-1) q^i(k|k),  P^i(k|k) = [Ω^i(k|k)]^(-1)
In the formulas: x̂^i(k|k) is the forward filtered estimate of the target state at time k and P^i(k|k) is the estimation error variance corresponding to the forward filtered estimate at time k.
Step S2.6: when k < N, set k = k + 1 and return to step S2.1; when k = N, output all forward filtered estimates x̂^i(k|k) and covariances P^i(k|k), k = 1, ..., N.
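As a more detailed illustration, the sketch below implements one recursion of steps S2.1 to S2.5 at a single node in Python/NumPy. The linear transition matrix F, the process noise Q, the measurement function h and its Jacobian, and the neighbour-exchange primitive are all assumptions introduced for the sketch and are not specified in the patent; the consensus weights and the node count follow the definitions above.

```python
import numpy as np

def forward_consensus_kf_step(x_prev, P_prev, z, eta_prev, F, Q, R, h, H_jac,
                              neighbours_exchange, pi_row, n_nodes, L):
    """One recursion of steps S2.1-S2.5 for a single node (illustrative sketch).

    h(x, eta)      -> predicted measurement (assumed measurement equation)
    H_jac(x, eta)  -> Jacobian of h w.r.t. the target state x
    neighbours_exchange(quad) -> list of the neighbours' (Omega, q, dOmega, dq),
                                 including this node's own quadruple (assumed
                                 communication primitive, not part of the patent)
    pi_row[j]      -> consensus weight pi_{i,j} for the j-th returned quadruple
    """
    # S2.1: one-step prediction and information-form quantities
    x_pred = F @ x_prev
    P_pred = F @ P_prev @ F.T + Q
    Omega = np.linalg.inv(P_pred)
    q = Omega @ x_pred

    # S2.2: new information from the local measurement, using the previous
    # EM iteration's registration error estimate eta_prev
    H = H_jac(x_pred, eta_prev)
    W = H.T @ np.linalg.inv(R)
    dq = W @ (z - h(x_pred, eta_prev) + H @ x_pred)
    dOmega = W @ H

    # S2.3: L consensus iterations over (Omega, q, dOmega, dq)
    quad = (Omega, q, dOmega, dq)
    for _ in range(L):
        received = neighbours_exchange(quad)
        quad = tuple(sum(w * item[idx] for w, item in zip(pi_row, received))
                     for idx in range(4))
    Omega, q, dOmega, dq = quad

    # S2.4: measurement update, scaled by the number of nodes in the network
    Omega_upd = Omega + n_nodes * dOmega
    q_upd = q + n_nodes * dq

    # S2.5: extract the filtered state estimate and covariance
    P_filt = np.linalg.inv(Omega_upd)
    x_filt = P_filt @ q_upd
    return x_filt, P_filt, x_pred, P_pred
```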
In step S3, N recursive inverse Kalman filtering calculations are performed (k = N, N-1, ..., 1), each recursion comprising:
Step S3.1: each sensor node i uses the target state and covariance estimate at time k to perform one-step backward prediction of the target state, obtaining the predicted values of the state and covariance at time k-1, x̂_b^i(k-1|k) and P_b^i(k-1|k). From the predicted values, the predicted information matrix Ω_b^i and information vector q_b^i are calculated as:
Ω_b^i = [P_b^i(k-1|k)]^(-1),  q_b^i = Ω_b^i x̂_b^i(k-1|k)
Step S3.2: each sensor node i uses its measurement z_k^i at time k to calculate the new information vector δq_{b,k}^i and new information matrix δΩ_{b,k}^i:
δq_{b,k}^i = (H_k^i)^T (R_k^i)^(-1) [ z_k^i - h_k^i(x̂_b^i(k-1|k), η^{i(m-1)}) + H_k^i x̂_b^i(k-1|k) ]
δΩ_{b,k}^i = (H_k^i)^T (R_k^i)^(-1) H_k^i
In the formulas: the superscript i denotes sensor node i, h_k^i(·) is the sensor measurement equation at time k, H_k^i is its partial derivative with respect to the target state evaluated at the backward predicted state and η^{i(m-1)}, R_k^i is the measurement noise variance matrix of sensor i at time k, z_k^i is the measurement of sensor node i at time k, the subscript b indicates a quantity of the inverse (backward) Kalman filtering, m is the current EM iteration number, and η^{i(m-1)} is the registration error estimate of sensor node i from the previous EM iteration.
Step S3.3: each sensor node i performs an L-step consensus iteration on the information matrix Ω_b^i, information vector q_b^i, new information matrix δΩ_{b,k}^i and new information vector δq_{b,k}^i obtained in steps S3.1 and S3.2; each consensus step l = 1, 2, ..., L is computed in the same way as in step S2.3, i.e. as the π_{i,j}-weighted sum of the corresponding quantities of the nodes in N_i, where N_i is the set of all nodes that can communicate directly with node i, including node i itself, and the consensus weights satisfy π_{i,j} ≥ 0 and Σ_{j∈N_i} π_{i,j} = 1.
Step S3.4: each sensor node i performs the measurement update:
Ω_b^i(k) = Ω_b^{i,L} + |S| δΩ_b^{i,L},  q_b^i(k) = q_b^{i,L} + |S| δq_b^{i,L}
In the formulas: the superscript i denotes sensor node i and |S| denotes the number of sensor nodes in the sensor network.
Step S3.5: the backward filtered estimate of the target state at the current time k is extracted:
x̂_b^i(k) = [Ω_b^i(k)]^(-1) q_b^i(k),  P_b^i(k) = [Ω_b^i(k)]^(-1)
Step S3.6: when k > 1, set k = k - 1 and return to step S3.1; when k = 1, output all backward filtered estimates x̂_b^i(k) and covariances P_b^i(k), k = 1, ..., N.
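The patent gives the one-step backward prediction of step S3.1 only as a formula image; for a linear motion model with an invertible transition matrix F, one standard way to realize it is sketched below (an assumed form, not necessarily the patent's exact expression):

```python
import numpy as np

def backward_predict(x_k, P_k, F, Q):
    """One-step backward prediction (step S3.1) under the assumed linear model
    x_k = F x_{k-1} + w, w ~ N(0, Q)."""
    F_inv = np.linalg.inv(F)
    x_back = F_inv @ x_k                     # predicted state at time k-1
    P_back = F_inv @ (P_k + Q) @ F_inv.T     # predicted covariance at time k-1
    return x_back, P_back
```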
Step S4 includes: each sensor node i smooths the target state estimate using the results of step S2 and step S3, obtaining for each time k the smoothed state estimate x̂_s^i(k) and the corresponding covariance matrix P_s^i(k) by fusing the forward and backward filtering results.
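The patent's smoothing formulas are given only as formula images; a standard two-filter (forward-backward) fusion that matches the description would combine the forward estimate and the backward estimate in information form, as sketched below. This is an assumed form, not necessarily the patent's exact expression.

```python
import numpy as np

def two_filter_smooth(x_f, P_f, x_b, P_b):
    """Fuse a forward estimate (x_f, P_f) and a backward estimate (x_b, P_b)
    at the same time instant (illustrative two-filter smoother)."""
    Omega_f = np.linalg.inv(P_f)
    Omega_b = np.linalg.inv(P_b)
    P_s = np.linalg.inv(Omega_f + Omega_b)           # smoothed covariance
    x_s = P_s @ (Omega_f @ x_f + Omega_b @ x_b)      # smoothed state
    return x_s, P_s
```

To avoid using the measurement at time k twice, the backward quantity fused here would typically be the backward predicted estimate at time k rather than the backward filtered one.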
Step S5 includes: each sensor node i calculates its own registration error estimate η^{i(m)} from the smoothed target state estimates obtained in step S4. In the corresponding formulas: the superscript i denotes sensor node i, h_k^i(·) is the sensor measurement equation at time k, the partial derivative of h_k^i with respect to the registration error is evaluated at η^i = η^{i(m-1)}, and η^{i(m-1)} is the registration error estimate of sensor node i from the previous EM iteration.
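The explicit M-step formula is again an image in the source; in EM-based registration the update typically maximizes the expected log-likelihood, which for Gaussian noise reduces to a weighted least-squares fit of the measurement residuals over the batch. The sketch below shows such an update, linearized about the previous registration error estimate; it is an illustrative form, not the patent's verbatim formula.

```python
import numpy as np

def registration_error_m_step(z_list, x_smooth_list, eta_prev, h, G_jac, R):
    """Weighted least-squares update of the registration error eta (M-step sketch).

    z_list        : measurements z_k, k = 1..N
    x_smooth_list : smoothed target states from the E-step
    h(x, eta)     : assumed measurement equation
    G_jac(x, eta) : Jacobian of h w.r.t. the registration error eta
    """
    R_inv = np.linalg.inv(R)
    A = np.zeros((eta_prev.size, eta_prev.size))
    b = np.zeros(eta_prev.size)
    for z, x_s in zip(z_list, x_smooth_list):
        G = G_jac(x_s, eta_prev)
        r = z - h(x_s, eta_prev) + G @ eta_prev    # residual linearized about eta_prev
        A += G.T @ R_inv @ G
        b += G.T @ R_inv @ r
    return np.linalg.solve(A, b)                   # new estimate eta^(m)
```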
Step S6 includes: when the EM iteration is not finished, return to step S2; when the EM iteration is finished, each sensor node i outputs η^{i(M)} as its estimate of the registration error.
The technical solution of the present embodiment is further described in detail with reference to the accompanying drawings.
Step one: initialize each sensor node to form the initial parameters. At each sensor node i ∈ S, the sensor's a priori registration error η^{i(0)}, an initial target state estimate x̂_0^i and its estimation error variance P_0^i are given, where S is the set of all sensor nodes.
Step two: at each step m (m = 1, 2, ..., M) of the EM iteration, each sensor node i computes the N-step forward consistency Kalman filtering using its local information and the information acquired through local iterative communication. Step three: at each step m (m = 1, 2, ..., M) of the EM iteration, each sensor node i computes the N-step inverse consistency Kalman filtering using its local information and the information acquired through local iterative communication.
Step four: at each step m (m = 1, 2, ..., M) of the EM iteration, each sensor node i smooths the target state estimate using the results of step two and step three.
Step five: at each step m (m = 1, 2, ..., M) of the EM iteration, each sensor node i solves for its own registration error estimate using the smoothed target state estimate obtained in step four.
Step six: after the M-step EM iteration is finished, each sensor node i outputs its own registration error estimate η^{i(M)}.
Consider a tracking problem in a two-dimensional plane, with a distributed sensor network consisting of 36 sensor nodes randomly deployed in a 5000 m planar region; each sensor can communicate only with its neighbouring nodes within its communication range. Each sensor measures the azimuth angle and the relative distance to the target; its measurement equation involves the measurement noise v_k^i and the sensor's position coordinates. The registration error of each sensor is constant and is randomly generated from the initial Gaussian distribution N(η_0, P_η), where η_0 = [0.5°, 5 m]^T and P_η = diag([(3°)², (3.3 m)²]). The state of the target at the initial time is x_0 = [1700 m, 18 m/s, 4200 m, -12 m/s]^T. The initial state of the filter is randomly sampled from the Gaussian distribution N(x_0, P_0), where P_0 = diag([10² m², 3.2² m²/s², 10² m², 3.2² m²/s²]). The target motion follows a constant velocity (CV) model. The topology of the sensor network and the motion trajectory of the target are shown in Fig. 2.
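For concreteness, the sketch below sets up a CV motion model and an azimuth/range measurement with an additive registration bias, matching the simulation description above. The sampling interval, the noise levels and the exact way the registration error enters the patent's measurement equation are assumptions of this sketch (the original gives the measurement equation only as a formula image).

```python
import numpy as np

T = 1.0                                   # assumed sampling interval
F = np.array([[1, T, 0, 0],               # CV model, state = [x, vx, y, vy]
              [0, 1, 0, 0],
              [0, 0, 1, T],
              [0, 0, 0, 1]], dtype=float)

def measure(x, sensor_pos, eta, noise_std=(np.deg2rad(1.0), 5.0), rng=None):
    """Azimuth/range measurement with an additive registration bias eta
    (illustrative; noise levels are assumptions, not the patent's values)."""
    dx, dy = x[0] - sensor_pos[0], x[2] - sensor_pos[1]
    z = np.array([np.arctan2(dy, dx), np.hypot(dx, dy)]) + eta
    if rng is not None:
        z += rng.normal(0.0, noise_std)
    return z

# example target and sensor, using the initial values of the embodiment
rng = np.random.default_rng(0)
x0 = np.array([1700.0, 18.0, 4200.0, -12.0])
eta_true = np.array([np.deg2rad(0.5), 5.0])      # [angle bias, range bias]
z0 = measure(x0, sensor_pos=(0.0, 0.0), eta=eta_true, rng=rng)
```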
After the initial values and the simulation parameters are given, the specific steps are as follows:
Step S1: set the EM iteration initial values. Each node i is given x̂_0^i, P_0^i and η^{i(0)}.
Step S2: EM iteration. For each m = 1, ..., M, perform the following operations:
  Step S2.1: N-step forward consistency Kalman filtering. For each k = 1, ..., N:
    Prediction: each sensor node i independently predicts the target state using the prediction step of the Kalman filter, computing the predicted target state x̂^i(k|k-1), the estimation error variance matrix P^i(k|k-1) and the corresponding information matrix Ω^i(k|k-1) and information vector q^i(k|k-1);
    New information: compute δq_k^i and δΩ_k^i from the local measurement, as in step S2.2 above;
    Consensus iteration: for each l = 1, ..., L, update Ω, q, δΩ and δq as the π_{i,j}-weighted sums over the neighbour set N_i, as in step S2.3 above;
    Update: compute Ω^i(k|k) and q^i(k|k), as in step S2.4 above.
  Step S2.2: N-step inverse consistency Kalman filtering. For each k = N, ..., 1:
    Backward prediction: each sensor node i independently performs backward prediction of the target state using the prediction step of the Kalman filter, computing the backward predicted target state x̂_b^i, the estimation error variance matrix P_b^i and the corresponding information matrix Ω_b^i and information vector q_b^i;
    New information: compute δq_{b,k}^i and δΩ_{b,k}^i from the local measurement, as in step S3.2 above;
    Consensus iteration: for each l = 1, ..., L, update Ω_b, q_b, δΩ_b and δq_b as the π_{i,j}-weighted sums over the neighbour set N_i, as in step S3.3 above;
    Update: compute Ω_b^i(k) and q_b^i(k), as in step S3.4 above.
  Step S2.3: smooth the target state estimates using the forward and backward filtering results, as in step S4 above.
  Step S2.4: solve the registration error estimate η^{i(m)} of each sensor i, as in step S5 above.
Step S3: after the M EM iterations are finished, each sensor node i outputs η^{i(M)} as its estimate of the registration error.
The present embodiment uses the Matlab language to test the proposed algorithm for different numbers L of consensus iterations and compares it with the centralized sensor EM registration algorithm. Fig. 3 and Fig. 4 show the variation of the sensor angular and range registration error estimates with the number of EM iterations, respectively.
As can be seen from Fig. 3 and Fig. 4, the proposed method can register the sensor nodes in a distributed sensor network without a fusion center, and as the number of consensus iterations increases, the registration convergence behaviour gradually approaches that of the centralized EM registration method.
The distributed sensor network collaborative registration method provided by this embodiment is a distributed sensor network registration algorithm; in particular, it is a distributed registration method based on the consensus (consistency) method and the Expectation Maximization (EM) method. The method embeds the consensus iterations of the consensus algorithm into the calculation process of the EM iteration, so that the conditional expectation of the log-likelihood function can be computed in a fully distributed manner, and the estimate of the sensor registration error is solved by maximizing this conditional expectation. Simulation results show that this embodiment can effectively register each sensor node in the distributed sensor network. The present embodiment may be applied to sensor registration scenarios of various types of distributed sensor networks.
On the basis of the distributed sensor network collaborative registration method, the invention also provides a distributed sensor network collaborative registration system, which comprises the following modules:
an initialization module: forming initial sensor registration error parameters according to prior information and forming initial target parameters according to a given initial target state and an error covariance matrix at each sensor node;
a forward Kalman filtering module: starting EM iterative computation, and carrying out forward Kalman filtering on the target parameters;
an inverse Kalman filtering module: carrying out reverse Kalman filtering on the target parameters subjected to the N-step forward Kalman filtering;
a target state estimation smoothing module: each sensor node smoothes the target state estimation by using the results of the forward Kalman filtering module and the reverse Kalman filtering module;
a sensor registration error estimation value solving module: each sensor node uses the smoothed target state estimation obtained by the target state estimation smoothing module to solve the respective sensor registration error estimation value;
a judging module: if the EM iteration is not finished, returning to the forward Kalman filtering module; and if the EM iteration is finished, outputting respective sensor registration error estimated values by each sensor node.
The forward Kalman filtering in the forward Kalman filtering module includes:
forward initial step filtering: performing forward consistency Kalman filtering by using the initial sensor registration error parameters and the initial target parameters in the initialization module as initial values;
forward non-initial step filtering: performing forward consistency Kalman filtering by using the sensor registration error estimate obtained by the sensor registration error estimation value solving module and the initial target parameters in the initialization module as initial values.
The inverse Kalman filtering in the inverse Kalman filtering module includes:
inverse initial step filtering: performing inverse consistency Kalman filtering by using the target parameters obtained after the forward Kalman filtering in the forward Kalman filtering module and the initial sensor registration error parameters in the initialization module as initial values;
inverse non-initial step filtering: performing inverse consistency Kalman filtering by using the target parameters obtained after the forward Kalman filtering in the forward Kalman filtering module and the sensor registration error parameter estimate obtained by the sensor registration error estimation value solving module as initial values.
Those skilled in the art will appreciate that, in addition to being implemented as pure computer-readable program code, the system and its various devices, modules and units provided by the present invention can be implemented entirely by logically programming the method steps in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and its various devices, modules and units can be regarded as a hardware component, and the devices, modules and units included therein for realizing the various functions can also be regarded as structures within the hardware component; means, modules and units for performing the various functions may likewise be regarded both as software modules for performing the method and as structures within the hardware component.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (4)

1. A distributed sensor network collaborative registration method, characterized by comprising the following steps:
an initialization step: forming initial sensor registration error parameters according to prior information and forming initial target parameters according to a given initial target state and error covariance matrix at each sensor node;
a forward Kalman filtering step: starting the EM iterative computation and carrying out forward Kalman filtering on the target parameters;
an inverse Kalman filtering step: performing inverse Kalman filtering on the target parameters subjected to the forward Kalman filtering;
a target state estimation smoothing step: smoothing the target state estimate at each sensor node by using the results of the forward Kalman filtering step and the inverse Kalman filtering step;
a sensor registration error estimate solving step: each sensor node solves for its own sensor registration error estimate using the smoothed target state estimate obtained in the target state estimation smoothing step;
a judging step: if the EM iteration is not finished, returning to the forward Kalman filtering step; if the EM iteration is finished, each sensor node outputs its own sensor registration error estimate;
the forward Kalman filtering in the forward Kalman filtering step includes:
forward initial step filtering: performing forward consistency Kalman filtering by using the initial sensor registration error parameters and the initial target parameters in the initialization step as initial values;
forward non-initial step filtering: performing forward consistency Kalman filtering by using the sensor registration error estimate obtained in the sensor registration error estimate solving step and the initial target parameters in the initialization step as initial values;
the inverse Kalman filtering in the inverse Kalman filtering step includes:
inverse initial step filtering: performing inverse consistency Kalman filtering by using the target parameters obtained after the forward Kalman filtering in the forward Kalman filtering step and the initial sensor registration error parameters in the initialization step as initial values;
inverse non-initial step filtering: performing inverse consistency Kalman filtering by using the target parameters obtained after the forward Kalman filtering in the forward Kalman filtering step and the sensor registration error parameter estimate obtained in the sensor registration error estimate solving step as initial values;
in the initialization step, each sensor node i is given an initial target state x̂_0^i, a corresponding error covariance matrix P_0^i, and its own a priori registration error η^{i(0)}, where i ∈ S denotes a sensor node and S is the set of all sensor nodes;
in the forward Kalman filtering step, N recursive forward Kalman filtering calculations are performed (k = 1, 2, ..., N), each recursion comprising:
step S2.1: each sensor node i uses the target state x̂^i(k-1|k-1) and error covariance matrix P^i(k-1|k-1) at time k-1 to perform one-step prediction of the target state, obtaining the predicted target state x̂^i(k|k-1) and predicted error covariance matrix P^i(k|k-1) at time k, and calculates from the predicted values the predicted information matrix and information vector
Ω^i(k|k-1) = [P^i(k|k-1)]^(-1),  q^i(k|k-1) = Ω^i(k|k-1) x̂^i(k|k-1);
step S2.2: each sensor node i uses its measurement z_k^i at time k to calculate the new information vector δq_k^i and new information matrix δΩ_k^i:
δq_k^i = (H_k^i)^T (R_k^i)^(-1) [ z_k^i - h_k^i(x̂^i(k|k-1), η^{i(m-1)}) + H_k^i x̂^i(k|k-1) ],  δΩ_k^i = (H_k^i)^T (R_k^i)^(-1) H_k^i,
where the superscript i denotes sensor node i, h_k^i(·) is the sensor measurement equation at time k, H_k^i is its partial derivative with respect to the target state evaluated at x̂^i(k|k-1) and η^{i(m-1)}, R_k^i is the measurement noise variance matrix of sensor i at time k, z_k^i is the measurement of sensor node i at time k, m is the current EM iteration number, and η^{i(m-1)} is the registration error estimate of sensor node i from the previous EM iteration;
step S2.3: each sensor node i performs an L-step consensus iteration on the information matrix Ω^i(k|k-1), information vector q^i(k|k-1), new information matrix δΩ_k^i and new information vector δq_k^i obtained in steps S2.1 and S2.2, each consensus step l = 1, 2, ..., L being computed as
Ω^{i,l} = Σ_{j∈N_i} π_{i,j} Ω^{j,l-1},  q^{i,l} = Σ_{j∈N_i} π_{i,j} q^{j,l-1},  δΩ^{i,l} = Σ_{j∈N_i} π_{i,j} δΩ^{j,l-1},  δq^{i,l} = Σ_{j∈N_i} π_{i,j} δq^{j,l-1},
where N_i is the set of all nodes that can communicate directly with node i, including node i itself, j indexes the nodes in N_i, l is the current consensus step, and the consensus weights satisfy π_{i,j} ≥ 0 and Σ_{j∈N_i} π_{i,j} = 1;
step S2.4: each sensor node i performs the measurement update
Ω^i(k|k) = Ω^{i,L} + |S| δΩ^{i,L},  q^i(k|k) = q^{i,L} + |S| δq^{i,L},
where |S| is the number of sensor nodes in the sensor network;
step S2.5: the target state estimate at the current time k is extracted as
x̂^i(k|k) = [Ω^i(k|k)]^(-1) q^i(k|k),  P^i(k|k) = [Ω^i(k|k)]^(-1),
where x̂^i(k|k) is the forward filtered estimate of the target state at time k and P^i(k|k) is the corresponding estimation error variance;
step S2.6: when k < N, set k = k + 1 and return to step S2.1; when k = N, output all forward filtered estimates x̂^i(k|k) and covariances P^i(k|k), k = 1, ..., N;
in the inverse Kalman filtering step, N recursive inverse Kalman filtering calculations are performed (k = N, N-1, ..., 1), each recursion comprising:
step S3.1: each sensor node i uses the target state and covariance estimate at time k to perform one-step backward prediction of the target state, obtaining the predicted values of the state and covariance at time k-1, x̂_b^i(k-1|k) and P_b^i(k-1|k), and calculates from the predicted values the predicted information matrix and information vector
Ω_b^i = [P_b^i(k-1|k)]^(-1),  q_b^i = Ω_b^i x̂_b^i(k-1|k);
step S3.2: each sensor node i uses its measurement z_k^i at time k to calculate the new information vector δq_{b,k}^i and new information matrix δΩ_{b,k}^i:
δq_{b,k}^i = (H_k^i)^T (R_k^i)^(-1) [ z_k^i - h_k^i(x̂_b^i(k-1|k), η^{i(m-1)}) + H_k^i x̂_b^i(k-1|k) ],  δΩ_{b,k}^i = (H_k^i)^T (R_k^i)^(-1) H_k^i,
where the superscript i denotes sensor node i, h_k^i(·) is the sensor measurement equation at time k, H_k^i is its partial derivative with respect to the target state evaluated at the backward predicted state and η^{i(m-1)}, R_k^i is the measurement noise variance matrix of sensor i at time k, z_k^i is the measurement of sensor node i at time k, the subscript b indicates a quantity of the inverse Kalman filtering, m is the current EM iteration number, and η^{i(m-1)} is the registration error estimate of sensor node i from the previous EM iteration;
step S3.3: each sensor node i performs an L-step consensus iteration on the information matrix Ω_b^i, information vector q_b^i, new information matrix δΩ_{b,k}^i and new information vector δq_{b,k}^i obtained in steps S3.1 and S3.2, each consensus step l = 1, 2, ..., L being computed, as in step S2.3, as the π_{i,j}-weighted sum of the corresponding quantities of the nodes in N_i, where N_i is the set of all nodes that can communicate directly with node i, including node i itself, and the consensus weights satisfy π_{i,j} ≥ 0 and Σ_{j∈N_i} π_{i,j} = 1;
step S3.4: each sensor node i performs the measurement update
Ω_b^i(k) = Ω_b^{i,L} + |S| δΩ_b^{i,L},  q_b^i(k) = q_b^{i,L} + |S| δq_b^{i,L},
where |S| is the number of sensor nodes in the sensor network;
step S3.5: the backward filtered estimate of the target state at the current time k is extracted as
x̂_b^i(k) = [Ω_b^i(k)]^(-1) q_b^i(k),  P_b^i(k) = [Ω_b^i(k)]^(-1);
step S3.6: when k > 1, set k = k - 1 and return to step S3.1; when k = 1, output all backward filtered estimates x̂_b^i(k) and covariances P_b^i(k), k = 1, ..., N;
the target state estimation smoothing step includes: each sensor node i smooths the target state estimate using the results of the forward Kalman filtering step and the inverse Kalman filtering step, obtaining for each time k the smoothed state estimate x̂_s^i(k) and the corresponding covariance matrix P_s^i(k) by fusing the forward and backward filtering results;
the sensor registration error estimate solving step includes: each sensor node i calculates its own registration error estimate η^{i(m)} from the smoothed target state estimates, where the superscript i denotes sensor node i, h_k^i(·) is the sensor measurement equation at time k, the partial derivative of h_k^i with respect to the registration error is evaluated at η^i = η^{i(m-1)}, and η^{i(m-1)} is the registration error estimate of sensor node i from the previous EM iteration;
the judging step includes: when the EM iteration is not finished, returning to perform the forward Kalman filtering step; when the EM iteration is finished, each sensor node i outputs η^{i(M)} as its estimate of the registration error.
2. The distributed sensor network collaborative registration method according to claim 1, characterized in that the EM iteration adopts an M-step EM iteration, the forward Kalman filtering adopts N-step forward Kalman filtering, and the inverse Kalman filtering adopts N-step inverse Kalman filtering;
wherein M and N are positive integers.
3. A distributed sensor network collaborative registration system, characterized by comprising the following modules:
an initialization module: forming initial sensor registration error parameters according to prior information and forming initial target parameters according to a given initial target state and error covariance matrix at each sensor node;
a forward Kalman filtering module: starting the EM iterative computation and carrying out forward Kalman filtering on the target parameters;
an inverse Kalman filtering module: performing inverse Kalman filtering on the target parameters subjected to the forward Kalman filtering;
a target state estimation smoothing module: each sensor node smooths the target state estimate using the results of the forward Kalman filtering module and the inverse Kalman filtering module;
a sensor registration error estimation value solving module: each sensor node solves for its own sensor registration error estimate using the smoothed target state estimate obtained by the target state estimation smoothing module;
a judging module: if the EM iteration is not finished, returning to the forward Kalman filtering module; if the EM iteration is finished, each sensor node outputs its own sensor registration error estimate;
the forward Kalman filtering in the forward Kalman filtering module includes:
forward initial step filtering: performing forward consistency Kalman filtering by using the initial sensor registration error parameters and the initial target parameters in the initialization module as initial values;
forward non-initial step filtering: performing forward consistency Kalman filtering by using the sensor registration error estimate obtained by the sensor registration error estimation value solving module and the initial target parameters in the initialization module as initial values;
the inverse Kalman filtering in the inverse Kalman filtering module includes:
inverse initial step filtering: performing inverse consistency Kalman filtering by using the target parameters obtained after the forward Kalman filtering in the forward Kalman filtering module and the initial sensor registration error parameters in the initialization module as initial values;
inverse non-initial step filtering: performing inverse consistency Kalman filtering by using the target parameters obtained after the forward Kalman filtering in the forward Kalman filtering module and the sensor registration error parameter estimate obtained by the sensor registration error estimation value solving module as initial values;
in the initialization module, each sensor node i is given an initial target state x̂_0^i, a corresponding error covariance matrix P_0^i, and its own a priori registration error η^{i(0)}, where i ∈ S denotes a sensor node and S is the set of all sensor nodes;
the forward Kalman filtering module performs N recursive forward Kalman filtering calculations (k = 1, 2, ..., N), each recursion comprising:
module S2.1: each sensor node i uses the target state x̂^i(k-1|k-1) and error covariance matrix P^i(k-1|k-1) at time k-1 to perform one-step prediction of the target state, obtaining the predicted target state x̂^i(k|k-1) and predicted error covariance matrix P^i(k|k-1) at time k, and calculates from the predicted values the predicted information matrix and information vector
Ω^i(k|k-1) = [P^i(k|k-1)]^(-1),  q^i(k|k-1) = Ω^i(k|k-1) x̂^i(k|k-1);
module S2.2: each sensor node i uses its measurement z_k^i at time k to calculate the new information vector δq_k^i and new information matrix δΩ_k^i:
δq_k^i = (H_k^i)^T (R_k^i)^(-1) [ z_k^i - h_k^i(x̂^i(k|k-1), η^{i(m-1)}) + H_k^i x̂^i(k|k-1) ],  δΩ_k^i = (H_k^i)^T (R_k^i)^(-1) H_k^i,
where the superscript i denotes sensor node i, h_k^i(·) is the sensor measurement equation at time k, H_k^i is its partial derivative with respect to the target state evaluated at x̂^i(k|k-1) and η^{i(m-1)}, R_k^i is the measurement noise variance matrix of sensor i at time k, z_k^i is the measurement of sensor node i at time k, m is the current EM iteration number, and η^{i(m-1)} is the registration error estimate of sensor node i from the previous EM iteration;
module S2.3: each sensor node i performs an L-step consensus iteration on the information matrix Ω^i(k|k-1), information vector q^i(k|k-1), new information matrix δΩ_k^i and new information vector δq_k^i obtained in modules S2.1 and S2.2, each consensus step l = 1, 2, ..., L being computed as
Ω^{i,l} = Σ_{j∈N_i} π_{i,j} Ω^{j,l-1},  q^{i,l} = Σ_{j∈N_i} π_{i,j} q^{j,l-1},  δΩ^{i,l} = Σ_{j∈N_i} π_{i,j} δΩ^{j,l-1},  δq^{i,l} = Σ_{j∈N_i} π_{i,j} δq^{j,l-1},
where N_i is the set of all nodes that can communicate directly with node i, including node i itself, j indexes the nodes in N_i, l is the current consensus step, and the consensus weights satisfy π_{i,j} ≥ 0 and Σ_{j∈N_i} π_{i,j} = 1;
module S2.4: each sensor node i performs the measurement update
Ω^i(k|k) = Ω^{i,L} + |S| δΩ^{i,L},  q^i(k|k) = q^{i,L} + |S| δq^{i,L},
where |S| is the number of sensor nodes in the sensor network;
module S2.5: the target state estimate at the current time k is extracted as
x̂^i(k|k) = [Ω^i(k|k)]^(-1) q^i(k|k),  P^i(k|k) = [Ω^i(k|k)]^(-1),
where x̂^i(k|k) is the forward filtered estimate of the target state at time k and P^i(k|k) is the corresponding estimation error variance;
module S2.6: when k < N, set k = k + 1 and return to module S2.1; when k = N, output all forward filtered estimates x̂^i(k|k) and covariances P^i(k|k), k = 1, ..., N;
the inverse kalman filter module performs N-step recursive inverse kalman filter calculations (k ═ N, N-1.., 1), each step of the recursive calculations including:
module S3.1, each sensor node i carries out a one-step backward prediction of the target state using the target state and covariance estimates at time k, obtaining the predicted state and predicted covariance at time k-1; from these predicted values a predicted information matrix and information vector are calculated; the calculation formulas are given as formula images (omitted here);
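The backward prediction and its conversion to information form are shown only as images. One common way to run the prediction backwards in time, assuming the same hypothetical linear dynamics (F, Q) as in the forward sketch, inverts the transition matrix and then forms the information pair from the predicted covariance:

```python
import numpy as np

def backward_predict(x_k, P_k, F, Q):
    """One-step backward prediction from time k to time k-1 (illustrative form)."""
    F_inv = np.linalg.inv(F)
    x_pred = F_inv @ x_k                   # predicted state at time k-1
    P_pred = F_inv @ (P_k + Q) @ F_inv.T   # predicted covariance at time k-1
    # Information form of the backward prediction.
    Omega_pred = np.linalg.inv(P_pred)
    q_pred = Omega_pred @ x_pred
    return x_pred, P_pred, q_pred, Omega_pred
```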
module S3.2, each sensor node i calculates a new information vector and a new information matrix from its measurement at time k; the calculation formulas are given as formula images (omitted here);
in the formulas: all superscripts i denote sensor node i; the formulas involve the sensor measurement equation at time k and its partial derivative, the subscript of the partial derivative indicating the value at which it is evaluated; the measurement noise covariance matrix of sensor i at time k; and the measurement of sensor node i at time k; all superscripts b indicate that the variable belongs to the inverse Kalman filtering; m denotes the number of the current EM iteration, and η^{i(m-1)} denotes the registration error estimate of sensor node i from the previous EM iteration;
module S3.3, each sensor node i carries out an L-step consensus iteration on the information matrix and information vector obtained in module S3.1 and on the new information matrix and new information vector obtained in module S3.2; the per-step consensus formulas for each sensor node i are given as formula images (omitted here);
in the formulas: all superscripts i denote sensor node i; one symbol denotes the set of all nodes that can communicate directly with node i, the set including node i itself; j ranges over all nodes that can communicate directly with node i, including node i; l denotes the index of the current consensus step, l = 1, 2, ..., L; π_{i,j} is the consensus weight, satisfying π_{i,j} ≥ 0 with the weights of each node summing to 1;
module S3.4, each sensor node i carries out a measurement update; the update formulas are given as formula images (omitted here);
in the formulas: all superscripts i denote sensor node i, and one symbol denotes the number of sensor nodes in the sensor network;
module S3.5, the target state estimate at the current time k is extracted; the calculation formulas are given as formula images (omitted here);
module S3.6, when k > 1, set k = k - 1 and return to module S3.1; when k = 1, output all of the inverse Kalman filtering results obtained for k = N, ..., 1.
The target state estimate smoothing module includes: each sensor node i smooths its target state estimates using the results of the forward Kalman filtering module and the inverse Kalman filtering module, obtaining the smoothed state estimates and the corresponding covariance matrices; the calculation formulas are given as formula images (omitted here).
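The smoothing formulas are again only images. A standard two-filter smoother fusion is one plausible reading of combining the forward and inverse filter outputs; the sketch below assumes the backward quantity at time k excludes the measurement at time k, so that no measurement is counted twice:

```python
import numpy as np

def two_filter_smooth(x_f, P_f, x_b, P_b):
    """Fuse forward-filtered and backward (inverse-filter) estimates at one time step.

    x_f, P_f : forward-filtered state estimate and covariance at time k
    x_b, P_b : backward estimate and covariance at time k, assumed not to
               include the measurement at time k
    """
    Om_f = np.linalg.inv(P_f)
    Om_b = np.linalg.inv(P_b)
    Om_s = Om_f + Om_b                      # smoothed information matrix
    P_s = np.linalg.inv(Om_s)               # smoothed covariance
    x_s = P_s @ (Om_f @ x_f + Om_b @ x_b)   # smoothed state estimate
    return x_s, P_s
```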
The sensor registration error estimation value solving module comprises: each sensor node i calculates its own registration error estimate η^{i(m)}; the calculation formulas are given as formula images (omitted here);
in the formulas: all superscripts i denote sensor node i; the formulas involve the sensor measurement equation at time k and its partial derivative, the subscript η^i = η^{i(m-1)} indicating that the partial derivative is evaluated at η^{i(m-1)}; η^{i(m-1)} denotes the registration error estimate of sensor node i from the previous EM iteration.
The judging module comprises: when the EM iteration is not finished, returning to execute the forward Kalman filtering module; when the EM iteration is finished, each sensor node i outputs η^{i(M)} as its registration error estimate.
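The registration-error update itself is shown only as formula images. One standard EM M-step consistent with the surrounding description, sketched below, takes a single Gauss-Newton step for η around the previous iterate using the smoothed state estimates; jac_h_eta is an assumed name for the Jacobian of the measurement equation with respect to the registration error:

```python
import numpy as np

def update_registration_error(z, x_smooth, eta_prev, h, jac_h_eta, R_inv):
    """Gauss-Newton style M-step for the registration error of a single node.

    z         : dict k -> measurement at time k
    x_smooth  : dict k -> smoothed target state estimate at time k
    eta_prev  : registration error estimate from the previous EM iteration
    h         : measurement function h(x, eta)
    jac_h_eta : Jacobian of h with respect to eta, evaluated at (x, eta_prev)
    R_inv     : inverse measurement noise covariance
    """
    A = 0.0
    b = 0.0
    for k, z_k in z.items():
        J = jac_h_eta(x_smooth[k], eta_prev)
        r = z_k - h(x_smooth[k], eta_prev)
        A = A + J.T @ R_inv @ J
        b = b + J.T @ R_inv @ r
    return eta_prev + np.linalg.solve(A, b)
```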
4. The distributed sensor network collaborative registration system of claim 3, wherein the EM iteration uses M EM iteration steps, the forward Kalman filtering uses N forward Kalman filtering steps, and the inverse Kalman filtering uses N inverse Kalman filtering steps, where M and N are positive integers.
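Seen end to end, claims 3 and 4 describe M EM iterations, each consisting of an N-step forward pass, an N-step inverse pass, smoothing, and a per-node registration-error update. A structural sketch of that loop, with the four stages passed in as stand-in callables rather than the patent's own procedures, is:

```python
def em_registration(net, M, forward_pass, backward_pass, smooth, m_step):
    """Top-level EM loop: alternate distributed filtering/smoothing (E-step)
    with a per-node registration-error update (M-step) for M iterations."""
    for _ in range(M):
        fwd = forward_pass(net)      # forward consensus Kalman filtering module
        bwd = backward_pass(net)     # inverse consensus Kalman filtering module
        xs = smooth(fwd, bwd)        # per-node smoothed state estimates
        for i, node in net.items():
            # Each node refines only its own registration error estimate.
            node['eta_prev'] = m_step(node, xs[i])
    return {i: node['eta_prev'] for i, node in net.items()}
```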
CN201810950034.5A 2018-08-20 2018-08-20 Distributed sensor network collaborative registration method and system Active CN109246637B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810950034.5A CN109246637B (en) 2018-08-20 2018-08-20 Distributed sensor network collaborative registration method and system

Publications (2)

Publication Number Publication Date
CN109246637A CN109246637A (en) 2019-01-18
CN109246637B true CN109246637B (en) 2020-11-06

Family

ID=65071146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810950034.5A Active CN109246637B (en) 2018-08-20 2018-08-20 Distributed sensor network collaborative registration method and system

Country Status (1)

Country Link
CN (1) CN109246637B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101482607A (en) * 2009-02-19 2009-07-15 武汉理工大学 Target tracking method and device used for wireless movable sensor network
CN105676181A (en) * 2016-01-15 2016-06-15 浙江大学 Underwater moving target extended Kalman filtering tracking method based on distributed sensor energy ratios
CN106646356A (en) * 2016-11-23 2017-05-10 西安电子科技大学 Nonlinear system state estimation method based on Kalman filtering positioning
CN106685427A (en) * 2016-12-15 2017-05-17 华南理工大学 Sparse signal reconstruction method based on information consistency

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A Consensus Nonlinear Filter With Measurement Uncertainty in Distributed Sensor Networks; Kai Shen; IEEE Signal Processing Letters; 2017-09-13; Sections 2-4 *
Consensus and EM based Sensor Registration in Distributed Sensor Networks; Kai Shen; 2018 21st International Conference on Information Fusion; 2018-07-13; Sections 1-4, Table I *
Distributed Variational Filtering for Simultaneous Sensor Localization and Target Tracking in Wireless Sensor Networks; Jing Teng; IEEE Transactions on Vehicular Technology; 2012-06-30; full text *
Simultaneous target tracking and sensor location refinement in distributed sensor networks; Kai Shen; Signal Processing (Elsevier); 2018-07-20; Sections 2-4 *

Also Published As

Publication number Publication date
CN109246637A (en) 2019-01-18

Similar Documents

Publication Publication Date Title
CN108896047B (en) Distributed sensor network collaborative fusion and sensor position correction method
Han et al. An improved IMM algorithm based on STSRCKF for maneuvering target tracking
Chen et al. Distributed cubature information filtering based on weighted average consensus
Huang et al. On the complexity and consistency of UKF-based SLAM
CN111178385A (en) Target tracking method for robust online multi-sensor fusion
Petitti et al. Consensus-based distributed estimation for target tracking in heterogeneous sensor networks
CN110289989A (en) A kind of distributed state estimation method based on volume Kalman filtering algorithm
CN104777469A (en) Radar node selection method based on measurement error covariance matrix norm
CN111798494A (en) Maneuvering target robust tracking method under generalized correlation entropy criterion
CN115930949A (en) Multi-sensor distributed cooperative detection method and system and electronic equipment
CN109509207B (en) Method for seamless tracking of point target and extended target
CN113709662B (en) Autonomous three-dimensional inversion positioning method based on ultra-wideband
CN114139109A (en) Target tracking method, system, equipment, medium and data processing terminal
CN109246637B (en) Distributed sensor network collaborative registration method and system
CN116819975A (en) Multi-target geometric center estimation method based on pure angle observation
Yousuf Robust output-feedback formation control design for nonholonomic mobile robot (nmrs)
CN107966697B (en) Moving target tracking method based on progressive unscented Kalman
Wang et al. Estimating the position and orientation of a mobile robot using neural network framework based on combined square‐root cubature Kalman filter and simultaneous localization and mapping
CN107886058B (en) Noise-related two-stage volume Kalman filtering estimation method and system
CN115718426A (en) Event-triggered STF fault detection method for satellite attitude control system
CN116908777A (en) Multi-robot random networking collaborative navigation method based on explicit communication with tag Bernoulli
CN112333236B (en) Fault-tolerant cooperative positioning method based on two-layer filtering in three-dimensional dynamic cluster network
Messaoudi et al. Comparison of interactive multiple model particle filter and interactive multiple model unscented particle filter for tracking multiple manoeuvring targets in sensors array
CN111426322B (en) Adaptive target tracking filtering method and system for simultaneously estimating state and input
CN109474892B (en) Strong robust sensor network target tracking method based on information form

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant