CN114705223A - Inertial navigation error compensation method and system for multiple mobile agents in target tracking - Google Patents

Info

Publication number
CN114705223A
Authority
CN
China
Prior art keywords
target
information
agent
camera
measurement
Prior art date
Legal status
Pending
Application number
CN202210349774.XA
Other languages
Chinese (zh)
Inventor
蔡云泽
郝亮
李泽仁
张贇
杜哲
李克勇
张璐华
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University
Priority to CN202210349774.XA
Publication of CN114705223A
Legal status: Pending

Classifications

    • G01C25/005: Initial alignment, calibration or starting-up of inertial devices
    • G01C21/1656: Dead reckoning by integrating acceleration or speed (inertial navigation), combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G01C21/188: Compensation of inertial measurements for accumulated errors, e.g. by coupling inertial systems with absolute positioning systems
    • G01S13/867: Combination of radar systems with cameras


Abstract

The invention provides an inertial navigation error compensation method and system for multiple mobile agents in target tracking, comprising the following steps: first, the agents equipped with cameras acquire images of the target, and angle measurements of the target are obtained by image data processing; meanwhile, the agents equipped with radars acquire point clouds of the target, and distance and angle measurements are obtained by point cloud data processing; pseudo-range estimates for the angle-only measurements are then computed by the least squares method, so that the cooperation of the multiple agents resolves the lack of distance information in the camera angle measurements; finally, the target is tracked by distributed Kalman filtering, and the target state estimates are fused by consensus. The invention applies to inertial-navigation multi-mobile-agent target tracking systems with a variable number of agents and two sensor types, and achieves compensation of the agents' inertial navigation positioning errors and stable tracking of the target relying only on the agent cluster's tracking of the target.

Description

Inertial navigation error compensation method and system for multiple mobile agents in target tracking
Technical Field
The invention relates to the fields of target tracking and inertial navigation, and in particular to an inertial navigation error compensation method and system for multiple mobile agents in target tracking.
Background
In the field of target tracking, it is common for multi-agent systems to use cameras or radars to track targets. Compared with single-agent target tracking, multi-agent target tracking offers higher tracking accuracy and better stability, and overcomes the limited observation range and single observation angle of a single agent. Cooperation among multiple agents can also mitigate the poor observability of a single angle-only sensor such as a camera.
In practical scenarios the agents are usually moving, and errors in their self-localization adversely affect the target tracking accuracy. When external positioning references are unavailable, an agent can only localize itself with its inertial navigation device, and the inherent accumulated error of inertial navigation makes the self-localization error grow over time. Because the sensors measure the target from the moving agent platforms, target tracking and agent localization are coupled in the measurement model, so the accumulated inertial navigation positioning error degrades the target tracking accuracy. Without adding extra sensors, fusing the target tracking information of the radar- or camera-equipped agents through communication within the agent cluster enables estimation and compensation of the inertial navigation errors, which is of practical significance for improving both the localization accuracy and the target tracking accuracy of the multiple agents.
Disclosure of Invention
Aiming at the accumulated self-localization error of an inertial-navigation multi-agent system, the invention provides an inertial navigation error compensation method for the target tracking setting, which comprises the following steps:
unifying the measurement data of the multiple agents into measurement information in the form of distance and angle;
each agent locally performing Kalman filtering;
fusing the measurement information with the prediction information of the Kalman filter, performing target tracking to obtain the target state estimate, and estimating the inertial navigation error compensation value;
each agent communicating its target state estimate with the agents within communication range, and obtaining target tracking information through consensus fusion.
Preferably, unifying the measurement data of the multiple agents into distance-and-angle measurement information comprises:
acquiring the raw image data collected by a camera-equipped agent and detecting the target in the image to obtain the angle of the target relative to the camera;
acquiring the point cloud collected by a radar-equipped agent and clustering the raw point cloud data to obtain the distance and angle of the target relative to the radar;
taking the target information measured by the cameras and by the radars at the same instant as input, computing pseudo-range estimates for the angle-only measurements, so that the measurement information of both camera- and radar-equipped agents is unified into distance-and-angle form.
Preferably, acquiring the raw image data collected by a camera-equipped agent and detecting the target in the image to obtain the angle of the target relative to the camera comprises: detecting the position of the target in the image with an image object detection method, and obtaining the angle of the target relative to the camera from the relative position of the detection with respect to the image center point.
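As an illustration only (the patent does not fix a camera model), the following sketch converts the detected pixel position to azimuth and elevation assuming a pinhole camera with known focal lengths and principal point; the per-axis arctangents are a common approximation for this conversion:

```python
import numpy as np

def pixel_to_angles(u, v, cx, cy, fx, fy):
    """Convert the detected target's pixel position (u, v) to camera-frame
    azimuth and elevation, assuming a pinhole model with focal lengths
    fx, fy (in pixels) and principal point (cx, cy)."""
    azimuth = np.arctan2(u - cx, fx)    # positive to the right of the image center
    elevation = np.arctan2(cy - v, fy)  # positive above the center (v grows downward)
    return azimuth, elevation

# Example: a 640x480 image, principal point at the center, fx = fy = 500 px.
az, el = pixel_to_angles(400.0, 200.0, 320.0, 240.0, 500.0, 500.0)
```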
Preferably, the clustering methods include the DBSCAN and K-means clustering algorithms, which yield the distance and angle of the measured target relative to the radar.
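A minimal sketch of the DBSCAN variant of this step, assuming the point cloud is expressed in the radar's Cartesian sensor frame and that the largest cluster corresponds to the target; eps and min_samples are illustrative values to be tuned per sensor:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def radar_measurement(points, eps=0.5, min_samples=5):
    """Cluster a radar point cloud (N x 3, sensor frame) with DBSCAN and
    return (range, azimuth, elevation) of the largest cluster's centroid."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    valid = labels[labels >= 0]
    if valid.size == 0:
        return None                       # no cluster found
    target = np.bincount(valid).argmax()  # take the largest cluster as the target
    c = points[labels == target].mean(axis=0)
    rng = np.linalg.norm(c)
    azimuth = np.arctan2(c[1], c[0])
    elevation = np.arcsin(c[2] / rng)
    return rng, azimuth, elevation
```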
Preferably, the target information measured by the cameras and by the radars at the same instant is taken as input to obtain pseudo-range estimates for the angle measurements, so that the measurement information of camera- and radar-equipped agents is unified into distance-and-angle form; this requires communicating the target measurement information with the other agents, the communicated information containing at least one of the two measurement types (camera and radar), and the position estimate of the target is computed by the least squares method.
Preferably, the pseudo-range estimation comprises:
For an agent i equipped with a camera, a linear system of equations is obtained by transformation from the target position and the camera angle measurements:

$$A_i^C X = b_i^C,\quad A_i^C = \begin{bmatrix} \sin\alpha_i & -\cos\alpha_i & 0 \\ \sin\beta_i\cos\alpha_i & \sin\beta_i\sin\alpha_i & -\cos\beta_i \end{bmatrix},\quad b_i^C = A_i^C \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix}$$

where $\alpha_i$ and $\beta_i$ are the azimuth and elevation of the target measured by the camera of agent i, $(x_i, y_i, z_i)$ is the position of agent i, and $X = (x, y, z)^{\mathrm{T}}$ is the target position;
the linear systems of all camera measurements are stacked as:

$$A_C X = b_C$$

For an agent j equipped with a radar, a linear system of equations is obtained from its position and the radar measurements:

$$A_j^L X = b_j^L,\quad A_j^L = I_3,\quad b_j^L = \begin{bmatrix} x_j + r_j\cos\beta_j\cos\alpha_j \\ y_j + r_j\cos\beta_j\sin\alpha_j \\ z_j + r_j\sin\beta_j \end{bmatrix}$$

where $r_j$, $\alpha_j$ and $\beta_j$ are the distance, azimuth and elevation of the target relative to agent j measured by its radar, and $(x_j, y_j, z_j)$ is the position of agent j;
the linear systems of all radar measurements are stacked as:

$$A_L X = b_L$$

Combining the camera and radar systems yields:

$$A X = b,\quad A = \begin{bmatrix} A_C \\ A_L \end{bmatrix},\quad b = \begin{bmatrix} b_C \\ b_L \end{bmatrix}$$

The target position estimate is solved by the least squares method:

$$\hat{X} = (A^{\mathrm{T}} A)^{-1} A^{\mathrm{T}} b$$

The pseudo-range measurement estimate from camera-equipped agent i to the target is computed as:

$$\hat{r}_i = \lVert \hat{X} - X_i \rVert$$

where $\lVert\cdot\rVert$ is the Euclidean norm and $X_i$ is the position of agent i.
Preferably, the Kalman filtering includes linear and nonlinear Kalman filtering methods.
Each agent fuses its measurement information with the prediction information of the Kalman filter, performs target tracking to obtain the target state estimate, and estimates the inertial navigation error compensation value, communicating the target state estimate with the agents within communication range. With the target state and the inertial navigation error compensation value as the state variables, the filtering comprises the following steps:
a. Sampling: from the state variable estimate $\hat{x}_{k-1|k-1}$ and covariance $P_{k-1|k-1}$ at time k-1, the sigma point set $\{\chi_{k-1}^{(l)}, W^{(l)}\}_{l=1}^{L}$ is obtained by the unscented transform (UT), where L is the number of sampling points and $W^{(l)}$ is the weight of the corresponding sampling point;
b. Prediction process: given the state equation f(·), the measurement equation h(·) and the process noise variance matrix $Q_t$:

$$\chi_{k|k-1}^{(l)} = f\big(\chi_{k-1}^{(l)}\big)$$

$$\hat{x}_{k|k-1} = \sum_{l=1}^{L} W^{(l)} \chi_{k|k-1}^{(l)}$$

$$P_{k|k-1} = \sum_{l=1}^{L} W^{(l)} \big(\chi_{k|k-1}^{(l)} - \hat{x}_{k|k-1}\big)\big(\chi_{k|k-1}^{(l)} - \hat{x}_{k|k-1}\big)^{\mathrm{T}} + Q_t$$

$$\gamma_{k|k-1}^{(l)} = h\big(\chi_{k|k-1}^{(l)}\big)$$

$$\hat{z}_{k|k-1} = \sum_{l=1}^{L} W^{(l)} \gamma_{k|k-1}^{(l)}$$

The prediction process yields the predicted state variable $\hat{x}_{k|k-1}$ at time k, the state variable covariance $P_{k|k-1}$ and the predicted measurement $\hat{z}_{k|k-1}$;
c. Measurement update: given the measurement noise variance matrix $R_t$:

$$P_{zz} = \sum_{l=1}^{L} W^{(l)} \big(\gamma_{k|k-1}^{(l)} - \hat{z}_{k|k-1}\big)\big(\gamma_{k|k-1}^{(l)} - \hat{z}_{k|k-1}\big)^{\mathrm{T}} + R_t$$

$$P_{xz} = \sum_{l=1}^{L} W^{(l)} \big(\chi_{k|k-1}^{(l)} - \hat{x}_{k|k-1}\big)\big(\gamma_{k|k-1}^{(l)} - \hat{z}_{k|k-1}\big)^{\mathrm{T}}$$

$$K_k = P_{xz} P_{zz}^{-1}$$

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \big(z_k - \hat{z}_{k|k-1}\big)$$

$$P_{k|k} = P_{k|k-1} - K_k P_{zz} K_k^{\mathrm{T}}$$

The measurement update yields the state variable estimate $\hat{x}_{k|k}$ at time k and the state variable covariance $P_{k|k}$.
Preferably, each agent communicates the target state estimate with the agents within its communication range, and target tracking information is obtained through consensus fusion, comprising:
The state variable x of each agent contains two parts, the target state $x^A$ and the inertial navigation error compensation value $x^B$; this step performs consensus fusion only on the target state $x^A$. Each agent converts its posterior estimate of the target state at time k, $\hat{x}_{k|k}^A$ with covariance $P_{k|k}^A$, into information vector and information matrix form:

$$q_i^0 = \big(P_{k|k}^A\big)^{-1}\hat{x}_{k|k}^A, \qquad \Omega_i^0 = \big(P_{k|k}^A\big)^{-1}$$

Agent i sends $(q_i^{\tau}, \Omega_i^{\tau})$ to the agents in its communication neighborhood and performs consensus-weighted fusion with the information vectors and information matrices received from them:

$$q_i^{\tau+1} = q_i^{\tau} + w \sum_{j \in N_i} \big(q_j^{\tau} - q_i^{\tau}\big)$$

$$\Omega_i^{\tau+1} = \Omega_i^{\tau} + w \sum_{j \in N_i} \big(\Omega_j^{\tau} - \Omega_i^{\tau}\big)$$

$$\tau \leftarrow \tau + 1$$

where w is the consensus weight, $N_i$ is the set of indices of the agents in the neighborhood of agent i, and τ is the consensus iteration count.
The local state estimate (the target tracking information) and covariance are recovered as:

$$\hat{x}_{k|k}^A = \big(\Omega_i^{\tau}\big)^{-1} q_i^{\tau}, \qquad P_{k|k}^A = \big(\Omega_i^{\tau}\big)^{-1}$$
According to a second aspect of the invention, an inertial navigation error compensation system for multiple mobile agents in target tracking is provided, comprising:
an image data processing module, which acquires the raw image data collected by the camera-equipped agents and detects the target in the image to obtain the angle of the target relative to the camera;
a point cloud data processing module, which acquires the point clouds collected by the radar-equipped agents and clusters the raw point cloud data to obtain the distance and angle of the target relative to the radar;
a pseudo-range estimation module for angle measurements, which takes the target information measured by the cameras and by the radars at the same instant as input and computes pseudo-range estimates, unifying the measurement information of camera- and radar-equipped agents into distance-and-angle form;
a target tracking module based on distributed Kalman filtering, which takes the target distance and angle as input information; each agent locally runs a Kalman filter, fuses the filter prediction with the input information, tracks the target to obtain the target state estimate, and estimates the inertial navigation error compensation value;
each agent communicating only the target state estimate with the agents within its communication range, and obtaining target tracking information through consensus fusion.
Compared with the prior art, the invention has the following beneficial effects:
The method and system apply to inertial-navigation target tracking with a variable number of mobile agents and two sensor types; they fuse the target tracking information of the radars and cameras, suppress the accumulation of inertial navigation errors to a certain extent, realize inertial navigation error compensation for the multi-agent system in target tracking applications, and thereby improve the target tracking accuracy. The compensation of the multi-agent inertial navigation positioning errors and the stable tracking of the target are achieved relying only on the agent cluster's tracking of the target.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic diagram of the inertial navigation error compensation method and system for multiple mobile agents applied to a target tracking scenario according to an embodiment of the present invention;
FIG. 2 is a flowchart of the inertial navigation error compensation method for multiple mobile agents in target tracking according to an embodiment of the present invention;
FIG. 3 is a comparison of the multi-agent positioning root mean square error in an embodiment of the present invention;
FIG. 4 is a comparison of the multi-agent target tracking root mean square error in an embodiment of the present invention.
In the figures, 1 is a radar, 2 an agent, 3 an inertial measurement unit, 4 a camera, and 5 the target.
Detailed Description
The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the invention, but do not limit it in any way. It should be noted that persons skilled in the art can make variations and modifications without departing from the spirit of the invention, all of which fall within the scope of the present invention.
Referring to FIG. 1, a schematic diagram of an embodiment of the inertial navigation error compensation method and system for multiple mobile agents applied to a target tracking scenario is shown, comprising a radar 1, an agent 2, an inertial measurement unit 3, a camera 4 and a target 5. For each camera-equipped agent, a camera 4 is fixed on the agent 2 and an inertial measurement unit 3 is built into the agent 2; there are 2 camera-equipped agents. For each radar-equipped agent, a radar 1 is fixed on the agent 2 and an inertial measurement unit 3 is built into the agent 2; the inertial measurement unit measures the velocity of the agent, from which the current position is computed from velocity and time; there are 2 radar-equipped agents. All agents jointly track the target 5 and move toward it; there is no reference anchor in the environment, and during the motion the agents localize themselves only by their inertial measurement units. In this embodiment, the inertial navigation error compensation method of the inertial-navigation multi-agent system suppresses the accumulation of the inertial navigation errors to a certain extent and compensates the self-localization errors of the multi-agent system, thereby improving the target tracking accuracy.
In a preferred embodiment, the flow of the inertial navigation error compensation method for multiple mobile agents applied to the target tracking scenario is shown in FIG. 2. The specific procedure is as follows:
S100: all agents track the target with their respective sensors. A camera-equipped agent acquires an image of the target and detects the position of the target in the image by image preprocessing and object detection, from which the angle of the target relative to the camera is obtained; a radar-equipped agent acquires a point cloud of the target and clusters it to obtain the target distance and angle measured by the radar.
S200: the camera-equipped agents use pseudo-range estimation to unify the measurements into distance-and-angle form. Specifically:
S201: the measurement information of the target is communicated with the other agents, the communicated information containing at least one of the camera and radar measurement types;
For an agent i equipped with a camera, the following linear system of equations is obtained by transformation from its position and the camera angle measurements:

$$A_i^C X = b_i^C,\quad A_i^C = \begin{bmatrix} \sin\alpha_i & -\cos\alpha_i & 0 \\ \sin\beta_i\cos\alpha_i & \sin\beta_i\sin\alpha_i & -\cos\beta_i \end{bmatrix},\quad b_i^C = A_i^C \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix}$$

where $\alpha_i$ and $\beta_i$ are the azimuth and elevation of the target measured by the camera of agent i, $(x_i, y_i, z_i)$ is the position of agent i, and $X = (x, y, z)^{\mathrm{T}}$ is the target position. The linear systems of all camera measurements are stacked as:

$$A_C X = b_C$$

For an agent j equipped with a radar, the following linear system of equations is obtained from its position and the radar measurements:

$$A_j^L X = b_j^L,\quad A_j^L = I_3,\quad b_j^L = \begin{bmatrix} x_j + r_j\cos\beta_j\cos\alpha_j \\ y_j + r_j\cos\beta_j\sin\alpha_j \\ z_j + r_j\sin\beta_j \end{bmatrix}$$

where $r_j$, $\alpha_j$ and $\beta_j$ are the distance, azimuth and elevation of the target relative to agent j measured by its radar, and $(x_j, y_j, z_j)$ is the position of agent j. The linear systems of all radar measurements are stacked as:

$$A_L X = b_L$$

Combining the camera and radar systems then yields:

$$A X = b,\quad A = \begin{bmatrix} A_C \\ A_L \end{bmatrix},\quad b = \begin{bmatrix} b_C \\ b_L \end{bmatrix}$$

S202: the position estimate of the target is computed by the least squares method:

$$\hat{X} = (A^{\mathrm{T}} A)^{-1} A^{\mathrm{T}} b$$

S203: the pseudo-range measurement estimate from camera-equipped agent i to the target is computed:

$$\hat{r}_i = \lVert \hat{X} - X_i \rVert$$

where $\lVert\cdot\rVert$ is the Euclidean norm and $X_i$ is the position of agent i.
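For illustration, a numpy sketch of S201 to S203 under the linearization written above; the function pools the measurements of all agents, and the argument layout and names are illustrative rather than taken from the patent:

```python
import numpy as np

def pseudo_range(cam_meas, radar_meas, agent_i_pos):
    """Least-squares target position from pooled camera (angle-only) and
    radar (range + angle) measurements, then the pseudo-range from the
    camera-equipped agent at agent_i_pos to the estimated target.

    cam_meas:   list of (pos, az, el)     -- camera-equipped agents
    radar_meas: list of (pos, r, az, el)  -- radar-equipped agents
    """
    rows, rhs = [], []
    for p, az, el in cam_meas:
        p = np.asarray(p, float)
        a1 = np.array([np.sin(az), -np.cos(az), 0.0])          # azimuth constraint
        a2 = np.array([np.sin(el) * np.cos(az),
                       np.sin(el) * np.sin(az), -np.cos(el)])  # elevation constraint
        rows += [a1, a2]
        rhs += [a1 @ p, a2 @ p]
    for p, r, az, el in radar_meas:
        p = np.asarray(p, float)
        xyz = p + r * np.array([np.cos(el) * np.cos(az),
                                np.cos(el) * np.sin(az), np.sin(el)])
        rows += list(np.eye(3))   # A_j = I_3
        rhs += list(xyz)          # b_j = radar measurement converted to Cartesian
    A, b = np.vstack(rows), np.array(rhs)
    X_hat, *_ = np.linalg.lstsq(A, b, rcond=None)  # X = (A^T A)^-1 A^T b
    return np.linalg.norm(X_hat - np.asarray(agent_i_pos, float)), X_hat
```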
This embodiment resolves the poor observability of target tracking caused by the lack of distance information in camera measurements; through the cooperation of the multiple agents, the measurements are unified into distance-and-angle form, which facilitates subsequent target tracking and information fusion.
S300: each agent locally tracks the target with an unscented Kalman filter, fusing the prediction model with the camera or radar measurement. The target state (the position and velocity of the target along the three xyz axes) and the inertial navigation error compensation value are taken together as the state variables, and both are estimated. Specifically:
The unscented Kalman filter is a nonlinear Kalman filtering method whose steps comprise a sampling process, a prediction process and an update process. At time k:
a. Sampling process: from the state variable estimate $\hat{x}_{k-1|k-1}$ and covariance $P_{k-1|k-1}$ at time k-1, the sigma point set $\{\chi_{k-1}^{(l)}, W^{(l)}\}_{l=1}^{L}$ is obtained by the unscented transform (UT), where L is the number of sampling points and $W^{(l)}$ is the weight of the corresponding sampling point;
b. Prediction process: given the state equation f(·), the measurement equation h(·) and the process noise variance matrix $Q_t$, the formulas are:

$$\chi_{k|k-1}^{(l)} = f\big(\chi_{k-1}^{(l)}\big)$$

$$\hat{x}_{k|k-1} = \sum_{l=1}^{L} W^{(l)} \chi_{k|k-1}^{(l)}$$

$$P_{k|k-1} = \sum_{l=1}^{L} W^{(l)} \big(\chi_{k|k-1}^{(l)} - \hat{x}_{k|k-1}\big)\big(\chi_{k|k-1}^{(l)} - \hat{x}_{k|k-1}\big)^{\mathrm{T}} + Q_t$$

$$\gamma_{k|k-1}^{(l)} = h\big(\chi_{k|k-1}^{(l)}\big)$$

$$\hat{z}_{k|k-1} = \sum_{l=1}^{L} W^{(l)} \gamma_{k|k-1}^{(l)}$$

The prediction process yields the predicted state variable $\hat{x}_{k|k-1}$ at time k, the state variable covariance $P_{k|k-1}$ and the predicted measurement $\hat{z}_{k|k-1}$;
c. Measurement update: given the measurement noise variance matrix $R_t$, the formulas are:

$$P_{zz} = \sum_{l=1}^{L} W^{(l)} \big(\gamma_{k|k-1}^{(l)} - \hat{z}_{k|k-1}\big)\big(\gamma_{k|k-1}^{(l)} - \hat{z}_{k|k-1}\big)^{\mathrm{T}} + R_t$$

$$P_{xz} = \sum_{l=1}^{L} W^{(l)} \big(\chi_{k|k-1}^{(l)} - \hat{x}_{k|k-1}\big)\big(\gamma_{k|k-1}^{(l)} - \hat{z}_{k|k-1}\big)^{\mathrm{T}}$$

$$K_k = P_{xz} P_{zz}^{-1}$$

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \big(z_k - \hat{z}_{k|k-1}\big)$$

$$P_{k|k} = P_{k|k-1} - K_k P_{zz} K_k^{\mathrm{T}}$$

The unscented Kalman filter thus yields the state variable estimate $\hat{x}_{k|k}$ at time k and the covariance $P_{k|k}$.
In this embodiment the target state and the inertial navigation error compensation value are jointly taken as the state variables of the Kalman filter; the two are correlated in the system, and introducing and estimating the inertial navigation error compensation value in the state variable yields a more accurate estimate of the target state.
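For illustration, a compact numpy sketch of steps a to c follows. It assumes the single-weight sigma-point notation used above; the symmetric sigma-point set with scaling parameter kappa is one common choice of unscented transform (an assumption of this sketch, not specified by the patent), and the models f, h and noise covariances Q, R are supplied by the caller:

```python
import numpy as np

def ut_points(x, P, kappa=0.0):
    """Symmetric sigma points and weights for the unscented transform."""
    n = x.size
    S = np.linalg.cholesky((n + kappa) * P)
    pts = np.vstack([x, x + S.T, x - S.T])  # 2n+1 points
    w = np.full(2 * n + 1, 0.5 / (n + kappa))
    w[0] = kappa / (n + kappa)
    return pts, w

def ukf_step(x, P, z, f, h, Q, R):
    """One predict/update cycle of the UKF over the augmented state
    [target state; inertial-navigation error compensation value]."""
    pts, w = ut_points(x, P)
    Xp = np.array([f(p) for p in pts])       # propagate sigma points
    x_pred = w @ Xp
    dX = Xp - x_pred
    P_pred = dX.T @ (w[:, None] * dX) + Q    # predicted covariance
    Zp = np.array([h(p) for p in Xp])        # predicted measurements
    z_pred = w @ Zp
    dZ = Zp - z_pred
    Pzz = dZ.T @ (w[:, None] * dZ) + R       # innovation covariance
    Pxz = dX.T @ (w[:, None] * dZ)           # cross covariance
    K = Pxz @ np.linalg.inv(Pzz)             # Kalman gain
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ Pzz @ K.T
    return x_new, P_new
```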
S400: each agent communicates the target state information with the agents within its communication range and fuses it consistently in a distributed manner, realizing cooperative target tracking by the cluster. Specifically:
S401: each agent converts its posterior estimate of the target state at time k, $\hat{x}_{k|k}^A$ with covariance $P_{k|k}^A$, into information vector and information matrix form; taking agent i as an example:

$$q_i^0 = \big(P_{k|k}^A\big)^{-1}\hat{x}_{k|k}^A, \qquad \Omega_i^0 = \big(P_{k|k}^A\big)^{-1}$$

S402: agent i sends $(q_i^{\tau}, \Omega_i^{\tau})$ to the agents in its communication neighborhood and performs consensus-weighted fusion with the information vectors and information matrices received from them:

$$q_i^{\tau+1} = q_i^{\tau} + w \sum_{j \in N_i} \big(q_j^{\tau} - q_i^{\tau}\big)$$

$$\Omega_i^{\tau+1} = \Omega_i^{\tau} + w \sum_{j \in N_i} \big(\Omega_j^{\tau} - \Omega_i^{\tau}\big)$$

$$\tau \leftarrow \tau + 1$$

where w is the consensus weight, $N_i$ is the set of indices of the agents in the neighborhood of agent i, and τ is the consensus iteration count.
S403: the local state estimate and covariance are recovered:

$$\hat{x}_{k|k}^A = \big(\Omega_i^{\tau}\big)^{-1} q_i^{\tau}, \qquad P_{k|k}^A = \big(\Omega_i^{\tau}\big)^{-1}$$
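A minimal numpy sketch of S401 to S403 for the whole cluster follows; it assumes synchronized iterations over a fixed neighbor graph, and the convergence condition in the docstring (w times the maximum node degree below 1) is the standard one for average consensus rather than something stated in the patent:

```python
import numpy as np

def consensus_fuse(x_list, P_list, neighbors, w=0.2, iters=10):
    """Average-consensus fusion of per-agent target-state estimates.
    x_list/P_list: posterior mean/covariance of the target state x^A per agent;
    neighbors[i]:  index set N_i of agent i's communication neighborhood;
    w:             consensus weight (choose w * max degree < 1)."""
    Omega = [np.linalg.inv(P) for P in P_list]    # information matrices
    q = [Om @ x for Om, x in zip(Omega, x_list)]  # information vectors
    for _ in range(iters):                        # tau <- tau + 1
        q_new, Om_new = [], []
        for i in range(len(q)):
            dq = sum(q[j] - q[i] for j in neighbors[i])
            dO = sum(Omega[j] - Omega[i] for j in neighbors[i])
            q_new.append(q[i] + w * dq)
            Om_new.append(Omega[i] + w * dO)
        q, Omega = q_new, Om_new
    P_out = [np.linalg.inv(Om) for Om in Omega]   # recover covariances
    x_out = [P @ qi for P, qi in zip(P_out, q)]   # recover state estimates
    return x_out, P_out
```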
the invention provides a preferred embodiment to verify the effectiveness of the embodiment provided by the invention and evaluate the effect.
The evaluation index for agent localization is the mean localization root mean square error over all agents:

$$\mathrm{RMSE}_{\mathrm{loc}}(k) = \sqrt{\frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} \big\lVert p_n^m(k) - \hat{p}_n^m(k) \big\rVert^2}$$

where $p_n^m(k)$ is the true position of agent n at time k in the m-th Monte Carlo simulation (the true value is generated by the simulation program), $\hat{p}_n^m(k)$ is its estimate, M is the total number of Monte Carlo simulations, and N is the total number of agents.
The evaluation index for target tracking is the root mean square error:

$$\mathrm{RMSE}_{\mathrm{trk}}(k) = \sqrt{\frac{1}{M} \sum_{m=1}^{M} \big\lVert p^m(k) - \hat{p}^m(k) \big\rVert^2}$$

where $p^m(k)$ is the true target position at time k in the m-th Monte Carlo simulation (the true value is generated by the simulation program), $\hat{p}^m(k)$ is its estimate, and M is the total number of Monte Carlo simulations.
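For concreteness, a short numpy sketch of the two indices; the array shapes are an assumed layout of the simulation output, not specified in the patent:

```python
import numpy as np

def rmse_over_runs(truth, est):
    """Per-time-step tracking RMSE from Monte Carlo runs.
    truth/est: arrays of shape (M, K, 3) -- M runs, K time steps, xyz.
    Returns an array of length K."""
    sq = np.sum((truth - est) ** 2, axis=-1)  # squared position error, (M, K)
    return np.sqrt(sq.mean(axis=0))           # average over the M runs

# For the localization index, average additionally over the N agents,
# with truth/est of shape (M, N, K, 3):
# np.sqrt(((truth - est) ** 2).sum(-1).mean(axis=(0, 1)))
```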
In an embodiment of the invention, 100 Monte Carlo simulation experiments were performed; the comparison of the positioning root mean square error of the mobile agent cluster is shown in FIG. 3, and the comparison of the target tracking root mean square error in FIG. 4. The results show that, in the setting of this embodiment, the invention effectively reduces both the positioning root mean square error and the target tracking root mean square error of the mobile agent cluster.
The inertial navigation error compensation method for the multi-agent system in target tracking applications according to the embodiments of the invention improves the localization accuracy of the inertial-navigation agent cluster and thereby the target tracking accuracy, and has application prospects in many practical scenarios.
The foregoing describes specific embodiments of the present invention. It should be understood that the invention is not limited to the specific embodiments described above; persons skilled in the art may make various changes and modifications within the scope of the claims without departing from the spirit of the invention.

Claims (10)

1. An inertial navigation error compensation method for multiple mobile agents in target tracking, characterized by comprising the following steps:
unifying the measurement data of the multiple agents into measurement information in the form of distance and angle;
each agent locally performing Kalman filtering;
fusing the measurement information with the prediction information of the Kalman filter, performing target tracking to obtain the target state estimate, and estimating the inertial navigation error compensation value;
each agent communicating its target state estimate with the agents within communication range, and obtaining target tracking information through consensus fusion.
2. The method of claim 1, wherein unifying the measurement data of the multiple agents into distance-and-angle measurement information comprises:
acquiring a raw image collected by a camera-equipped agent and detecting the target in the image to obtain the angle of the target relative to the camera;
acquiring the point cloud collected by a radar-equipped agent and clustering the raw point cloud data to obtain the distance and angle of the target relative to the radar;
taking the target information measured by the cameras and by the radars at the same instant as input to obtain pseudo-range estimates for the angle measurements, so that the measurement information of camera- and radar-equipped agents is unified into distance-and-angle form.
3. The method of claim 2, wherein acquiring the raw image collected by the camera-equipped agent and detecting the target in the image to obtain the angle of the target relative to the camera comprises: detecting the position of the target in the raw image with an image object detection method, and obtaining the angle of the target relative to the camera from the relative position of the detection with respect to the center point of the raw image.
4. The method of claim 2, wherein the clustering methods include the DBSCAN and K-means clustering algorithms, which yield the distance and angle of the measured target relative to the radar.
5. The method of claim 2, wherein the target information measured by the cameras and by the radars at the same instant is taken as input to obtain pseudo-range estimates for the angle measurements, so that the measurement information of camera- and radar-equipped agents is unified into distance-and-angle form; the measurement information of the target is communicated with the other agents and contains at least one of the two measurement types (camera and radar), and the position estimate of the target is computed by the least squares method.
6. The method of claim 5, wherein the pseudo-range estimation comprises:
for an agent i equipped with a camera, obtaining a linear system of equations by transformation from the target position and the camera angle measurements:

$$A_i^C X = b_i^C,\quad A_i^C = \begin{bmatrix} \sin\alpha_i & -\cos\alpha_i & 0 \\ \sin\beta_i\cos\alpha_i & \sin\beta_i\sin\alpha_i & -\cos\beta_i \end{bmatrix},\quad b_i^C = A_i^C \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix}$$

where $\alpha_i$ and $\beta_i$ are the azimuth and elevation of the target measured by the camera of agent i, $(x_i, y_i, z_i)$ is the position of agent i, and $X = (x, y, z)^{\mathrm{T}}$ is the target position;
stacking the linear systems of all camera measurements as:

$$A_C X = b_C$$

for an agent j equipped with a radar, obtaining a linear system of equations from its position and the radar measurements:

$$A_j^L X = b_j^L,\quad A_j^L = I_3,\quad b_j^L = \begin{bmatrix} x_j + r_j\cos\beta_j\cos\alpha_j \\ y_j + r_j\cos\beta_j\sin\alpha_j \\ z_j + r_j\sin\beta_j \end{bmatrix}$$

where $r_j$, $\alpha_j$ and $\beta_j$ are the distance, azimuth and elevation of the target relative to agent j measured by its radar, and $(x_j, y_j, z_j)$ is the position of agent j;
stacking the linear systems of all radar measurements as:

$$A_L X = b_L$$

combining the camera and radar systems to obtain:

$$A X = b,\quad A = \begin{bmatrix} A_C \\ A_L \end{bmatrix},\quad b = \begin{bmatrix} b_C \\ b_L \end{bmatrix}$$

solving the target position estimate by the least squares method:

$$\hat{X} = (A^{\mathrm{T}} A)^{-1} A^{\mathrm{T}} b$$

and computing the pseudo-range measurement estimate from camera-equipped agent i to the target:

$$\hat{r}_i = \lVert \hat{X} - X_i \rVert$$

where $\lVert\cdot\rVert$ is the Euclidean norm and $X_i$ is the position of agent i.
7. The method of claim 1, wherein the Kalman filtering comprises linear and non-linear Kalman filtering methods.
8. The method of claim 7, wherein the measurement information and the prediction information of the Kalman filter are fused, target tracking is performed to obtain the target state estimate, and the inertial navigation error compensation value is estimated; the Kalman filtering is the nonlinear unscented Kalman filter, with the target state and the inertial navigation error compensation value as the state variables, comprising the following steps:
sampling: from the state variable estimate $\hat{x}_{k-1|k-1}$ and covariance $P_{k-1|k-1}$ at time k-1, obtaining the sigma point set $\{\chi_{k-1}^{(l)}, W^{(l)}\}_{l=1}^{L}$ by the unscented transform (UT), where L is the number of sampling points and $W^{(l)}$ is the weight of the corresponding sampling point;
prediction process: given the state equation f(·), the measurement equation h(·) and the process noise variance matrix $Q_t$:

$$\chi_{k|k-1}^{(l)} = f\big(\chi_{k-1}^{(l)}\big)$$

$$\hat{x}_{k|k-1} = \sum_{l=1}^{L} W^{(l)} \chi_{k|k-1}^{(l)}$$

$$P_{k|k-1} = \sum_{l=1}^{L} W^{(l)} \big(\chi_{k|k-1}^{(l)} - \hat{x}_{k|k-1}\big)\big(\chi_{k|k-1}^{(l)} - \hat{x}_{k|k-1}\big)^{\mathrm{T}} + Q_t$$

$$\gamma_{k|k-1}^{(l)} = h\big(\chi_{k|k-1}^{(l)}\big)$$

$$\hat{z}_{k|k-1} = \sum_{l=1}^{L} W^{(l)} \gamma_{k|k-1}^{(l)}$$

obtaining through the prediction process the predicted state variable $\hat{x}_{k|k-1}$ at time k, the state variable covariance $P_{k|k-1}$ and the predicted measurement $\hat{z}_{k|k-1}$;
measurement update: given the measurement noise variance matrix $R_t$:

$$P_{zz} = \sum_{l=1}^{L} W^{(l)} \big(\gamma_{k|k-1}^{(l)} - \hat{z}_{k|k-1}\big)\big(\gamma_{k|k-1}^{(l)} - \hat{z}_{k|k-1}\big)^{\mathrm{T}} + R_t$$

$$P_{xz} = \sum_{l=1}^{L} W^{(l)} \big(\chi_{k|k-1}^{(l)} - \hat{x}_{k|k-1}\big)\big(\gamma_{k|k-1}^{(l)} - \hat{z}_{k|k-1}\big)^{\mathrm{T}}$$

$$K_k = P_{xz} P_{zz}^{-1}$$

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \big(z_k - \hat{z}_{k|k-1}\big)$$

$$P_{k|k} = P_{k|k-1} - K_k P_{zz} K_k^{\mathrm{T}}$$

obtaining through the measurement update the state variable estimate $\hat{x}_{k|k}$ at time k and the state variable covariance $P_{k|k}$.
9. The method of claim 1, wherein each agent communicates its target state estimate with the agents within communication range and obtains target tracking information through consensus fusion, comprising:
the state variable x of each agent containing two parts, the target state $x^A$ and the inertial navigation error compensation value $x^B$, with consensus fusion performed on the target state $x^A$;
each agent converting its posterior estimate of the target state at time k, $\hat{x}_{k|k}^A$ with covariance $P_{k|k}^A$, into information vector and information matrix form:

$$q_i^0 = \big(P_{k|k}^A\big)^{-1}\hat{x}_{k|k}^A, \qquad \Omega_i^0 = \big(P_{k|k}^A\big)^{-1}$$

agent i sending $(q_i^{\tau}, \Omega_i^{\tau})$ to the agents in its communication neighborhood and performing consensus-weighted fusion with the information vectors and information matrices received from them:

$$q_i^{\tau+1} = q_i^{\tau} + w \sum_{j \in N_i} \big(q_j^{\tau} - q_i^{\tau}\big)$$

$$\Omega_i^{\tau+1} = \Omega_i^{\tau} + w \sum_{j \in N_i} \big(\Omega_j^{\tau} - \Omega_i^{\tau}\big)$$

$$\tau \leftarrow \tau + 1$$

where w is the consensus weight, $N_i$ is the set of indices of the agents in the neighborhood of agent i, and τ is the consensus iteration count;
and obtaining the local state estimate (the target tracking information) and covariance:

$$\hat{x}_{k|k}^A = \big(\Omega_i^{\tau}\big)^{-1} q_i^{\tau}, \qquad P_{k|k}^A = \big(\Omega_i^{\tau}\big)^{-1}$$
10. An inertial navigation error compensation system for multiple mobile agents in target tracking, characterized by comprising:
an image data processing module, which acquires the raw image data collected by the camera-equipped agents and detects the target in the image to obtain the angle of the target relative to the camera;
a point cloud data processing module, which acquires the point clouds collected by the radar-equipped agents and clusters the raw point cloud data to obtain the distance and angle of the target relative to the radar;
a pseudo-range estimation module for angle measurements, which takes the target information measured by the cameras and by the radars at the same instant as input and computes pseudo-range estimates, unifying the measurement information of camera- and radar-equipped agents into distance-and-angle form;
a target tracking module based on distributed Kalman filtering, which takes the target distance and angle as input information; each agent locally runs a Kalman filter, fuses the filter prediction with the input information, tracks the target to obtain the target state estimate, and estimates the inertial navigation error compensation value;
each agent communicating only the target state estimate with the agents within its communication range, and obtaining target tracking information through consensus fusion.
CN202210349774.XA 2022-04-02 2022-04-02 Inertial navigation error compensation method and system for multiple mobile agents in target tracking Pending CN114705223A

Priority Applications (1)

Application Number: CN202210349774.XA
Priority Date / Filing Date: 2022-04-02
Title: Inertial navigation error compensation method and system for multiple mobile agents in target tracking

Publications (1)

Publication Number: CN114705223A
Publication Date: 2022-07-05

Family

ID=82172033

Country Status (1)

Country: CN
Publication: CN114705223A

Cited By (2)

* Cited by examiner, † Cited by third party

Publication number Priority date Publication date Assignee Title
CN115379560A * 2022-08-22 2022-11-22 昆明理工大学 Target positioning and tracking method under range-only measurement information in a wireless sensor network
CN115379560B * 2022-08-22 2024-03-08 昆明理工大学 Target positioning and tracking method in a wireless sensor network under range-only measurement information


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination