CN113433828B - Multi-robot dynamic target tracking cooperative control method - Google Patents

Multi-robot dynamic target tracking cooperative control method

Info

Publication number
CN113433828B
CN113433828B
Authority
CN
China
Prior art keywords
robot
target
robots
dynamic target
dynamic
Prior art date
Legal status
Active
Application number
CN202110980698.8A
Other languages
Chinese (zh)
Other versions
CN113433828A
Inventor
于丹
王谦
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202110980698.8A
Publication of CN113433828A
Application granted
Publication of CN113433828B
Active legal status
Anticipated expiration

Classifications

    • G05B13/042 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion, electric, involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators, characterised by motion, path, trajectory planning
    • B25J9/1682 Dual arm manipulator; Coordination of several manipulators


Abstract

The invention discloses a multi-robot dynamic target tracking cooperative control method, which comprises the following steps: step 1, establishing a generalized model of a target tracking system of a moving target, wherein the generalized model comprises a kinematic model of the moving target and an observation model of the target tracking system; step 2, estimating the target state information at the current time with an extended Kalman filter algorithm from the measurements of the mobile robots, fusing the state estimates of the target from the multiple robots with a weighted average method as the optimal estimate of the target state at the current time, and performing one-step prediction on the dynamic target; step 3, solving the minimum perception quality on the premise of meeting the coverage quality, based on an approximate greedy algorithm; and step 4, planning the moving paths of the multiple robots at the next time to realize cooperative tracking by the multiple robots. The invention can realize real-time tracking of a dynamic target by multiple robots and, compared with traditional algorithms, greatly reduces the computational load while ensuring tracking accuracy.

Description

Multi-robot dynamic target tracking cooperative control method
Technical Field
The invention relates to the technical field of multi-robot control systems, in particular to a multi-robot dynamic target tracking cooperative control method.
Background
Target tracking is an important research direction in the robotics field, and robot dynamic target tracking technology is widely applied both in the military field and in people's daily life. In the military domain it can be used for locating, tracking and attacking enemy targets in battlefield monitoring and interception tasks: subsequent interception and attack tasks can only be executed once the motion trajectory of the enemy target has been obtained accurately. In the civil field, service robots are becoming more and more intelligent, and intelligent mobile robots are used in more and more public places to provide services for people, such as intelligent robots in bank halls; a key link in the intellectualization of service robots is target recognition and tracking by the robot. Dynamic target tracking is currently a hot problem in the robotics field, and more and more scholars at home and abroad are devoted to research on multi-robot dynamic target tracking technology and have obtained notable results.
A common approach in multi-robot dynamic target tracking is the model-based prediction method. In order to avoid monitoring blind spots in the monitored area while also accounting for the tracking accuracy of the dynamic target, the multi-robot cooperative control problem can be modeled as an optimal control problem, i.e., finding the optimal multi-robot control strategy that yields the best tracking accuracy subject to the constraint conditions. This problem is considered NP-hard: the amount of computation needed to find the optimal solution grows exponentially as the number of robots increases. Even with a traditional greedy algorithm, in which a locally optimal solution is searched at each time by predicting the target state at the next time, the computational load remains huge and real-time planning cannot be achieved.
Disclosure of Invention
In order to overcome the defects of traditional estimation methods for multi-robot dynamic target tracking, the invention provides a multi-robot dynamic target tracking cooperative control method that realizes real-time tracking of a dynamic target by multiple robots and obtains the minimum perception quality while meeting the coverage quality requirement.
The technical scheme of the invention is as follows: a multi-robot dynamic target tracking cooperative control method comprises a dynamic target G and n robots for observing the dynamic target G, and is characterized in that: further comprising the steps of:
Step 1, establishing a generalized model of the target tracking system of the dynamic target, wherein the generalized model comprises a kinematic model of the dynamic target and an observation model of the target tracking system; judging whether the dynamic target can be detected through the sensors carried by the mobile robots, and if so entering step 2, otherwise entering step 3.
Step 2, measuring the distance between each robot and the dynamic target through the observation model of the target tracking system, estimating the state and the error covariance matrix of the target at the current time, fusing the state estimates of the target from the n robots by a weighted average method as the optimal estimate of the target state at the current time, and performing one-step prediction on the dynamic target G.
Step 3, solving the minimum perception quality under the condition that the coverage quality is met, based on an approximate greedy algorithm.
Step 4, planning the optimal robot positions at the next time according to the minimum perception quality solved in step 3, and moving the robots to realize tracking of the dynamic target G.
Preferably, the dynamic target kinematics model in step 1 is a CT model, and is described as X (k + 1) = AX (k) + W (k), wherein X is a state variable, A is a state transition matrix, W is system noise, k is an arbitrary time, and k is greater than or equal to 0.
Preferably, the observation model of the target tracking system in step 1 is as follows: each robot carries an ultrasonic sensor capable of measuring the distance between the robot and the target, described as Z_i(k) = h_i(X(k)) + V_i(k), where Z_i is the measurement of robot i and h_i is the relative distance between the dynamic target and robot i:

h_i = sqrt((x - x_i)^2 + (y - y_i)^2)

where (x_i, y_i) is the position of mobile robot i, (x, y) is the position of the dynamic target G, V_i is the measurement noise of the robot itself, X is the state variable, and the subscript i denotes the i-th robot, i = 1, 2, …, n.
Preferably, in step 2 the state prediction and the error covariance prediction are obtained by an extended Kalman filter algorithm based on the kinematic model, so as to update the state and error covariance matrix of the target at the current time.
Preferably, the weighted averaging method is: the estimates obtained by all robots observing the target are given the same weight, i.e., the estimates of all robots are averaged as the estimate of the target state; the state estimates of the multiple robots are thus fused as the optimal estimate of the target state at the current time, and one-step prediction is performed on the dynamic target G.
Preferably, step 3 specifically comprises: step 3.1, defining the perception quality J_sense as the determinant of the fused error covariance matrix P, J_sense = det(P); step 3.2, defining the coverage quality as

J_cov = A_cov / A_foi

where A_cov is the area of the region covered by at least one robot and A_foi is the total area of the monitoring range; step 3.3, defining a decision function I_i for the detection range of robot i, where the function I_i is 1 when robot i can observe and 0 when robot i cannot observe, and further determining the expression between the coverage quality J_cov and the detection-range function I_i by a double-integral area method, from which the coverage quality is solved; and step 3.4, solving the minimum perception quality on the premise of meeting the coverage quality according to an approximate greedy algorithm.
Preferably, the coverage quality J_cov in step 3.4 is required to satisfy J_cov ≥ 0.7.
Preferably, the approximate greedy algorithm in step 3.4 plans the optimal control strategy of each robot one by one at any time, and the minimum perception quality is obtained through a traversal search method.
Preferably, the approximate greedy algorithm in step 3.4 specifically comprises: assuming that the 1st robot moves, the positions of the other n-1 robots are fixed, all possible perception qualities J_sense meeting the coverage quality requirement are solved, and the position of the 1st robot corresponding to the optimal perception quality is selected; the 1st robot is kept at its optimal position, the 2nd robot is assumed to move while the other n-2 robots are kept still, all possible perception qualities J_sense are solved, and the position of the 2nd robot corresponding to the optimal perception quality is selected; the 1st and 2nd robots are then kept still at their optimal positions, the remaining n-2 robots are traversed in the same way, and when the n-th robot has been traversed, the minimum perception quality of the group of n robots is obtained.
Beneficial effects:
(1) the invention can realize the real-time tracking of the dynamic target by multiple robots by predicting the dynamic target and solving the optimal perception quality;
(2) the method performs one-step prediction on the dynamic target with the approximate greedy algorithm and only needs 5n calculations at any time; compared with the 5^n calculations needed by the traditional traversal search method, the computational load is greatly reduced and the computation speed is improved; the target state error stays within the confidence interval, so the tracking accuracy is guaranteed;
(3) according to the invention, the optimal control strategy of each robot is planned one by one at any time through an approximate greedy algorithm, so that real-time planning can be realized, and the real-time performance of robot tracking is improved.
Drawings
FIG. 1 is a flowchart of a multi-robot dynamic target tracking cooperative control method according to an embodiment of the present invention;
FIG. 2 is an initial distribution diagram of a target real motion trajectory and a robot according to an embodiment of the present invention;
FIG. 3 is a comparison of the target real trajectory and the predicted trajectory midway through the tracking process according to an embodiment of the present invention;
FIG. 4 is a comparison of the target real trajectory and the predicted trajectory over the whole tracking process according to an embodiment of the present invention;
FIG. 5 is a graph of the X-direction position error and its confidence interval according to an embodiment of the present invention;
FIG. 6 is a graph of the X-direction velocity error and its confidence interval according to an embodiment of the present invention;
FIG. 7 is a graph of the Y-direction position error and its confidence interval according to an embodiment of the present invention;
FIG. 8 is a graph of the Y-direction velocity error and its confidence interval according to an embodiment of the present invention;
FIG. 9 is a graph of the coverage quality data at each time instant according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention discloses a novel multi-robot dynamic target tracking cooperative control method, which comprises a dynamic target G and n robots for observing the dynamic target G, and as shown in figure 1, the method comprises the following steps:
Step 1, establishing a generalized model of the target tracking system of the dynamic target, wherein the generalized model comprises a kinematic model of the dynamic target and an observation model of the target tracking system. It is then judged whether the dynamic target is within the observation range of any robot; if so, step 2 is entered, otherwise step 3 is entered.
The dynamic target kinematic model adopts a CT model, i.e., a uniform turning motion model, described as X(k+1) = AX(k) + W(k), where k is any time with k ≥ 0 and X is the state variable

X = [x, vx, y, vy]^T

in which x and y denote the position of the dynamic target in a Cartesian coordinate system and vx and vy denote the velocity of the dynamic target in the x and y directions; A is the state transition matrix, W is the system noise, and the system noise sequence is assumed to be Gaussian white noise with mean 0 and covariance matrix Q. The state transition matrix A takes the standard constant-turn form

A = [ 1   sin(wT)/w        0   -(1-cos(wT))/w
      0   cos(wT)          0   -sin(wT)
      0   (1-cos(wT))/w    1    sin(wT)/w
      0   sin(wT)          0    cos(wT) ]

where w represents the turning angular velocity and T is the sampling time.
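As an illustration of the CT model above, the following Python sketch propagates the state one step using the matrix A just given; the function names, the noise covariance Q and the numerical values are illustrative assumptions and not parameters fixed by this description.

import numpy as np

def ct_transition_matrix(w, T):
    # State transition matrix A of the CT model for the state [x, vx, y, vy]
    s, c = np.sin(w * T), np.cos(w * T)
    return np.array([
        [1.0, s / w,         0.0, -(1.0 - c) / w],
        [0.0, c,             0.0, -s],
        [0.0, (1.0 - c) / w, 1.0, s / w],
        [0.0, s,             0.0, c],
    ])

def propagate(X, w, T, Q, rng):
    # One-step prediction X(k+1) = A X(k) + W(k), with W ~ N(0, Q)
    A = ct_transition_matrix(w, T)
    return A @ X + rng.multivariate_normal(np.zeros(4), Q)

rng = np.random.default_rng(0)
X0 = np.array([90.0, 1.0, 0.0, 1.0])   # illustrative [x, vx, y, vy]
Q = 0.01 * np.eye(4)                   # assumed system-noise covariance
X1 = propagate(X0, w=0.1, T=1.0, Q=Q, rng=rng)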
The observation model of the target tracking system is as follows: each robot carries an ultrasonic sensor that can measure the distance between the robot and the target, and the measurement model of the target tracking system is Z_i(k) = h_i(X(k)) + V_i(k), where Z_i is the measurement of robot i and h_i is the relative distance between the dynamic target and robot i:

h_i = sqrt((x - x_i)^2 + (y - y_i)^2)

where (x_i, y_i) is the position of mobile robot i, (x, y) is the position of the dynamic target G, and V_i is the measurement noise of the robot, assumed to be Gaussian white noise with mean 0 and covariance matrix R; X is the state variable, and the subscript i denotes the i-th robot, i = 1, 2, …, n. FIG. 2 shows the initial distribution of the target real motion trajectory and the robots.
Step 2, measuring the distance between each robot and the dynamic target through an observation model of a target tracking system, estimating the state and the error covariance matrix of the target at the current moment through an extended Kalman filtering algorithm, performing data fusion on the state estimation of the targets by the n robots by using a weighted average method to serve as the optimal estimation of the target state at the current moment, and performing one-step prediction on the dynamic target G, wherein the method specifically comprises the following steps:
Step 2.1, updating the state X_i(k|k) and the error covariance matrix P_i(k|k) of the target at the current time with the extended Kalman filter algorithm, specifically:

The state prediction at any time k is X_i(k|k-1) = A X_i(k-1|k-1); the measurement prediction is Z_i(k|k-1) = h_i(X_i(k|k-1)).

The observation equation is linearized, and the Jacobian matrix is H_i(k) = ∂h_i/∂X, evaluated at the predicted state X_i(k|k-1).

Covariance prediction: P_i(k|k-1) = A P_i(k-1|k-1) A^T + Q, where Q is the covariance matrix of the system noise W and A is the state transition matrix.

From the covariance prediction P_i(k|k-1), the Jacobian matrix H_i and the measurement noise covariance R, the Kalman gain is solved as

K_i(k) = P_i(k|k-1) H_i^T (H_i P_i(k|k-1) H_i^T + R)^(-1).

Substituting the Kalman gain into the state update gives the state variable X:

X_i(k|k) = X_i(k|k-1) + K_i(k) (Z_i(k) - Z_i(k|k-1))

and the covariance update:

P_i(k|k) = (I - K_i(k) H_i(k)) P_i(k|k-1).
step 2.2: a weighted average method in data fusion is adopted, namely estimated values obtained by all robots observing the target are endowed with weight wiAnd performing data fusion, and further predicting the dynamic target G by using the data fusion information. The method adopts the same weight for the estimated values of all the robots, namely, the average value of the estimated values of all the robots is taken as the optimal estimation of the target state at the current moment.
Step 3, solving the minimum perception quality on the premise of meeting the coverage quality, based on an approximate greedy algorithm.
Step 3.1, the perception quality is defined as a determinant of the fused error covariance matrix P, and the formula is as follows: j. the design is a squaresense=det(P)。
Step 3.2, when designing the network scheduling algorithm of the mobile robots, the coverage quality and the perception quality indices are considered together: the perception quality is optimized and the moving paths of the robots are planned under the condition that the coverage quality is satisfied. The coverage quality is defined as

J_cov = A_cov / A_foi

where A_cov is the area covered by at least one robot, which depends on the positions of the mobile robots, and A_foi is the total area of the monitored range; FIG. 9 shows the coverage quality data at each time.
Step 3.3, assuming that the detection range of the robot i is riDefining a decision function Ii
Figure 978082DEST_PATH_IMAGE018
Which isIn (1),
Figure 139068DEST_PATH_IMAGE019
if the relative distance between a point in the monitoring area and the robot i is 1, the robot i can observe the point (x, y), and otherwise, the result is 0.
Therefore, based on the current positions of all robots, the coverage quality J_cov in the current state can be expressed as

J_cov = (1 / A_foi) ∬ [1 - ∏_{i=1}^{n} (1 - I_i(x, y))] dx dy

where the double integral is taken over the monitoring area, i = 1, 2, …, n denotes the i-th robot, and A_foi is the total area of the monitoring range; the integrand equals 1 exactly at the points covered by at least one robot.
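The double integral above can be approximated numerically. The short sketch below evaluates J_cov on a regular grid over the 120 m × 120 m monitoring area used in this embodiment; the grid resolution and the example robot positions are illustrative assumptions.

import numpy as np

def coverage_quality(robot_positions, radii, side=120.0, cells=240):
    # Approximate J_cov = (area covered by at least one robot) / A_foi on a square region
    xs = np.linspace(0.0, side, cells)
    ys = np.linspace(0.0, side, cells)
    gx, gy = np.meshgrid(xs, ys)
    covered = np.zeros_like(gx, dtype=bool)
    for (rx, ry), r in zip(robot_positions, radii):
        # Indicator I_i(x, y): 1 inside the detection range of robot i
        covered |= (gx - rx) ** 2 + (gy - ry) ** 2 <= r ** 2
    return covered.mean()   # fraction of grid points covered, i.e. A_cov / A_foi

# Example: three robots with a 20 m detection radius
print(coverage_quality([(30.0, 30.0), (60.0, 90.0), (100.0, 40.0)], radii=[20.0] * 3))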
Step 3.4, solving the minimum value of the predicted target perception quality at the next time according to the approximate greedy algorithm under the constraint that the coverage quality is at least 0.7. Specifically: it is assumed that at any time every robot can either keep still or move at the same speed in one of the four directions up, down, left and right, so at any time there are five possible control strategies for each robot. In this embodiment 9 robots are selected; with the conventional traversal search method, the joint control strategy of the 9 robots at any time (5 strategies per robot) gives 5^9, about 1.95 million, combinations, so the computational load is huge and real-time planning cannot be realized. Therefore, an approximate method is adopted to plan the optimal control strategy of each robot one by one at any time. Specifically, the positions of the other 8 robots are first fixed, the 1st robot is assumed to move, all possible perception qualities J_sense meeting the coverage quality requirement are solved, and the position of the 1st robot corresponding to the optimal perception quality is selected; then the 1st robot is kept still at its optimal position, the 2nd robot is assumed to move while the other 7 robots are kept still, each possible perception quality is solved, and the position of the 2nd robot corresponding to the optimal perception quality is selected; the 1st and 2nd robots are then kept still at their optimal positions and the remaining 7 robots are traversed in the same way, finally obtaining the minimum perception quality. With this method only 5 × 9 = 45 calculations are required at any time, and the computational load is greatly reduced, as sketched below.
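A minimal Python sketch of this sequential (approximate greedy) planning loop follows, assuming the perception quality and the coverage quality are available as callables, for example built from the predicted EKF covariance and the grid-based coverage estimate above; the move set, step size and threshold mirror the embodiment, while all names are illustrative.

import numpy as np

MOVES = [(0.0, 0.0), (0.0, 1.0), (0.0, -1.0), (-1.0, 0.0), (1.0, 0.0)]  # stay, up, down, left, right

def approximate_greedy(positions, j_sense, j_cov, step=1.0, cov_min=0.7):
    # Plan one move per robot, one robot at a time: 5 * n evaluations instead of 5 ** n.
    # positions : list of (x, y) robot positions at the current time
    # j_sense   : callable(positions) -> predicted det(P) at the next time step
    # j_cov     : callable(positions) -> coverage quality of the candidate positions
    planned = [tuple(p) for p in positions]
    for i in range(len(planned)):
        best_pos, best_cost = planned[i], np.inf   # fall back to the current position if no feasible move is found
        for dx, dy in MOVES:
            candidate = list(planned)
            candidate[i] = (planned[i][0] + dx * step, planned[i][1] + dy * step)
            if j_cov(candidate) < cov_min:         # coverage constraint J_cov >= 0.7
                continue
            cost = j_sense(candidate)              # perception quality to be minimized
            if cost < best_cost:
                best_pos, best_cost = candidate[i], cost
        planned[i] = best_pos                      # fix robot i before planning robot i+1
    return planned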
Step 4, planning the optimal robot positions at the next time according to the minimum perception quality solved in step 3, and moving the robots to realize cooperative tracking of the dynamic target G by the multiple robots.
The above steps are iterated, with one iteration per time step.
The embodiment of the invention is implemented as follows: a simulation experiment is carried out with Matlab 2019b. The monitoring range is a square area of 120 m × 120 m; 9 mobile robots are uniformly distributed at the initial time, the observation range of each robot is a circular area with a radius of 20 m, and the moving speed is 1 m/s. The initial state of the dynamic target is the initial position (90, 0) and initial velocity (120 m/s, 230 m/s), and it performs uniform turning motion at w = 5 rad/s. The total observation time is 50 s, the sampling time is 1 s, and the minimum coverage quality requirement is 0.7. The experimental procedure is as follows. Step 1, establishing the generalized model of the target tracking system. Step 2, estimating the current state of the target, i.e., its position and velocity, from the measurement data obtained by all robots that can detect the dynamic target at the current time, and predicting the position and velocity of the target at the next time. Step 3, sequentially selecting the optimal moving strategy of each robot according to the prediction of the target at the next time, so that the multiple robots obtain the optimal state estimate of the target at the next time under the constraint that the coverage quality is at least 0.7. Step 4, moving the multiple robots according to the result of step 3, and returning to step 1.
Fig. 2 shows the real trajectory of the target and the initial distribution of the robots; it can be seen that only 1 robot can observe the moving target at the initial time. Fig. 3 compares the target real trajectory and the predicted trajectory at half of the dynamic target tracking time, and Fig. 4 compares them over the whole tracking process. The real target trajectory and the predicted trajectory are highly coincident, which verifies the feasibility and the high goodness of fit of the method. At the same time, because the coverage quality requirement is satisfied, the situation that all robots gather near the target does not occur, and the coverage quality parameter can be adjusted to meet different task requirements. Figs. 5-8 show the errors of the target state estimate. Four state variable errors E_x, E_vx, E_y and E_vy are defined, representing the position errors of the state variable in the x and y directions and the velocity errors in the x and y directions, respectively:

E_x = X_ekf - X, where X_ekf and X respectively denote the estimated value and the true value of the target position in the x direction;
E_vx = Vx_ekf - Vx, where Vx_ekf and Vx respectively denote the estimated value and the true value of the target velocity in the x direction;
E_y = Y_ekf - Y, where Y_ekf and Y respectively denote the estimated value and the true value of the target position in the y direction;
E_vy = Vy_ekf - Vy, where Vy_ekf and Vy respectively denote the estimated value and the true value of the target velocity in the y direction.

The errors between the actual target state and the estimated state are represented by these four state variable errors, and the confidence interval corresponds to the case when the offset is not considered. From Fig. 5 to Fig. 8 it can be seen that the errors of the four state variables all lie within the confidence interval, which verifies that the dynamic target can be tracked accurately with this method.
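As a small illustrative complement to the consistency check shown in Figs. 5-8, the sketch below compares the state errors against bounds derived from the fused covariance; the ±3-sigma bound used here is an assumption chosen for illustration, since the exact confidence level behind the plotted interval is not reproduced in this text.

import numpy as np

def state_errors(X_est, X_true):
    # E_x, E_vx, E_y, E_vy for the state ordering [x, vx, y, vy]
    return X_est - X_true

def within_bounds(X_est, X_true, P, n_sigma=3.0):
    # True where each error component lies inside +/- n_sigma * sqrt(diag(P))
    return np.abs(state_errors(X_est, X_true)) <= n_sigma * np.sqrt(np.diag(P))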
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. A multi-robot dynamic target tracking cooperative control method comprises a dynamic target G and n robots for observing the dynamic target G, wherein n is the number of the robots and n is more than or equal to 2, and the method is characterized in that: further comprising the steps of:
step 1, establishing a generalized model of a target tracking system of a dynamic target, wherein the generalized model comprises a kinematic model of the dynamic target and an observation model of the target tracking system; detecting a dynamic target through a sensor carried by the mobile robot;
step 2, measuring the distance between each robot and a dynamic target through an observation model of a target tracking system, estimating the state and the error covariance matrix of the target at the current moment through an extended Kalman filter algorithm based on the measurement data of the robot detecting the target, performing data fusion on the state estimation of the target by the robot detecting the target by using a weighted average method to obtain the fused state and the error covariance matrix as the optimal estimation of the target state at the current moment, and performing one-step prediction on the dynamic target G;
step 3, solving the minimum perception quality of the n robots based on an approximate greedy algorithm under the condition of meeting the coverage quality, specifically: step 3.1, defining the perception quality J_sense as the determinant of the fused error covariance matrix P, J_sense = det(P);
step 3.2, defining the coverage quality as J_cov = A_cov / A_foi, where A_cov is the area of the region covered by at least one robot and A_foi is the total area of the monitoring range;
step 3.3, defining a decision function I_i for the detection range of robot i, where the function I_i is 1 when robot i can observe and 0 when robot i cannot observe, and further determining the expression between the coverage quality J_cov and the detection-range function I_i by a double-integral area method, from which the coverage quality is solved;
step 3.4, on the premise of meeting the coverage quality, planning the optimal control strategy of each robot one by one at any time according to the approximate greedy algorithm, and obtaining the minimum perception quality through a traversal search method;
and step 4, planning the optimal robot positions at the next time according to the minimum perception quality solved in step 3, and moving the robots to realize tracking of the dynamic target G.
2. The multi-robot dynamic target tracking cooperative control method according to claim 1, characterized in that: the dynamic target kinematic model in step 1 is a CT model, described as X(k+1) = AX(k) + W(k), where X is the state variable, A is the state transition matrix, W is the system noise, k is any time, and k is greater than or equal to 0.
3. The multi-robot dynamic target tracking cooperative control method according to claim 2, characterized in that: the observation model of the target tracking system in step 1 is as follows: each robot carries an ultrasonic sensor capable of measuring the distance between the robot and the target, described as Z_i(k) = h_i(X(k)) + V_i(k), where Z_i is the measurement of robot i and h_i is the relative distance between the dynamic target and robot i, h_i = sqrt((x - x_i)^2 + (y - y_i)^2), where (x_i, y_i) is the position of mobile robot i, (x, y) is the position of the dynamic target G, V_i is the measurement noise of the robot itself, X is the state variable, and the subscript i denotes the i-th robot, i = 1, 2, …, n.
4. The multi-robot dynamic target tracking cooperative control method according to claim 3, characterized in that: the weighted average method is as follows: the estimated values obtained by all robots observing the target are given the same weight, i.e., the estimated values of all robots are averaged as the estimate of the target state; the state estimates of the plurality of robots are fused as the optimal estimate of the target state at the current moment, and one-step prediction is performed on the dynamic target G.
5. The multi-robot dynamic target tracking cooperative control method according to claim 4, characterized in that: the coverage quality J_cov in step 3.4 is required to satisfy J_cov ≥ 0.7.
6. The multi-robot dynamic target tracking cooperative control method according to claim 1 or 5, characterized in that: the approximate greedy algorithm in step 3.4 specifically comprises: assuming that the 1st robot moves, the positions of the other n-1 robots are fixed, all possible perception qualities J_sense meeting the coverage quality requirement are solved, and the position of the 1st robot corresponding to the optimal perception quality is selected; the 1st robot is kept at its optimal position, the 2nd robot is assumed to move while the other n-2 robots are kept still, all possible perception qualities J_sense are solved, and the position of the 2nd robot corresponding to the optimal perception quality is selected; the 1st and 2nd robots are kept still at their optimal positions, the remaining n-2 robots are traversed in the same way, and when the n-th robot has been traversed, the minimum perception quality of the group of n robots is obtained.
CN202110980698.8A 2021-08-25 2021-08-25 Multi-robot dynamic target tracking cooperative control method Active CN113433828B (en)


Publications (2)

Publication Number Publication Date
CN113433828A 2021-09-24
CN113433828B 2022-01-18





