CN113433828A - Multi-robot dynamic target tracking cooperative control method - Google Patents
- Publication number
- CN113433828A CN113433828A CN202110980698.8A CN202110980698A CN113433828A CN 113433828 A CN113433828 A CN 113433828A CN 202110980698 A CN202110980698 A CN 202110980698A CN 113433828 A CN113433828 A CN 113433828A
- Authority
- CN
- China
- Prior art keywords
- robot
- target
- robots
- dynamic target
- dynamic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/04—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
- G05B13/042—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1679—Programme controls characterised by the tasks executed
- B25J9/1682—Dual arm manipulator; Coordination of several manipulators
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Feedback Control In General (AREA)
Abstract
The invention discloses a multi-robot dynamic target tracking cooperative control method comprising the following steps: step 1, establish a generalized model of the target tracking system for a moving target, comprising a kinematic model of the moving target and an observation model of the target tracking system; step 2, estimate the target state at the current moment with an extended Kalman filter from each mobile robot's measurements, fuse the robots' state estimates by weighted averaging to obtain the optimal estimate of the current target state, and perform one-step prediction of the dynamic target; step 3, solve for the minimum perception quality subject to the coverage-quality constraint using an approximate greedy algorithm; step 4, plan the robots' moving paths for the next moment to realize cooperative multi-robot tracking. The invention realizes real-time multi-robot tracking of dynamic targets and, compared with traditional algorithms, greatly reduces the computational cost while maintaining tracking accuracy.
Description
Technical Field
The invention relates to the technical field of multi-robot control systems, in particular to a multi-robot dynamic target tracking cooperative control method.
Background
Target tracking is an important research direction in robotics, and robot dynamic target tracking is widely applied both in the military field and in daily life. In military applications it can be used to locate, track and attack enemy targets in battlefield surveillance and interception tasks; subsequent interception and attack can only be executed once the enemy target's trajectory is accurately obtained. In the civil field, service robots are increasingly intelligent and are deployed in more and more public places to serve people, such as intelligent robots in bank halls, and target recognition and tracking is a key link in that intelligence. Dynamic target tracking is currently a hot topic in robotics, and many scholars at home and abroad are devoted to multi-robot dynamic target tracking and have obtained notable results.
A common approach to multi-robot dynamic target tracking is model-based prediction. To avoid blind spots in the monitored area while maintaining tracking accuracy for the dynamic target, the multi-robot cooperative control problem can be modeled as an optimal control problem: find the optimal multi-robot control strategy, subject to the constraints, that optimizes tracking accuracy. This is an NP-hard problem: the computation needed to find the optimal solution grows exponentially with the number of robots. Even a traditional greedy algorithm, which searches for a locally optimal solution at each moment by predicting the target's state at the next moment, still faces a huge computational burden and cannot plan in real time.
Disclosure of Invention
In order to overcome the defects of the traditional estimation method for tracking the dynamic target of multiple robots, the invention provides a cooperative control method for tracking the dynamic target of multiple robots, which can realize the real-time tracking of the dynamic target by multiple robots, and can ensure the minimum perception quality while meeting the requirement of coverage quality.
The technical scheme of the invention is as follows: a multi-robot dynamic target tracking cooperative control method comprising a dynamic target G and n robots observing the dynamic target G, characterized by further comprising the following steps:
Step 1, establishing a generalized model of the target tracking system, comprising a kinematic model of the dynamic target and an observation model of the target tracking system; judging, via the sensor carried by each mobile robot, whether the dynamic target can be detected; if so, proceeding to step 2, otherwise proceeding to step 3.
Step 2, measuring the distance between each robot and the dynamic target through the observation model of the target tracking system, estimating the target's state and error covariance matrix at the current moment, fusing the n robots' state estimates by weighted averaging as the optimal estimate of the current target state, and performing one-step prediction of the dynamic target G.
Step 3, solving for the minimum perception quality subject to the coverage-quality constraint based on an approximate greedy algorithm.
Step 4, planning the optimal robot positions for the next moment according to the minimum perception quality solved in step 3, and moving the robots to track the dynamic target G.
Preferably, the dynamic target kinematic model in step 1 is a CT model, described as X(k+1) = AX(k) + W(k), where X is the state variable, A is the state transition matrix, W is the system noise, and k ≥ 0 is any time step.
Preferably, the observation model of the target tracking system in step 1 is: each robot carries an ultrasonic sensor that measures the distance between the robot and the target, described as Z_i(k) = h_i(X(k)) + V_i(k), where Z_i is the measurement of robot i and h_i is the relative distance between the dynamic target and robot i: h_i(X(k)) = √((x − x_i)² + (y − y_i)²), where (x_i, y_i) is the position of mobile robot i, (x, y) is the position of the dynamic target G, V_i is the robot's own measurement noise, X is the state variable, and the subscript i denotes the i-th robot, i = 1, 2, …, n.
Preferably, step 2 is to obtain the state prediction and the error covariance prediction by an extended kalman filter algorithm based on the kinematic model, so as to update the state and error covariance matrix of the target current time.
Preferably, the weighted average method is: assign the same weight to the estimates obtained by all robots observing the target, i.e. take the mean of all robots' estimates as the estimate of the target state; fuse the robots' state estimates as the optimal estimate of the current target state, and perform one-step prediction of the dynamic target G.
Preferably, step 3 specifically comprises: step 3.1, defining the perception quality J_sense as the determinant of the fused error covariance matrix P: J_sense = det(P); step 3.2, defining the coverage quality as J_cov = A_cov / A_foi, where A_cov is the area covered by at least one robot and A_foi is the total area of the monitoring range; step 3.3, defining an indicator function I_i for the detection range of robot i, with I_i = 1 when robot i can observe a point and I_i = 0 when it cannot, and determining the coverage quality J_cov by double integration of the detection-range functions I_i over the monitored area; step 3.4, solving for the minimum perception quality subject to the coverage-quality constraint by the approximate greedy algorithm.
Preferably, the coverage quality J_cov in step 3.4 is required to satisfy J_cov ≥ 0.7.
Preferably, the approximate greedy algorithm in step 3.4 plans the optimal control strategy of each robot one by one at each moment, obtaining the minimum perception quality by traversal search.
Preferably, the approximate greedy algorithm in step 3.4 specifically comprises: assume the 1st robot moves and fix the positions of the other n−1 robots; solve all feasible perception qualities J_sense that satisfy the coverage-quality requirement and select the position of the 1st robot corresponding to the optimal perception quality; keep the 1st robot at its optimal position, assume the 2nd robot moves with the other n−2 robots fixed, solve all feasible perception qualities J_sense, and select the position of the 2nd robot corresponding to the optimal perception quality; keep the 1st and 2nd robots at their optimal positions and continue traversing the remaining robots in the same way; when the n-th robot has been traversed, the minimum perception quality of the group of n robots is obtained.
The invention has the following advantageous effects:
(1) the invention can realize the real-time tracking of the dynamic target by multiple robots by predicting the dynamic target and solving the optimal perception quality;
(2) the method performs one-step prediction of the dynamic target with the approximate greedy algorithm, requiring only 5n evaluations at each moment, compared with the 5^n evaluations required by the traditional traversal search method, which greatly reduces the computational cost and improves computation speed; the target state error stays within the confidence interval, ensuring tracking accuracy;
(3) according to the invention, the optimal control strategy of each robot is planned one by one at any time through an approximate greedy algorithm, so that real-time planning can be realized, and the real-time performance of robot tracking is improved.
Drawings
FIG. 1 is a flowchart of a multi-robot dynamic target tracking cooperative control method according to an embodiment of the present invention;
FIG. 2 is an initial distribution diagram of a target real motion trajectory and a robot according to an embodiment of the present invention;
FIG. 3 is a comparison of the target real trajectory and the predicted trajectory at the mid-point of tracking in accordance with one embodiment of the present invention;
FIG. 4 is a comparison of the target real trajectory and the predicted trajectory over the whole tracking process in accordance with one embodiment of the present invention;
FIG. 5 is an x-direction position error confidence interval map of one embodiment of the present invention;
FIG. 6 is an x-direction velocity error confidence interval map of one embodiment of the present invention;
FIG. 7 is a y-direction position error confidence interval map of one embodiment of the present invention;
FIG. 8 is a y-direction velocity error confidence interval map of one embodiment of the present invention;
fig. 9 is a graph of coverage quality data at each time instant in accordance with one embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention discloses a novel multi-robot dynamic target tracking cooperative control method, which comprises a dynamic target G and n robots for observing the dynamic target G, and as shown in figure 1, the method comprises the following steps:
The dynamic target kinematic model adopts a CT model, i.e. a uniform turning motion model, described as X(k+1) = AX(k) + W(k), where k ≥ 0 is any time step and X = [x, vx, y, vy]^T is the state variable: x and y are the position of the dynamic target in a Cartesian coordinate system, and vx and vy are its velocities in the x and y directions. W is the system noise, assumed to be zero-mean Gaussian white noise with covariance matrix Q. A is the state transition matrix of the coordinated-turn model:

A = [ 1   sin(wT)/w       0   −(1−cos(wT))/w
      0   cos(wT)         0   −sin(wT)
      0   (1−cos(wT))/w   1   sin(wT)/w
      0   sin(wT)         0   cos(wT) ]

where w is the turning angular velocity and T is the sampling time.
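The CT propagation step above can be sketched as follows. The matrix entries are the standard constant-turn-rate form (the patent's rendering of A is an image and is not reproduced in the text), and the noise parameters and initial state are illustrative:

```python
import numpy as np

def ct_transition(w, T):
    """Coordinated-turn (CT) state transition matrix for state X = [x, vx, y, vy]^T.

    w: turning angular velocity (rad/s), T: sampling time (s).
    Standard constant-turn-rate form; an assumed reconstruction of the
    patent's matrix A.
    """
    s, c = np.sin(w * T), np.cos(w * T)
    return np.array([
        [1.0, s / w,       0.0, -(1 - c) / w],
        [0.0, c,           0.0, -s          ],
        [0.0, (1 - c) / w, 1.0, s / w       ],
        [0.0, s,           0.0, c           ],
    ])

# One step of X(k+1) = A X(k) + W(k), with Gaussian system noise W ~ N(0, Q).
rng = np.random.default_rng(0)
A = ct_transition(w=5.0, T=1.0)
Q = 0.01 * np.eye(4)                  # illustrative noise covariance
X = np.array([90.0, 1.0, 0.0, 1.0])   # illustrative initial [x, vx, y, vy]
X_next = A @ X + rng.multivariate_normal(np.zeros(4), Q)
```

As w → 0 the matrix degenerates to the constant-velocity model, which is a quick sanity check on the entries.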
The observation model of the target tracking system is as follows: each robot carries an ultrasonic sensor that measures the distance between the robot and the target, so the measurement model is Z_i(k) = h_i(X(k)) + V_i(k), where Z_i is the measurement of robot i and h_i is the relative distance between the dynamic target and robot i: h_i(X(k)) = √((x − x_i)² + (y − y_i)²), where (x_i, y_i) is the position of mobile robot i and (x, y) is the position of the dynamic target G. V_i is the robot's own measurement noise, assumed to be zero-mean Gaussian white noise with covariance matrix R; X is the state variable, and the subscript i denotes the i-th robot, i = 1, 2, …, n. FIG. 2 shows the initial distribution of the target's real motion trajectory and the robots.
Step 2.1, update the state X_i(k|k) and the error covariance matrix P_i(k|k) of the target at the current moment with the extended Kalman filter, specifically:
State and covariance prediction: X(k+1|k) = A X(k|k), P(k+1|k) = A P(k|k) A^T + Q, where Q is the covariance matrix of the system noise W and A is the state transition matrix.
From the covariance prediction, the Jacobian matrix H_i of the measurement function h_i, and the Gaussian white noise covariance R, solve the Kalman gain K_i: K_i = P(k+1|k) H_i^T (H_i P(k+1|k) H_i^T + R)^(−1). Substitute the Kalman gain into the state update X_i(k+1|k+1) = X(k+1|k) + K_i (Z_i(k+1) − h_i(X(k+1|k))) and the covariance update P_i(k+1|k+1) = (I − K_i H_i) P(k+1|k).
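The gain/update cycle of step 2.1 can be sketched for the range-only sensor model. The function name, the state ordering [x, vx, y, vy], and the scalar measurement noise variance are assumptions for illustration, not taken from the patent text:

```python
import numpy as np

def ekf_range_update(X_pred, P_pred, z, robot_pos, R):
    """One EKF measurement update for a single range-only measurement.

    X_pred, P_pred: predicted state and covariance from the prior step
    (X(k+1|k) = A X(k|k), P(k+1|k) = A P(k|k) A^T + Q).
    z: measured distance from robot to target; robot_pos = (xi, yi);
    R: scalar measurement noise variance. Assumes the predicted range
    is nonzero so the Jacobian is well defined.
    """
    xi, yi = robot_pos
    dx, dy = X_pred[0] - xi, X_pred[2] - yi
    h = np.hypot(dx, dy)                          # predicted range h_i(X)
    H = np.array([[dx / h, 0.0, dy / h, 0.0]])    # Jacobian of h_i w.r.t. X
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T / S                          # Kalman gain (scalar measurement)
    X_upd = X_pred + (K * (z - h)).ravel()        # state update
    P_upd = (np.eye(4) - K @ H) @ P_pred          # covariance update
    return X_upd, P_upd

# Example: robot at the origin measures range 5.0 to the target.
X_upd, P_upd = ekf_range_update(np.array([2.5, 0.0, 3.5, 0.0]),
                                np.eye(4), 5.0, (0.0, 0.0), 0.01)
```

The update pulls the estimated range toward the measurement and shrinks the covariance, as expected of a Kalman correction step.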
Step 2.2: adopt the weighted-average method of data fusion, i.e. assign each estimate obtained by a robot observing the target a weight w_i and fuse them, then use the fused information to predict the dynamic target G one step ahead. The method assigns all robots' estimates the same weight, i.e. the mean of all robots' estimates is taken as the optimal estimate of the current target state.
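The equal-weight fusion of step 2.2 is a one-liner per quantity. Averaging the covariances the same way as the states is an assumption for this sketch; the patent only specifies that the state estimates are averaged:

```python
import numpy as np

def fuse_equal_weight(states, covs):
    """Weighted-average fusion with equal weights w_i = 1/n.

    states: list of per-robot state estimates X_i(k|k).
    covs: list of the corresponding error covariance matrices P_i(k|k)
    (covariance averaging is an illustrative assumption).
    """
    n = len(states)
    return sum(states) / n, sum(covs) / n

# Two robots' estimates fused into one optimal estimate.
X_f, P_f = fuse_equal_weight(
    [np.array([1.0, 0, 1, 0]), np.array([3.0, 0, 3, 0])],
    [np.eye(4), 3 * np.eye(4)])
```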
Step 3, solve for the minimum perception quality subject to the coverage-quality constraint based on the approximate greedy algorithm.
Step 3.1, the perception quality J_sense is defined as the determinant of the fused error covariance matrix P: J_sense = det(P).
Step 3.2, when designing the network scheduling algorithm of the mobile robots, both the coverage quality and the perception quality are considered: the perception quality is optimized, and the robots' moving paths are planned, subject to the coverage-quality constraint. The coverage quality is defined as J_cov = A_cov / A_foi, where A_cov is the area covered by at least one robot (the covered area depends on the mobile robots' positions) and A_foi is the total area of the monitoring range. FIG. 9 shows the coverage quality data at each moment.
Step 3.3, let the detection range of robot i be r_i and define the indicator function I_i(x, y) = 1 if √((x − x_i)² + (y − y_i)²) ≤ r_i, and 0 otherwise; i.e. the result is 1 if robot i can observe the point (x, y) in the monitoring area, and 0 otherwise.
Therefore, based on the current positions of all robots, the coverage quality J_cov in the current state can be expressed as

J_cov = (1 / A_foi) ∬ [1 − ∏(i=1..n) (1 − I_i(x, y))] dx dy,

where the double integral is taken over the monitoring range, the integrand is 1 exactly when at least one robot covers (x, y), i = 1, 2, …, n denotes the i-th robot, and A_foi is the total area of the monitoring range.
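The double-integral coverage computation of steps 3.2-3.3 can be approximated numerically. The grid discretization and the common detection radius r for all robots are implementation choices for this sketch, not specified by the patent:

```python
import numpy as np

def coverage_quality(robot_pos, r, side=120.0, grid=200):
    """Approximate J_cov = A_cov / A_foi on a square monitoring region.

    A point (x, y) is covered when at least one indicator I_i is 1, i.e.
    its distance to some robot i is within the detection radius r. The
    double integral is approximated on a uniform grid.
    """
    xs = np.linspace(0.0, side, grid)
    X, Y = np.meshgrid(xs, xs)
    covered = np.zeros_like(X, dtype=bool)
    for (xi, yi) in robot_pos:
        covered |= np.hypot(X - xi, Y - yi) <= r   # union of the I_i = 1 regions
    return covered.mean()                           # fraction of A_foi covered

# 9 robots on a uniform 3x3 lattice, radius 20 m, as in the embodiment.
pts = [(x, y) for x in (20.0, 60.0, 100.0) for y in (20.0, 60.0, 100.0)]
J_cov = coverage_quality(pts, r=20.0)
```

With this layout the nine disks touch without overlapping, so J_cov is close to 9·π·20²/120² ≈ 0.785, comfortably above the 0.7 constraint.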
Step 3.4, solve for the minimum predicted perception quality of the target at the next moment with the approximate greedy algorithm, subject to the coverage-quality constraint of 0.7. Specifically: assume that at each moment every robot can move at the same speed in one of four directions (up, down, left, right) or stay still, so each robot has five possible control strategies per moment. This embodiment uses 9 robots; with the traditional traversal search method the 9 robots' control strategies form 5^9 ≈ 1.95 million combinations per moment, a huge amount of computation that rules out real-time planning. Therefore an approximate method plans the optimal control strategy of each robot one by one at each moment: first fix the positions of the other 8 robots, assume the first robot moves, solve all feasible perception qualities J_sense that satisfy the coverage-quality requirement, and select the position of the 1st robot corresponding to the optimal perception quality; then keep the first robot still at its optimal position, assume the 2nd robot moves with the other 7 robots still, solve each feasible perception quality, and select the position of the 2nd robot corresponding to the optimal perception quality; keep the 1st and 2nd robots at their optimal positions and continue traversing the remaining 7 robots, finally obtaining the minimum perception quality. With this method only 5 × 9 = 45 evaluations are required per moment, which greatly reduces the computational cost.
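The one-robot-at-a-time search above can be sketched generically. Here `score` and `feasible` are placeholders standing in for J_sense and the J_cov ≥ 0.7 check, which are not implemented in this sketch; the move set and function names are illustrative:

```python
import numpy as np

MOVES = [(0, 0), (0, 1), (0, -1), (-1, 0), (1, 0)]   # stay, up, down, left, right

def greedy_plan(positions, score, feasible):
    """Approximate greedy planning: 5n evaluations instead of 5^n.

    positions: list of (x, y) robot positions; score(positions) returns
    the perception quality to minimize; feasible(positions) checks the
    coverage constraint. Robots are planned one by one, each fixed at its
    best move before the next robot is considered.
    """
    pos = list(positions)
    for i in range(len(pos)):                 # plan robots one by one
        best, best_val = pos[i], np.inf
        for dx, dy in MOVES:                  # 5 candidate moves per robot
            moved = (pos[i][0] + dx, pos[i][1] + dy)
            cand = pos[:i] + [moved] + pos[i + 1:]
            if feasible(cand):
                v = score(cand)
                if v < best_val:
                    best, best_val = moved, v
        pos[i] = best                         # fix robot i at its best move
    return pos

# Toy usage: two robots each step toward a target at (5, 5); the constraint
# is trivially satisfied here, standing in for the J_cov >= 0.7 check.
planned = greedy_plan([(0, 0), (10, 10)],
                      score=lambda p: sum(np.hypot(x - 5.0, y - 5.0) for x, y in p),
                      feasible=lambda p: True)
```

Each robot is evaluated at 5 candidate positions while all others stay fixed, so the per-moment cost is linear in the number of robots rather than exponential.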
Step 4, plan the optimal robot positions for the next moment according to the minimum perception quality solved in step 3, and move the robots to realize cooperative multi-robot tracking of the dynamic target G.
The above steps are iterated once per time step.
The embodiment of the invention is implemented as follows: a simulation experiment is carried out in Matlab 2019b. The monitoring range is a 120 m × 120 m square area; 9 mobile robots are uniformly distributed at the initial moment; each robot has a circular observation area of radius 20 m and a moving speed of 1 m/s. The dynamic target starts at position (90, 0) with initial velocity (120 m/s, 230 m/s) and performs uniform turning motion at w = 5 rad/s. The total observation time is 50 s, the sampling time is 1 s, and the minimum coverage-quality requirement is 0.7. The experimental procedure is as follows: step 1, establish the generalized model of the target tracking system; step 2, estimate the target's current state, i.e. its position and velocity, from the measurements of all robots that can detect the dynamic target at the current moment, and predict its position and velocity at the next moment; step 3, according to the prediction for the next moment, select the optimal movement strategy of each robot in turn, so that the robots obtain the optimal state estimate of the target at the next moment under the coverage-quality constraint of 0.7; step 4, move the robots according to the result of step 3 and return to step 1.
Fig. 2 shows the real trajectory of the target and the initial distribution of the robots; only 1 robot can observe the moving target at the initial moment. Fig. 3 compares the target's real and predicted trajectories at the mid-point of tracking, and fig. 4 compares them over the whole tracking process. The real and predicted trajectories overlap closely, verifying the feasibility and high goodness of fit of the method. Meanwhile, because the coverage-quality requirement is enforced, the robots never all concentrate near the target, and different task requirements can be met by adjusting the coverage-quality parameter. Figs. 5-8 show the target state estimation errors. Four state variable errors E_x, E_vx, E_y, E_vy are defined as the position errors in the x and y directions and the velocity errors in the x and y directions: E_x = x_ekf − x, where x_ekf and x are the estimated and true x-positions of the target; E_vx = vx_ekf − vx, where vx_ekf and vx are the estimated and true x-velocities; E_y = y_ekf − y, where y_ekf and y are the estimated and true y-positions; and E_vy = vy_ekf − vy, where vy_ekf and vy are the estimated and true y-velocities. These four errors represent the deviation between the target's actual and estimated states; the confidence intervals correspond to the case where bias is not considered. As seen from figs. 5-8, all four state variable errors remain within the confidence intervals, verifying that the method tracks the dynamic target accurately.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (9)
1. A multi-robot dynamic target tracking cooperative control method comprising a dynamic target G and n robots observing the dynamic target G, where n is the number of robots and n ≥ 2, characterized by further comprising the steps of:
step 1, establishing a generalized model of a target tracking system of a dynamic target, wherein the generalized model comprises a kinematic model of the dynamic target and an observation model of the target tracking system; judging whether a dynamic target can be detected or not through a sensor carried by the mobile robot, if so, entering the step 2, and otherwise, entering the step 3;
step 2, measuring the distance between each robot and a dynamic target through an observation model of a target tracking system, estimating the state and error covariance matrix of the target at the current moment, performing data fusion on the state estimation of the target by the n robots by using a weighted average method to serve as the optimal estimation of the target state at the current moment, and performing one-step prediction on the dynamic target G;
step 3, solving the minimum perception quality under the condition of meeting the coverage quality based on an approximate greedy algorithm;
and 4, planning the optimal robot position at the next moment according to the minimum perception quality obtained by the solution in the step 3, and moving the robot to realize the tracking of the dynamic target G.
2. The multi-robot dynamic target tracking cooperative control method according to claim 1, characterized in that: the dynamic target kinematic model in step 1 is a CT model, described as X(k+1) = AX(k) + W(k), where X is the state variable, A is the state transition matrix, W is the system noise, and k ≥ 0 is any time step.
3. The multi-robot dynamic target tracking cooperative control method according to claim 2, characterized in that: the observation model of the target tracking system in step 1 is: each robot carries an ultrasonic sensor that measures the distance between the robot and the target, described as Z_i(k) = h_i(X(k)) + V_i(k), where Z_i is the measurement of robot i and h_i is the relative distance between the dynamic target and robot i: h_i(X(k)) = √((x − x_i)² + (y − y_i)²), where (x_i, y_i) is the position of mobile robot i, (x, y) is the position of the dynamic target G, V_i is the robot's own measurement noise, X is the state variable, and the subscript i denotes the i-th robot, i = 1, 2, …, n.
4. The multi-robot dynamic target tracking cooperative control method according to claim 3, characterized in that: and 2, solving state prediction and error covariance prediction through an extended Kalman filtering algorithm based on the kinematic model so as to update the state and error covariance matrix of the target at the current moment.
5. The multi-robot dynamic target tracking cooperative control method according to claim 4, characterized in that: the weighted average method comprises: assigning the same weight to the estimates obtained by all robots observing the target, i.e. taking the mean of all robots' estimates as the estimate of the target state; fusing the robots' state estimates as the optimal estimate of the current target state, and performing one-step prediction of the dynamic target G.
6. The multi-robot dynamic target tracking cooperative control method according to claim 1 or 5, characterized in that: the step 3 specifically comprises the following steps:
Step 3.1, defining the perception quality J_sense as the determinant of the fused error covariance matrix P: J_sense = det(P);
Step 3.2, defining the coverage quality as J_cov = A_cov / A_foi, where A_cov is the area covered by at least one robot and A_foi is the total area of the monitoring range;
Step 3.3, defining an indicator function I_i for the detection range of robot i, with I_i = 1 when robot i can observe a point and I_i = 0 when it cannot, and determining the coverage quality J_cov by double integration of the detection-range functions I_i over the monitored area;
Step 3.4, solving for the minimum perception quality subject to the coverage-quality constraint by the approximate greedy algorithm.
7. The multi-robot dynamic target tracking cooperative control method according to claim 6, characterized in that: the coverage quality J_cov in step 3.4 is required to satisfy J_cov ≥ 0.7.
8. The multi-robot dynamic target tracking cooperative control method according to claim 7, characterized in that: the approximate greedy algorithm in step 3.4 plans the optimal control strategy of each robot one by one at each moment, obtaining the minimum perception quality by traversal search.
9. The multi-robot dynamic target tracking cooperative control method according to claim 8, characterized in that: the approximate greedy algorithm in step 3.4 specifically comprises: assume the 1st robot moves and fix the positions of the other n−1 robots; solve all feasible perception qualities J_sense that satisfy the coverage-quality requirement and select the position of the 1st robot corresponding to the optimal perception quality; keep the 1st robot at its optimal position, assume the 2nd robot moves with the other n−2 robots fixed, solve all feasible perception qualities J_sense, and select the position of the 2nd robot corresponding to the optimal perception quality; keep the 1st and 2nd robots at their optimal positions and continue traversing the remaining robots in the same way; when the n-th robot has been traversed, the minimum perception quality of the group of n robots is obtained.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110980698.8A CN113433828B (en) | 2021-08-25 | 2021-08-25 | Multi-robot dynamic target tracking cooperative control method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110980698.8A CN113433828B (en) | 2021-08-25 | 2021-08-25 | Multi-robot dynamic target tracking cooperative control method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113433828A true CN113433828A (en) | 2021-09-24 |
CN113433828B CN113433828B (en) | 2022-01-18 |
Family
ID=77797840
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110980698.8A Active CN113433828B (en) | 2021-08-25 | 2021-08-25 | Multi-robot dynamic target tracking cooperative control method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113433828B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114200833A (en) * | 2021-11-24 | 2022-03-18 | 华中科技大学 | Observer-based robot network dynamic area coverage control method |
CN116872221A (en) * | 2023-09-08 | 2023-10-13 | 湖南大学 | Data driving bipartite uniform control method for multi-machine cooperative rotation large-sized workpiece |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090002489A1 (en) * | 2007-06-29 | 2009-01-01 | Fuji Xerox Co., Ltd. | Efficient tracking multiple objects through occlusion |
CN101630413A (en) * | 2009-08-14 | 2010-01-20 | Zhejiang University | Multi-robot algorithm for tracking a moving target |
CN102647726A (en) * | 2012-02-17 | 2012-08-22 | Wuxi Yingzhen Technology Co., Ltd. | Energy-balancing optimization strategy for wireless sensor network coverage |
CN107659989A (en) * | 2017-10-24 | 2018-02-02 | Southeast University | Distributed measurement and target tracking method with wireless sensor network node dormancy |
CN107703970A (en) * | 2017-11-03 | 2018-02-16 | Army Engineering University of PLA | Encirclement tracking method for unmanned aerial vehicle clusters |
CN109040969A (en) * | 2018-08-10 | 2018-12-18 | Wuhan University of Science and Technology | Optimal acquisition-point selection method for intelligent robotic cars in indoor environments |
CN109298725A (en) * | 2018-11-29 | 2019-02-01 | Chongqing University | Distributed multi-target tracking method for swarm robots based on PHD filtering |
CN111931384A (en) * | 2020-09-01 | 2020-11-13 | National University of Defense Technology | Antenna-model-based group cooperative encirclement method and storage medium |
Non-Patent Citations (2)
Title |
---|
PRATAP TOKEKAR et al.: "Multi-Target Visual Tracking With Aerial Robots", IROS 2014 * |
DU Yonghao et al.: "A survey of intelligent scheduling technologies for unmanned aerial vehicle swarms", Acta Automatica Sinica * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114200833A (en) * | 2021-11-24 | 2022-03-18 | Huazhong University of Science and Technology | Observer-based dynamic area coverage control method for robot networks |
CN114200833B (en) * | 2021-11-24 | 2024-04-12 | Huazhong University of Science and Technology | Observer-based dynamic area coverage control method for robot networks |
CN116872221A (en) * | 2023-09-08 | 2023-10-13 | Hunan University | Data-driven bipartite consensus control method for multiple machines cooperatively rotating a large workpiece |
CN116872221B (en) * | 2023-09-08 | 2023-12-05 | Hunan University | Data-driven bipartite consensus control method for multiple machines cooperatively rotating a large workpiece |
Also Published As
Publication number | Publication date |
---|---|
CN113433828B (en) | 2022-01-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113433828B (en) | Multi-robot dynamic target tracking cooperative control method | |
CN109597864B (en) | Method and system for real-time localization and map construction based on ellipsoid-bounded Kalman filtering | |
CN108594834 (en) | Adaptive target search and obstacle avoidance method for multiple AUVs in unknown environments | |
CN111913484B (en) | Path planning method of transformer substation inspection robot in unknown environment | |
CN109212476B (en) | RFID indoor positioning algorithm based on DDPG | |
CN113438596B (en) | Beidou and 5G fusion-oriented millimeter wave low-delay beamforming method | |
Mallick et al. | Out-of-sequence measurement processing for tracking ground target using particle filters | |
CN116953692A (en) | Track association method under cooperative tracking of active radar and passive radar | |
Han et al. | A multi-platform cooperative localization method based on dead reckoning and particle filtering | |
CN112800889B (en) | Target tracking method based on distributed matrix weighted fusion Gaussian filtering | |
CN114545968A (en) | Unmanned aerial vehicle cluster multi-target tracking trajectory planning method based on bearing-only positioning | |
CN113534164B (en) | Target path tracking method based on active-passive combined sonar array | |
CN114916059A (en) | WiFi fingerprint sparse map extension method based on an interval random log-shadowing model | |
Li et al. | Dynamic sensor management for multisensor multitarget tracking | |
CN109658742B (en) | Dense flight autonomous conflict resolution method based on preorder flight information | |
CN114636422A (en) | Positioning and navigation method for information machine room scene | |
Varma et al. | ReMAPP: reverse multilateration based access point positioning using multivariate regression for indoor localization in smart buildings | |
CN117788511B (en) | Multi-expansion target tracking method based on deep neural network | |
CN115190418B (en) | High-precision positioning method for police wireless local area network | |
CN114624688B (en) | Tracking and positioning method based on multi-sensor combination | |
Qin et al. | Mutual Sensor Task Assignment and Distributed Track Fusion Method for Multi-UAV Sensors | |
Xiang et al. | Target tracking via recursive bayesian state estimation in radar networks | |
CN118011324A (en) | Long baseline underwater sound positioning method based on horizontal precision factor | |
CN116304504A (en) | Sensor optimization management method based on track time function | |
CN116166026A (en) | Resilient distributed positioning and tracking control method and device for mobile robots |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||