CN110162035B - Cooperative motion method of cluster robot in scene with obstacle - Google Patents


Info

Publication number
CN110162035B
Authority
CN
China
Legal status
Active
Application number
CN201910218602.7A
Other languages
Chinese (zh)
Other versions
CN110162035A (en)
Inventor
谢志鹏
成慧
龙有炼
Current Assignee
National Sun Yat Sen University
Original Assignee
National Sun Yat Sen University
Priority date
Filing date
Publication date
Application filed by National Sun Yat Sen University
Priority to CN201910218602.7A
Publication of CN110162035A
Application granted
Publication of CN110162035B


Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0217: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with energy consumption, time reduction or distance reduction criteria
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0219: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory ensuring the processing of the whole working surface

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to the technical field of robots, in particular to a cooperative motion method for a robot cluster in a scene with obstacles. The method realizes formation change, dynamic and static obstacle avoidance, and cooperative motion of multiple robots in scenes with obstacles. It formulates cooperative control of clustered robots in a complex environment as a distributed model predictive control problem in which each robot independently solves an optimization problem using information from its local neighbors, giving the method good robustness and flexibility. The method also introduces the concept of a formation graph: the formation of the robots is represented by a directed graph, and, on the premise of preserving the graph's connectivity, local transformation of the desired formation is realized by changing the edges and edge weights of the formation graph, so that the connectivity and flexibility of the cluster are guaranteed simultaneously. In addition, the method has a hierarchical structure, which reduces the coupling between modules and facilitates algorithm design and extension.

Description

Cooperative motion method of cluster robot in scene with obstacle
Technical Field
The invention relates to the technical field of swarm intelligence and robot control, in particular to a coordinated movement method of a swarm robot in a scene with an obstacle.
Background
With the continuous development of robot technology, automated robot systems play an important role in more and more fields, such as industrial robots, household robots, and service robots. Automatic navigation of robots in structured environments has therefore become a popular research area. Compared with conventional human-operated systems, robot systems have advantages in economy, time, and safety. Compared with a single-robot system, a multi-robot system can execute tasks cooperatively and has wider application scenarios, such as exploration in dangerous environments, agricultural monitoring, and military tasks under extreme conditions. For the same complex task, a multi-robot system can complete it more efficiently and at lower cost than a single robot, and has better fault tolerance (the failure of one robot does not affect the operation of the whole system), adaptability, and flexibility. Because of these advantages, and the ever-increasing computing power of contemporary processors, multi-mobile-robot systems have attracted increasing attention from researchers at home and abroad in recent years.
Formation control refers to the control problem in which a team of robots maintains a predetermined geometry (i.e., a formation) relative to each other while moving towards a specific target or direction and accommodating environmental constraints (e.g., avoiding obstacles). Multi-robot formation control needs to solve the following problems:
(1) how the robot determines its desired position in the formation;
(2) how the robot determines its actual position in the formation;
(3) how the robot moves to maintain formation;
(4) how the robot can cope when encountering an obstacle.
In the prior art, many methods have been proposed to solve the above problems, including the navigator-follower algorithm, the virtual structure method, behavior-based methods, the artificial potential field method, and optimization-based methods. However, in scenes with obstacles the robot cluster cannot move in a fixed formation, so these methods are not fully applicable. In a constrained environment the cluster should adaptively change its desired formation according to the environmental information detected by its sensors; how to select a new formation and how to implement the formation switch is a challenging problem, all the more so for a cluster with a distributed architecture.
Disclosure of Invention
The invention provides a coordinated movement method of a cluster robot in a scene with obstacles to overcome at least one defect in the prior art, and provides a local formation transformation method for controlling the coordinated movement of the cluster robot in the scene with obstacles.
In order to solve the technical problems, the invention adopts the following technical scheme: a coordinated movement method of clustered robots in a scene with obstacles, comprising the following steps:
S1, before the robot cluster starts to move, manually setting key path points or generating a desired path with a planning algorithm for the navigator of the cluster to follow; the followers realize the cooperative motion of the whole cluster by following the navigator, and the desired formation and the relationships between robots are expressed with a formation graph;
S2, during motion, each robot detecting and sensing obstacles in the surrounding environment and state information of non-neighbor robots in real time using its onboard sensors;
S3, each robot publishing its own position and velocity information through a local area network while obtaining the position and velocity information of its neighbor robots, and changing the edges and edge weights in the formation graph according to the neighbors' states to implement mutual collision avoidance between robots;
S4, using the information obtained in steps S2 and S3 as the input of a distributed MPC (model predictive control) algorithm, solving an optimization problem to obtain a control sequence and a state sequence, selecting the first control quantity as the optimal control quantity of the robot at the current moment, and applying it to the robot to drive it towards the desired target point;
S5, repeating steps S2 to S4 until the end point is reached.
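The S1-S5 cycle above can be sketched as a receding-horizon follower loop. This is an illustrative toy, not the patent's controller: the "MPC" here is a stub that simply steps towards the desired position behind the navigator, and all names (step_toward, run_follower) are our own.

```python
import numpy as np

def step_toward(p, target, v_max=2.0, dt=0.1):
    """Stub for S4: return the first control of a (trivial) plan."""
    d = target - p
    dist = np.linalg.norm(d)
    if dist < 1e-9:
        return np.zeros_like(p)
    # Speed-limited velocity command pointing at the target.
    return d / dist * min(v_max, dist / dt)

def run_follower(p0, leader_path, desired_offset, dt=0.1):
    """S2-S5 loop: follow the navigator while keeping a desired offset."""
    p = np.asarray(p0, dtype=float)
    for leader_p in leader_path:           # S2/S3: sensed + communicated state
        target = leader_p + desired_offset # desired position from formation
        u = step_toward(p, target, dt=dt)  # S4: first control of the plan
        p = p + u * dt                     # apply the control, then repeat (S5)
    return p

# Navigator moves along the x-axis; follower keeps a 1 m lateral offset.
leader_path = [np.array([0.1 * k, 0.0]) for k in range(100)]
final = run_follower([0.0, -2.0], leader_path, desired_offset=np.array([0.0, -1.0]))
```

After an initial transient the follower locks onto the offset position and tracks the navigator exactly.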
Further, in step S1 the robot cluster is modeled using graph theory. For a cluster of n robots, a formation graph G_f = (V, E, S_f, A_f) is used to represent the cluster relationships, where V is the set of vertices representing the robots, E is the set of directed edges representing the direction of information flow between robots, S_f is the set of desired distances between robots, and A_f is the set of desired relative azimuths of the robots.
In the coordinated movement task of the clustered robots, each robot needs to determine its desired position according to the positions of its neighbors. According to the navigator-follower algorithm, a follower can determine its position in the fleet relative to other robots by two methods: a distance-orientation controller or a distance-distance controller. For a cluster of n robots, the invention uses a formation graph G_f = (V, E, S_f, A_f) to represent the cluster relationships, where V is the set of vertices representing the robots, E is the set of directed edges representing the direction of information flow between robots, S_f is the set of desired distances between robots, and A_f is the set of desired relative azimuths. The formation graph is a directed graph comprising three elements: nodes, edges, and distances. The nodes represent the individual robots, the edges represent the neighbor relations (i.e., the direction of information flow), and the distances represent the desired distances between neighbors.
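A formation graph of this kind can be held in a minimal directed-graph structure, mapping each directed edge (i, j) to its desired distance s_ij and edge weight ω_ij. The class and method names below are our own illustration, not the patent's implementation (relative azimuths are omitted for brevity):

```python
class FormationGraph:
    """Directed formation graph: edge (i, j) means robot j receives
    information from robot i and must keep the desired distance s_ij."""

    def __init__(self, n):
        self.n = n
        self.desired = {}   # (i, j) -> desired distance s_ij
        self.weights = {}   # (i, j) -> edge weight omega_ij

    def add_edge(self, i, j, s_ij, w_ij=1.0):
        self.desired[(i, j)] = s_ij
        self.weights[(i, j)] = w_ij

    def remove_edge(self, i, j):
        self.desired.pop((i, j), None)
        self.weights.pop((i, j), None)

    def in_neighbors(self, j):
        """Robots whose state the jth robot must track."""
        return [i for (i, k) in self.desired if k == j]

# A triangle formation: robot 0 is the navigator, robots 1 and 2 follow.
g = FormationGraph(3)
g.add_edge(0, 1, 1.0)
g.add_edge(0, 2, 1.0)
g.add_edge(1, 2, 1.0)
```

Local formation transformation (described later) then amounts to calling add_edge/remove_edge and adjusting the weights at run time.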
Further, the weight change of the edge in the step S3 specifically includes the following steps:
s311, adding a residual error item of the distance between the robots and the expected distance in the model prediction controller to realize formation maintenance among the robots, wherein the residual error item is expressed as:
Figure BDA0002002849230000034
fixed weight ωijThe robot can have good distance keeping performance, but the robot can lack flexibility when passing through an obstacle environment, so the invention proposes to adaptively change the formation weight omegaijA method of a parameter;
s312, for the ith robot and the jth robot, defining the safety distance margin as follows:
lij=||pi-pj||-2r
wherein p isiAnd pjRespectively representing the ith and jth robot positions, r being the radius of the robot due to hard constraints
Figure BDA0002002849230000035
Can guarantee lijAnd (3) more than or equal to 0, and deriving the time by the formula to obtain the distance change rate of the ith robot and the jth robot as follows:
Figure BDA0002002849230000036
therefore, for the jth robot, the collision time with the ith robot can be obtained as follows:
tij=lij/lij
time of collision tijDescribing the time urgency of the collision of the two robots, and the size and the direction of the collision time can be used for determining the collision condition between the two robots; when t isijWhen 0, it means lijWhen the collision between the two robots is more likely to happen, the situation that we should avoid is shown as 0; when t isij> 0, means lijWhen the distance between the two robots is larger than 0, the distance between the two robots is gradually larger, the two robots are far away, and t isijThe larger the distance between the two robots is. When t isij< 0, meaning lij< 0, this time indicating the distance margin between the two robotsAt a gradual decrease, two robots are approaching, tijThe smaller the absolute value of (a), the faster the two robots approach. So the time of collision tijDescribing the time urgency of a collision of two robots;
s313, if the ith robot and the jth robot are in a formation graph
Figure BDA0002002849230000041
Are adjacent and the collision time t of the two robotsijWhen < 0 and the absolute value is small, the corresponding weight parameter omegaijShould be increased; ω can be represented by a zero-mean Gaussian density functionijThe change of (2):
Figure BDA0002002849230000042
where k > 0 is the peak of the Gaussian density function, σ2The smaller the degree of sensitivity to collision time, the smaller the2Such that the weight parameter omegaijThe more sensitive the change in.
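Steps S311-S313 can be sketched numerically as follows. The exact weight law ω_ij = k·exp(−t_ij²/(2σ²)) with a nominal base weight is our reading of "zero-mean Gaussian density function with peak k"; the function names and default parameters are illustrative assumptions:

```python
import math
import numpy as np

def safety_margin(p_i, p_j, r):
    """l_ij = ||p_i - p_j|| - 2r (S312)."""
    return np.linalg.norm(p_i - p_j) - 2 * r

def margin_rate(p_i, p_j, v_i, v_j):
    """Time derivative of ||p_i - p_j||, i.e. of l_ij."""
    d = p_i - p_j
    return float(d @ (v_i - v_j)) / np.linalg.norm(d)

def collision_time(p_i, p_j, v_i, v_j, r):
    """t_ij = l_ij / (dl_ij/dt)."""
    return safety_margin(p_i, p_j, r) / margin_rate(p_i, p_j, v_i, v_j)

def adaptive_weight(t_ij, k=5.0, sigma=1.0, base=1.0):
    """Raise omega_ij as |t_ij| shrinks for approaching robots (t_ij < 0)."""
    if t_ij >= 0:
        return base  # moving apart: keep the nominal weight
    return base + k * math.exp(-t_ij ** 2 / (2 * sigma ** 2))

# Two robots approaching head-on: margin 2 m, closing at 2 m/s -> t_ij = -1 s.
p_i, p_j = np.array([0.0, 0.0]), np.array([3.0, 0.0])
v_i, v_j = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
t = collision_time(p_i, p_j, v_i, v_j, r=0.5)
```

The weight grows sharply as the (negative) collision time nears zero, making the distance-keeping residual dominate the controller's cost exactly when a collision is imminent.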
Further, in order to maintain the stability of the formation and reduce the frequency of weight adjustment, the adaptive weight change is triggered only when the neighbor enters the dangerous range of the robot.
Further, in the step S3, the method for local formation transformation specifically includes the following steps:
s321, defining the distortion coefficient of the distance of the jth robot as follows:
Figure BDA0002002849230000043
wherein s isij∈SfIs the expected distance of the jth robot relative to the ith robot, dijis the actual distance between the jth robot and the ith robot, when etaijwhen the distance between the two robots is equal to 0, the distance between the two robots is exactly equal to the expected distance, and when eta is equal to 0ijwhen the current position is greater than 0, the ith robot has traction to the jth robot, and when eta is greater than 0ijIf < 0, the i-th device is describedThe robot will have a repulsive force to the jth robot;
s322. if in the formation graph, there is a directed edge (v)i,vj) And (v)k,vj) meaning that the jth robot is required to maintain a desired distance from both the ith and kth robots, when ηij< 0 and ηkj>ηthresholdIn this case, the jth robot needs to delete the directed edge (v)k,vj) The requirement of keeping a desired distance from the kth robot is removed, and the motion constraint on the jth robot is relaxed, so that the jth robot can find a feasible solution in a constraint environment more easily;
s323, when the jth robot observes a non-neighbor robot in a perception range through a sensor, if the jth robot observes the jth robot and the jth robot is not a neighbor member of the jth robot in the formation graph, the collision time t obtained through observation calculation is setuj< 0, which means that for the jth robot, which was not previously a neighbor to it, there is a robot approaching it, in which case the jth robot is kept a distance from the uth robot to avoid collision, there will be a directed edge (v)u,vj) Adding the data to the original formation topological structure chart and correspondingly setting the expected distance sujAnd adding the distance data into the original expected distance set, thereby changing the original formation diagram of the clustered robots.
The method realizes formation change, dynamic and static obstacle avoidance, and cooperative motion of multiple robots in scenes with obstacles. It formulates cooperative control of clustered robots in a complex environment as a distributed model predictive control problem in which each robot independently solves an optimization problem using information from its local neighbors, giving the method good robustness and flexibility. The method also introduces the concept of a formation graph: the formation of the robots is represented by a directed graph, and, on the premise of preserving the graph's connectivity, local transformation of the desired formation is realized by changing the edges and edge weights of the formation graph, so that the connectivity and flexibility of the cluster are guaranteed simultaneously. In addition, the method has a hierarchical structure consisting of a path planning layer, a formation decision layer, and a formation keeping layer; this hierarchical structure reduces the coupling between modules and facilitates algorithm design and extension.
Compared with the prior art, the beneficial effects of the invention are:
1. The single-robot path planning algorithm is extended to a multi-robot system: a navigator is designated within the robot cluster to perform path planning and navigation, while the remaining robots (followers) perform formation keeping and obstacle avoidance, thereby realizing path planning and navigation for the whole cluster. The path planning algorithm is thus decoupled from the formation control algorithm, so any of the various single-robot path planning algorithms can be adopted;
2. A hierarchical architecture is designed: in a constrained environment the robot cluster needs to adaptively change its formation according to the environment; the hierarchical architecture reduces the coupling between modules and facilitates algorithm design and extension;
3. An adaptive formation transformation algorithm is provided: each robot dynamically updates the edges and edge weights in the formation graph in real time according to the state information of its local neighbors, realizing local transformation of the formation without involving a transformation of the whole desired formation.
Drawings
FIG. 1 is an overall process flow diagram of the present invention.
Fig. 2 is a schematic diagram of a formation diagram of the present invention.
Fig. 3 is a flow chart of the partial formation transformation method of the present invention.
Fig. 4 is an effect diagram of the partial formation transformation method of the present invention.
Detailed Description
The drawings are for illustration purposes only and are not to be construed as limiting the invention; for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted. The positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the invention.
As shown in fig. 1, a coordinated movement method of clustered robots in a complex scene includes the following steps:
step 1, before a robot cluster starts to move, a key path point needs to be manually given or an expected path is drawn through a planning and calculation rule so as to be followed by a pilot of the cluster, and a follower realizes the cooperative motion of the whole cluster through the following of the pilot.
In the coordinated movement task of the clustered robots, each robot needs to determine its desired position according to the positions of its neighbors. The robot cluster is modeled by graph theory: for a cluster of n robots, a formation graph G_f = (V, E, S_f, A_f) is used to represent the cluster relationships, where V is the set of vertices representing the robots, E is the set of directed edges representing the direction of information flow between robots, S_f is the set of desired distances between robots, and A_f is the set of desired relative azimuths, as shown in fig. 2.
Step 2, during motion, the robot detects and senses the obstacles around it and the state information of non-neighbor robots in real time using its onboard sensors;
Step 3, each robot publishes its own position and velocity information through local area network communication while obtaining the position and velocity information of its neighbor robots, and changes the edges and edge weights in the formation graph according to the neighbors' states;
wherein, the weight change of the edge specifically comprises the following steps:
s311, adding a residual error item of the distance between the robots and the expected distance in the model prediction controller to realize formation maintenance among the robots, wherein the residual error item is expressed as:
Figure BDA0002002849230000065
fixed weight ωijThe robot can have good distance keeping performance, but the robot can lack flexibility when passing through an obstacle environment, so the invention proposes to adaptively change the formation weight omegaijA method of a parameter;
s312, for the ith robot and the jth robot, defining the safety distance margin as follows:
lij=||pi-pj||-2r
wherein p isiAnd pjRespectively representing the ith and jth robot positions, r being the radius of the robot due to hard constraints
Figure BDA0002002849230000066
Can guarantee lijAnd (3) more than or equal to 0, and deriving the time by the formula to obtain the distance change rate of the ith robot and the jth robot as follows:
Figure BDA0002002849230000071
therefore, for the jth robot, the collision time with the ith robot can be obtained as follows:
tij=lij/lij
time of collision tijDescribing the time urgency of the collision of the two robots, and the size and the direction of the collision time can be used for determining the collision condition between the two robots; when t isijWhen 0, it means lijWhen the collision between the two robots is more likely to happen, the situation that we should avoid is shown as 0; when t isij> 0, means lijIs greater than 0, which indicates that the distance margin between the two robots is gradually increased,two robots are moving away, tijThe larger the distance between the two robots is. When t isij< 0, meaning lij< 0, this time it means that the distance margin between the two robots is gradually decreasing, the two robots are approaching, tijThe smaller the absolute value of (a), the faster the two robots approach. So the time of collision tijDescribing the time urgency of a collision of two robots;
s313, if the ith robot and the jth robot are in a formation graph
Figure BDA0002002849230000072
Are adjacent and the collision time t of the two robotsijWhen < 0 and the absolute value is small, the corresponding weight parameter omegaijShould be increased; ω can be represented by a zero-mean Gaussian density functionijThe change of (2):
Figure BDA0002002849230000073
where k > 0 is the peak of the Gaussian density function, σ2The smaller the degree of sensitivity to collision time, the smaller the2Such that the weight parameter omegaijThe more sensitive the change in.
In order to maintain the stability of the formation and reduce the frequency of weight adjustment, a trigger condition is set for the adaptive weight transformation: the adaptive weight change is triggered only when a neighbor enters the dangerous range of the robot (dangerous range > safe range > radius).
In addition, the method for local formation transformation specifically comprises the following steps:
s321, defining the distortion coefficient of the distance of the jth robot as follows:
Figure BDA0002002849230000074
wherein s isij∈SfIs the expected distance of the jth robot relative to the ith robot, dijIs the j robot and the i machineactual distance of the robot, whenijwhen the distance between the two robots is equal to 0, the distance between the two robots is exactly equal to the expected distance, and when eta is equal to 0ijwhen the current position is greater than 0, the ith robot has traction to the jth robot, and when eta is greater than 0ijIf the number is less than 0, the ith robot can have repulsive force to the jth robot;
s322. if in the formation graph, there is a directed edge (v)i,vj) And (v)k,vj) meaning that the jth robot is required to maintain a desired distance from both the ith and kth robots, when ηij< 0 and ηkj>ηthresholdIn this case, the jth robot needs to delete the directed edge (v)k,vj) The requirement of keeping a desired distance from the kth robot is removed, and the motion constraint on the jth robot is relaxed, so that the jth robot can find a feasible solution in a constraint environment more easily;
s323, when the jth robot observes a non-neighbor robot in a perception range through a sensor, if the jth robot observes the jth robot and the jth robot is not a neighbor member of the jth robot in the formation graph, the collision time t obtained through observation calculation is setuj< 0, which means that for the jth robot, which was not previously a neighbor to it, there is a robot approaching it, in which case the jth robot is kept a distance from the uth robot to avoid collision, there will be a directed edge (v)u,vj) Adding the data to the original formation topological structure chart and correspondingly setting the expected distance sujAnd adding the distance data into the original expected distance set, thereby changing the original formation diagram of the clustered robots.
Step 4, take the information obtained in steps 2 and 3 as the input of the distributed MPC algorithm, obtain a control sequence and a state sequence by solving the optimization problem, select the first control quantity as the optimal control quantity of the robot at the current moment, and apply it to the robot to drive it to the desired target point. The overall flow is shown in fig. 3, and the implementation effect is shown in fig. 4.
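The "solve, then apply only the first control" pattern of this step can be illustrated with a toy receding-horizon solver. This is not the patent's optimizer: here the "solver" merely enumerates a few constant candidate control sequences and scores them with a quadratic tracking cost, which is enough to show why only the first control of the winning sequence is applied each cycle:

```python
import numpy as np

def rollout_cost(p, u_seq, target, dt=0.1):
    """Quadratic tracking cost of a control sequence under point-mass dynamics."""
    cost, q = 0.0, np.array(p, dtype=float)
    for u in u_seq:
        q = q + np.asarray(u) * dt
        cost += float(np.linalg.norm(q - target) ** 2)
    return cost

def mpc_first_control(p, target, horizon=5, dt=0.1):
    """Enumerate constant candidate sequences, pick the cheapest, and return
    only its first control (receding-horizon principle)."""
    candidates = [np.full((horizon, 2), [ux, uy])
                  for ux in (-1.0, 0.0, 1.0) for uy in (-1.0, 0.0, 1.0)]
    best = min(candidates, key=lambda seq: rollout_cost(p, seq, target, dt))
    return best[0]  # only the first control is applied; the rest is replanned

u0 = mpc_first_control(p=[0.0, 0.0], target=np.array([2.0, 0.0]))
```

At the next sampling instant the sensing and communication of steps 2 and 3 refresh the inputs and the optimization is solved again, which is what makes the scheme robust to moving obstacles and neighbor changes.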
Step 5, repeat steps 2 to 4 until the end point is reached.
It should be understood that the above-described embodiments are merely examples for clearly illustrating the invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principle of the invention shall be included in the protection scope of the claims of the invention.

Claims (4)

1. A coordinated movement method of clustered robots in a scene with obstacles, characterized by comprising the following steps:
S1, before the robot cluster starts to move, manually setting key path points or generating a desired path with a planning algorithm for the navigator of the cluster to follow, the followers realizing the cooperative motion of the whole cluster by following the navigator, and expressing the desired formation and the relationships between robots with a formation graph;
S2, during motion, each robot detecting and sensing obstacles in the surrounding environment and state information of non-neighbor robots in real time using its onboard sensors;
S3, each robot publishing its own position and velocity information through a local area network while obtaining the position and velocity information of its neighbor robots, and changing the edges and edge weights in the formation graph according to the neighbors' states to implement mutual collision avoidance between robots; wherein the weight change specifically comprises the following steps:
s311, adding a residual error item of the distance between the robots and the expected distance in the model prediction controller to realize formation maintenance among the robots, wherein the residual error item is expressed as:
Figure FDA0002438119700000011
s312, for the ith robot and the jth robot, defining the safety distance margin as follows:
lij=||pi-pj||-2r
wherein p isiAnd pjRespectively representing the ith and jth robot positions, r being the radius of the robot due to hard constraints
Figure FDA0002438119700000012
Can guarantee lijAnd (3) more than or equal to 0, and deriving the time by the formula to obtain the distance change rate of the ith robot and the jth robot as follows:
Figure FDA0002438119700000013
therefore, for the jth robot, the collision time with the ith robot is obtained as:

t_ij = l_ij / (dl_ij/dt)
time of collision tijDescribing the time urgency of the collision of the two robots, and the size and the direction of the collision time can be used for determining the collision condition between the two robots;
s313, if the ith robot and the jth robot are in a formation graph
Figure FDA0002438119700000015
Are adjacent and the collision time t of the two robotsijWhen < 0 and the absolute value is small, the corresponding weight parameter omegaijShould be increased; ω can be represented by a zero-mean Gaussian density functionijThe change of (2):
Figure FDA0002438119700000021
where k > 0 is the peak of the Gaussian density function, σ2Indicating sensitivity to time of impactDegree, smaller σ2Such that the weight parameter omegaijThe more sensitive the change in (c);
s4, using the information obtained in the steps S2 and S3 as input of a distributed MPC algorithm, obtaining a control sequence and a state sequence by solving an optimization problem, selecting a first control quantity as an optimal control quantity of the robot at the current moment, inputting the optimal control quantity and acting the optimal control quantity on the robot, and driving the robot to reach an expected target point;
s5, repeating the steps S2 to S4 until reaching the end point.
2. The method as claimed in claim 1, wherein in step S1 the robot cluster is modeled using graph theory: for a cluster of n robots, a formation graph G_f = (V, E, S_f, A_f) is used to represent the cluster relationships, where V is the set of vertices representing the robots, E is the set of directed edges representing the direction of information flow between robots, S_f is the set of desired distances between robots, and A_f is the set of desired relative azimuths of the robots.
3. The cooperative movement method of clustered robots in a scene with obstacles as claimed in claim 2, wherein, in order to maintain the stability of the formation and reduce the frequency of weight adjustment, the adaptive weight change is triggered only when a neighbor enters the dangerous range of the robot.
4. The coordinated movement method of clustered robots in a scene with obstacles according to claim 3, wherein the step of S3, changing the edges in the formation graph according to the states of the neighbors specifically comprises the following steps:
S321, defining the distance distortion coefficient of the jth robot as:

η_ij = (d_ij − s_ij) / s_ij

where s_ij ∈ S_f is the expected distance of the jth robot relative to the ith robot, and d_ij is the actual distance between the jth robot and the ith robot; when η_ij = 0, the distance between the two robots is exactly equal to the expected distance; when η_ij > 0, the ith robot exerts traction on the jth robot; and when η_ij < 0, the ith robot exerts repulsion on the jth robot;
S322, if the formation graph contains the directed edges (v_i, v_j) and (v_k, v_j), meaning that the jth robot is required to maintain a desired distance from both the ith and kth robots, then when η_ij < 0 and η_kj > η_threshold, the jth robot deletes the directed edge (v_k, v_j), removing the requirement of keeping a desired distance from the kth robot; this relaxes the motion constraints on the jth robot, so that it can more easily find a feasible solution in a constrained environment;
S323, when the jth robot observes a non-neighbor robot within its perception range through its sensor, i.e. it observes a uth robot that is not a neighbor member of the jth robot in the formation graph, and the collision time t_uj computed from the observation satisfies t_uj < 0, this means that a robot that was not previously a neighbor member of the jth robot is approaching; in this case, to keep the jth robot away from the uth robot and avoid a collision, the edge (v_u, v_j) is added to the original formation topology graph and the corresponding expected distance s_uj is added to the original expected distance set, thereby changing the original formation graph of the clustered robots.
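Steps S321 and S322 can be sketched as follows, assuming the distortion coefficient is the normalized difference between the actual and expected distance — a reconstruction consistent with the sign conventions the claim describes (zero at the expected distance, positive for traction, negative for repulsion). The data structures are illustrative.

```python
def distance_distortion(s_ij, d_ij):
    """Distortion coefficient of step S321 (reconstructed form):
    0 when d_ij == s_ij, > 0 when the neighbor pulls (traction),
    < 0 when the neighbor pushes (repulsion)."""
    return (d_ij - s_ij) / s_ij

def relax_constraints(graph_edges, eta, j, threshold):
    """Step S322 sketch: if robot j is repelled along some incoming
    edge (eta < 0) while another incoming edge is badly stretched
    (eta > threshold), drop the stretched edges so robot j has a
    feasible motion in the constrained environment.
    graph_edges is a set of directed edges (i, j); eta maps each
    edge to its distortion coefficient."""
    stretched = [e for e in list(graph_edges)
                 if e[1] == j and eta.get(e, 0.0) > threshold]
    repelled = any(e[1] == j and eta.get(e, 0.0) < 0.0 for e in graph_edges)
    if repelled:
        for e in stretched:
            graph_edges.discard(e)
    return graph_edges
```

Step S323 is the symmetric operation: when an approaching non-neighbor is detected (t_uj < 0), a new edge and desired distance are added instead of removed.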
CN201910218602.7A 2019-03-21 2019-03-21 Cooperative motion method of cluster robot in scene with obstacle Active CN110162035B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910218602.7A CN110162035B (en) 2019-03-21 2019-03-21 Cooperative motion method of cluster robot in scene with obstacle

Publications (2)

Publication Number Publication Date
CN110162035A CN110162035A (en) 2019-08-23
CN110162035B true CN110162035B (en) 2020-09-18

Family

ID=67638411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910218602.7A Active CN110162035B (en) 2019-03-21 2019-03-21 Cooperative motion method of cluster robot in scene with obstacle

Country Status (1)

Country Link
CN (1) CN110162035B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110737263B (en) * 2019-11-21 2023-04-07 中科探海(苏州)海洋科技有限责任公司 Multi-robot formation control method based on artificial immunity
CN110908384B (en) * 2019-12-05 2022-09-23 中山大学 Formation navigation method for distributed multi-robot collaborative unknown random maze
CN111113417B (en) * 2019-12-25 2021-10-29 广东省智能制造研究所 Distributed multi-robot cooperative motion control method and system
CN111830982A (en) * 2020-07-16 2020-10-27 陕西理工大学 Mobile robot formation and obstacle avoidance control method
CN111844038B (en) * 2020-07-23 2022-01-07 炬星科技(深圳)有限公司 Robot motion information identification method, obstacle avoidance robot and obstacle avoidance system
CN114578842B (en) * 2020-11-30 2023-04-21 南京理工大学 Collaborative path planning method for unmanned vehicle-mounted rotor unmanned aerial vehicle cluster reconnaissance
CN112965482B (en) * 2021-02-01 2023-03-10 广东省科学院智能制造研究所 Multi-robot motion collision avoidance control method and system
CN113110429B (en) * 2021-04-02 2022-07-05 北京理工大学 Minimum lasting formation generation and control method of multi-robot system under visual field constraint

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104898663A (en) * 2015-04-08 2015-09-09 华东交通大学 Distributed multi-robot containment collision prevention control method
KR20170016684A (en) * 2015-08-04 2017-02-14 창원대학교 산학협력단 The unmanned air vehicle for castaway tracking
CN107807671A (en) * 2017-11-27 2018-03-16 中国人民解放军陆军工程大学 Unmanned plane cluster danger bypassing method
CN108549407A (en) * 2018-05-23 2018-09-18 哈尔滨工业大学(威海) A kind of control algolithm of multiple no-manned plane collaboration formation avoidance
CN108919835A (en) * 2018-09-25 2018-11-30 北京航空航天大学 Control method, device and the controller that unmanned vehicle is formed into columns

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103995539B (en) * 2014-05-15 2016-04-20 北京航空航天大学 A kind of unmanned plane autonomous formation evaluation index and MPC formation control method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on UAV Formation Flight Control Based on Quadrotor Aircraft; Li Hui; China Master's Theses Full-text Database, Engineering Science and Technology II; China Academic Journals (CD Edition) Electronic Publishing House; 20180515 (No. 5); pp. 43-49 *
Formation Control Method for Multiple Wheeled Mobile Robots Based on a Leader-Follower Strategy; Jia Ruiming; China Master's Theses Full-text Database, Information Science and Technology; China Academic Journals (CD Edition) Electronic Publishing House; 20190215 (No. 2); pp. 1-57 *
Research on Path Planning and Formation Control Technology for Multi-UAV Formations; Shao Zhuang; China Doctoral Dissertations Full-text Database, Engineering Science and Technology II; China Academic Journals (CD Edition) Electronic Publishing House; 20190115 (No. 1); pp. 1-167 *

Also Published As

Publication number Publication date
CN110162035A (en) 2019-08-23

Similar Documents

Publication Publication Date Title
CN110162035B (en) Cooperative motion method of cluster robot in scene with obstacle
Wang et al. A multilayer path planner for a USV under complex marine environments
Wen et al. Formation control with obstacle avoidance of second-order multi-agent systems under directed communication topology
Vidal et al. Probabilistic pursuit-evasion games: theory, implementation, and experimental evaluation
Zhang et al. Spill detection and perimeter surveillance via distributed swarming agents
Zhang et al. Collective behavior coordination with predictive mechanisms
Wang et al. Autonomous flights in dynamic environments with onboard vision
Xu et al. Two-layer distributed hybrid affine formation control of networked Euler–Lagrange systems
Roca et al. Emergent behaviors in the internet of things: The ultimate ultra-large-scale system
Botteghi et al. On reward shaping for mobile robot navigation: A reinforcement learning and SLAM based approach
CN113110478A (en) Method, system and storage medium for multi-robot motion planning
Liang et al. Bio-inspired self-organized cooperative control consensus for crowded UUV swarm based on adaptive dynamic interaction topology
CN115993781B (en) Network attack resistant unmanned cluster system cooperative control method, terminal and storage medium
Wu et al. Vision-based target detection and tracking system for a quadcopter
CN110908384B (en) Formation navigation method for distributed multi-robot collaborative unknown random maze
CN113759935B (en) Intelligent group formation mobile control method based on fuzzy logic
Li et al. Vg-swarm: A vision-based gene regulation network for uavs swarm behavior emergence
Sai et al. A comprehensive survey on artificial intelligence for unmanned aerial vehicles
CN111176324B (en) Method for avoiding dynamic obstacle by multi-unmanned aerial vehicle distributed collaborative formation
Yang et al. Complete coverage path planning based on bioinspired neural network and pedestrian location prediction
Sakthitharan et al. Establishing an emergency communication network and optimal path using multiple autonomous rover robots
Liang et al. Distributed cooperative control based on dynamic following interaction mechanism for UUV swarm
Wang et al. Cooperative control of robotic swarm based on self-organized method and human swarm interaction
Guruprasad et al. Autonomous UAV Object Avoidance with Floyd–Warshall Differential Evolution (FWDE) approach
Martínez Review of flocking organization strategies for robot swarms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant