CN115629600B - Multi-machine collaborative trapping method based on buffered Voronoi diagram in complex dynamic security environment - Google Patents

Multi-machine collaborative trapping method based on buffered Voronoi diagram in complex dynamic security environment

Info

Publication number
CN115629600B
CN115629600B CN202210918432.5A
Authority
CN
China
Prior art keywords
trapping
robot
robots
obstacle
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210918432.5A
Other languages
Chinese (zh)
Other versions
CN115629600A (en)
Inventor
周萌
王子豪
王晶
王昶
史运涛
董哲
翟维枫
薛同来
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North China University of Technology
Original Assignee
North China University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North China University of Technology filed Critical North China University of Technology
Priority to CN202210918432.5A priority Critical patent/CN115629600B/en
Publication of CN115629600A publication Critical patent/CN115629600A/en
Application granted granted Critical
Publication of CN115629600B publication Critical patent/CN115629600B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The application discloses a multi-robot distributed collaborative trapping method for complex dynamic security environments. First, an obstacle-avoidance strategy based on the buffered Voronoi diagram is proposed: the boundary weight between a robot and an obstacle is updated dynamically so that the robot's buffered Voronoi safety region is tangent to, but does not intersect, the obstacle. Each robot plans its control actions inside its own buffered Voronoi safety region, avoiding collisions with obstacles and with other robots. Second, trapping points are distributed evenly around each suspicious robot according to the number of trapping robots, and, based on the Hungarian algorithm, optimal task matching between all trapping robots and trapping points is realized under the shortest-distance principle. Finally, two designs of the trapping control law are provided according to the real-time distance between a trapping robot and the obstacles, improving trapping capability and optimizing trapping time.

Description

Multi-machine collaborative trapping method based on buffered Voronoi diagram in complex dynamic security environment
Technical Field
The application relates to the field of multi-robot collaborative trapping strategies in a complex dynamic environment, in particular to a multi-robot collaborative trapping method applied to a security environment.
Background
The safety problem not only determines whether enterprise production proceeds smoothly, but also bears on the harmony and stability of society as a whole. Most security systems in common use today combine fixed cameras with manual patrols. Although this mode is easy to roll out gradually, it is costly and leaves large security blind areas; relying on manual patrols, it is difficult to achieve all-weather inspection without omissions, and emergencies cannot be handled in real time.
In recent years, with the rapid development of robot technology, security robots have been widely used in the security industry; using security robots to assist or replace security personnel will inevitably raise the overall level of the industry.
A security system must have strong emergency-handling capability: when an emergency occurs, responding quickly and readjusting the security policy to improve the timeliness of the system is one of the key indexes for evaluating security capability. Multi-machine collaborative trapping is a very important problem in the field of intelligent security; it refers to the process by which multiple unmanned platforms, using certain technical means, identify, track, and finally capture suspicious targets in a specific task environment. For example, when robots perform security operations in sensitive areas such as airports, public buildings, and military administrative zones, and a suspicious target suddenly appears, achieving rapid multi-robot collaborative trapping in a complex dynamic environment is important for ensuring the security of key areas. Benda et al. first posed the problem of multi-machine collaborative trapping based on a grid model in a known environment. Vidal et al. combined environment exploration and trapping into a whole and proposed an unmanned aerial vehicle and unmanned ground vehicle cooperative trapping strategy with a distributed hierarchical structure. Cao et al. proposed a multi-underwater-robot target finding and trapping algorithm based on deep-reinforcement-learning dynamic prediction of moving target trajectories. Huang Tianyun et al., inspired by wolf-pack hunting, proposed a cooperative hunting strategy based on loose preference rules following a self-organization idea. Alyssa Pierson proposed a Voronoi-diagram-based distributed algorithm for cooperatively tracking multiple suspicious robots with multiple robots in a bounded convex environment; the algorithm suits applications such as intercepting an intruding unmanned aerial vehicle in a protected airspace.
The trackers do not know the evaders' strategy, yet capture all suspicious robots in finite time using a global Voronoi-based "area minimization" strategy. However, this control strategy does not consider obstacle avoidance when obstacles are present, and therefore has great limitations in practical applications. Multi-security-robot collaborative trapping in complex dynamic environments therefore has important research significance and value.
Disclosure of Invention
The application aims to solve the problem of multi-robot collaborative trapping in a complex dynamic security environment, and provides a multi-robot collaborative trapping strategy based on the buffered Voronoi diagram. First, the robots in a known environment are divided into suspicious robots and trapping robots. Second, based on the construction principle of the buffered Voronoi diagram and given the position information of the robots and of the obstacles, an obstacle-free safe perception space is generated for each robot; the region accounts for each robot's physical radius, and the boundaries facing surrounding obstacles are dynamically weighted so that the region is tangent to the obstacles. Within each sampling time a robot moves only inside its own safe perception space, so collisions with obstacles are avoided and dynamic obstacle avoidance in the complex security environment is realized. Trapping points are then set equidistantly around each suspicious robot, and the trapping robots are allocated to the trapping points around the suspicious robots according to the shortest-global-distance principle. Finally, the control actions of the trapping robots recursively track the corresponding trapping points while avoiding obstacles, until either all trapping robots reach the trapping points around the suspicious robot or the suspicious robot is cornered on the boundary, realizing collaborative trapping of the suspicious robot.
The application provides a multi-robot collaborative trapping strategy based on the buffered Voronoi diagram, applied to a complex dynamic security environment, comprising the following steps:
Step one: a buffered Voronoi region is constructed according to the position information of the trapping robots, the suspicious target, and the obstacles, as shown in fig. 2; this region is defined as the robot's safe activity area, and it is guaranteed to contain no obstacle.
Step two: when a suspicious target appears in the environment, trapping points that can uniformly surround it are generated around the suspicious target according to the numbers of trapping robots and suspicious robots. Next, real-time task allocation is performed with the Hungarian algorithm according to the distances between the trapping robots and the trapping points: each robot is reasonably assigned a different trapping point to track, so that an optimal global collaborative trapping strategy is achieved and all suspicious targets are trapped quickly, as shown in fig. 3.
Step three: for a group of robots, after the task of each trapping robot is determined, the robots track the trapping points near the suspicious robot in a decentralized manner while avoiding collisions with the other trapping robots; that is, each trapping robot tracks, within its own safety area, the trapping point closest to the suspicious target. The application designs the trapping controller according to the positional relation between the trapping robot and the obstacles, based on the buffered Voronoi region. On the one hand, within each robot's buffered Voronoi safety region, the motion positions at the next moment are determined to be collision-free. On the other hand, when the target point comes close to an obstacle while the robot is travelling, the point rotates along the tangent of the obstacle and thus moves away from it; the trapping robot then also drives away from the obstacle while following the target point, meeting the robot's obstacle-avoidance requirement. When the trapping robot is far from the surrounding obstacles, it moves directly toward its trapping point.
The application relates to a multi-robot collaborative trapping method applied to a security environment, which comprises the following steps:
step 1: and determining the self-induction range of the robot by giving the position information of the trapping robot, the suspicious robot and the convex polygonal obstacle in the known map.
Step 2: according to the position information of the trapping robot and the suspicious target, a maximum boundary classifier (Maximal Margin Classifier, MMC) method is adopted to calculate the hyperplane for separating the trapping robot and the obstacle in the perception range, and the hyperplane is a straight line on the two-dimensional plane, so that the straight line for dividing the trapping robot and the obstacle in the two-dimensional map is the buffer Veno area boundary adjacent to each other, as shown in fig. 4.
Step 3: and according to the position information of the trapping robot and other robots in the sensing range, solving a hyperplane for separating two different types of sample points based on the concept of a linear separator, namely, the hyperplane between two coordinate points of the trapping robot and the other robots in the sensing range. The hyperplane represents the buffer voronoi region boundary of the trapping robot adjacent to other robots, as shown in fig. 5.
Step 4: and (3) intersecting the boundaries in the steps (2) and (3) by combining the self-induction range of the robot, wherein the obtained intersection point is a vertex of a Veno buffer area, and a closed area surrounded by the vertex is the Veno buffer area, as shown in fig. 6.
According to the multi-robot trapping method applied to the security environment, in the second step, the method comprises the following further steps:
step 1: in order to prevent the occurrence of the escape of the suspicious robots, evenly distributed trapping points p are generated around the suspicious robots according to the number of trapping robots and the trapping radius i ∈{1,2...n}。
Step 2: task allocation is performed on the trapping robots, and each trapping robot designates its own trapping point as a tracking target, as shown in fig. 7, and the detailed procedure is as follows.
(1) The distances between each trapping robot and each trapping point are calculated, a distance table shown in table 1 is generated, and a real-time distance matrix can be further generated according to table 1.
TABLE 1 distance Meter between each trapping robot and each trapping point
Wherein d i,j Representing the distance between the trapping robot i and the trapping point j.
(2) And performing matrix transformation on the real-time distance matrix until at least one 0 element exists in different rows and different columns. The row and column where these 0 elements are located represents the task allocation of the trapping robot i to the trapping point j. For example when all elements of row i are d only i,2 =0, indicating the ith enclosure robot scoreTo the task of tracking the catch point 2.
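The outcome of the matrix transformation in (2), a minimum-total-distance matching of robots to trapping points, can be checked against a brute-force search. This is only an illustration of the assignment result (exhaustive search stands in here for the Hungarian algorithm, which scales far better); the function name is an assumption:

```python
from itertools import permutations

def optimal_assignment(dist):
    # dist[i][j] = distance between trapping robot i and trapping point j.
    # Returns a list `a` with a[i] = trapping point assigned to robot i,
    # minimising the total distance (exhaustive search, fine for small n).
    n = len(dist)
    best = min(permutations(range(n)),
               key=lambda p: sum(dist[i][p[i]] for i in range(n)))
    return list(best)

# Three robots, three trapping points: robot 0 -> point 1, robot 1 -> point 0,
# robot 2 -> point 2 gives the minimum total distance 1 + 2 + 2 = 5.
D = [[4, 1, 3],
     [2, 0, 5],
     [3, 2, 2]]
assignment = optimal_assignment(D)
```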
According to the multi-robot trapping method applied to the security environment, in the third step, the method comprises the following further steps:
step 1: when the trapping robot is distributed with trapping tasks, a combined trapping strategy is adopted after a tracking target is determined. When the robot is close to an obstacle, the trapping robot adopts a trapping algorithm based on a buffer voronoi diagram. The robot calculates its buffered voronoi safety region within each sampling time and finds the closest target point to the trapping point inside the safety region The detailed solving steps of (a) are as follows:
(1) robot buffer voronoi safety zoneIs a convex polygon, wherein ε and e represent the buffer Veno safety region +.>Is defined as the edge and vertex of the (c). Calculating a trapping point p of the trapping robot i With its buffered vitamin safety region->The interior angle sum θ of all vertices e.
(2) If the robot is caught at the catching point p i And if the sum of the internal angles and the sum of the internal angles of all vertexes of the buffer Veno safety area are equal to 2 pi, the trapping point p of the trapping robot is represented i Inside the buffered voronoi safe region. Namely, the robot is captured from a capturing point p in a safety area of the robot i The nearest target pointAnd the trapping point p i Coincide with (I)>
(3) If the robot is caught at the catching point p i And if the sum of the internal angles theta of all vertexes in the buffer Veno safety area is not equal to 2 pi, the trapping point p of the trapping robot is represented i Outside the buffered voronoi safe region. By calculating lambda i To judge the nearest point from the target in the trapping robotIs in the robot buffering Veno safety area +.>Is also in the region of the buffer Veno security>One of the apexes e i And (5) overlapping.
(4) When lambda is i ∈[0,1]The trapping point p away from the target inside the trapping robot i The nearest pointBuffering a voronoi safety region epsilon in a robot i Edge, calculate edge ε i Upward from the point of capture p i Nearest point g p
(5) When (when)Can determine +.>Damping a voronoi safety region epsilon with a robot i One of the two vertices on the edge coincides. Then calculate the trapping points p respectively i And the side epsilon i Distance between two vertices of (a)>Coinciding with the vertex closest thereto.
(6) Synchronously calculating the target points which are closest to the target and are positioned in the i buffer Veno safety zone of other trapping robots according to the steps (1) - (5)
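The interior-angle test of steps (1) and (2) is a winding-number check: seen from a point inside a convex polygon, the edges sweep a full turn of 2π. A minimal sketch under that interpretation (function names assumed, polygon given as a counter-clockwise vertex list):

```python
import math

def angle_sum(p, verts):
    # Sum of the signed angles subtended at p by each polygon edge
    # (g1, g2); equals 2*pi when p lies inside the convex region.
    theta = 0.0
    n = len(verts)
    for k in range(n):
        g1, g2 = verts[k], verts[(k + 1) % n]
        a1 = math.atan2(g1[1] - p[1], g1[0] - p[0])
        a2 = math.atan2(g2[1] - p[1], g2[0] - p[0])
        d = a2 - a1
        while d > math.pi:            # wrap the angle difference
            d -= 2 * math.pi
        while d < -math.pi:
            d += 2 * math.pi
        theta += d
    return theta

def trapping_point_inside(p, verts):
    # Step (2): theta == 2*pi  ->  p lies inside, so the nearest
    # in-region point coincides with p itself.
    return abs(abs(angle_sum(p, verts)) - 2 * math.pi) < 1e-6

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```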
Step 2: when the trapping robots are close to the obstacle, after the trapping robots determine the tracked target points, calculating each trapping robot to the respective target pointThe unit direction vector of the (2) is multiplied by the maximum speed of the trapping robot, and finally the control input of the trapping robot under the trapping algorithm of the buffer Veno diagram is obtained.
Step 3: when the trapping robot is far away from the obstacle, a strategy of directly tracking the trapping point is adopted, and the trapping robot is calculated to reach the target trapping point p of the robot i The unit direction vector of the (2) is multiplied by the maximum speed of the trapping robot to obtain the direct trapping lower control input of the trapping robot.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application.
FIG. 1 is a framework diagram of the overall system of the application;
FIG. 2 is a flow chart of buffered Voronoi safety region construction;
FIG. 3 is a schematic diagram of trapping robots tracking trapping points;
FIG. 4 is a schematic diagram of the hyperplane between a robot and an obstacle within its sensing range;
FIG. 5 is a schematic diagram of the hyperplane between the trapping robot and another robot within its sensing range;
FIG. 6 is a schematic diagram of the computed buffered Voronoi diagram of a robot;
FIG. 7 is a flowchart of a Hungary capture task allocation algorithm;
Detailed Description
The application is described in detail below with reference to the drawings and the specific embodiments. It is noted that the aspects described below in connection with the drawings and the specific embodiments are merely exemplary and should not be construed as limiting the scope of the application in any way.
Step 1: and calculating the buffer Veno area boundary of the trapping robot adjacent to the obstacle. And establishing an objective function and solving f (x) by using constraint conditions.
Satisfy a 1 x≤b 1
Wherein,f=[0,0,0]。a 1 x≤b 1 represented as constraints of the obstacle.
Step 2: calculating buffer Veno area boundary of the trapping robot and the adjacent robot in the perception range, namely, hyperplane a between the trapping robot and the adjacent robot 2 T x=b 2
Wherein x is p And x e The coordinates of the trapping robot and the suspicious robot are represented, respectively.
Step 3: calculating hyperplane constraint between trapping robot and mapWherein,representing map range boundaries.
Step 4: calculating the intersection of the hyperplanes in the steps 1, 2 and 3, namely, the vertexes of the buffer Veno diagram of the trapping robot, wherein the area surrounded by the vertexes is the Veno buffer area of the trapping robot.
Step 5: the trapping robot performs trapping task allocation according to suspicious target positions
(1) Calculating the trapping points p evenly distributed around the suspicious object i ∈{1,2...n};
(2) And calculating the distances between all the trapping robots and the trapping points, and constructing a distance matrix D.
(3) Constructing an objective function minz= Σd ij x ij
(4) Constraint conditions from the objective function are:
wherein x is i,j Indicating whether robot i is arranged to track j the catch points. When x is i,j When=1, the trapping robot i is assigned the trapping j as the target point. When x is i,j When=0, it indicates that the trapping robot i is not assigned the trapping j as the target point.
(5) To solve for the optimal value of the objective function, first subtract from each row of the distance matrix D the minimum element of that row, then subtract from each column the minimum element of that column.
(6) Find a row (column) containing only one 0 element, mark that 0 element, traverse the column (row) in which it lies, and cross out the other 0 elements in that column (row); the marked elements indicate allocated trapping points. If the number of marked 0 elements equals the order n of the matrix D, the optimal allocation has been obtained; otherwise, execute step (7).
(7) Mark the rows of the matrix D for which the allocation is not completed, and mark the columns containing the unallocated 0 elements of those rows; draw vertical and horizontal lines through the unmarked rows and the marked columns, which yields a minimum set of straight lines covering all zero elements.
(8) Find the smallest element in the part not covered by the drawn lines and subtract it from every element not covered by a line. If the number of independent zero elements in the matrix now equals the order of the matrix, the algorithm ends; otherwise, return to step (6).
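The row/column reduction of step (5), subtracting every row's minimum and then every column's minimum, can be sketched as follows (function name assumed; the later line-drawing steps are omitted for brevity):

```python
def reduce_matrix(D):
    # Hungarian-style reduction: after subtracting every row minimum and
    # then every column minimum, each row and each column of the reduced
    # matrix contains at least one zero, marking candidate assignments.
    R = [row[:] for row in D]          # work on a copy
    for row in R:
        m = min(row)
        for j in range(len(row)):
            row[j] -= m
    for j in range(len(R[0])):
        m = min(row[j] for row in R)
        for row in R:
            row[j] -= m
    return R

D = [[4, 1, 3],
     [2, 0, 5],
     [3, 2, 2]]
R = reduce_matrix(D)   # [[2, 0, 2], [1, 0, 5], [0, 0, 0]]
```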
Step 6: when the trapping robot is distributed with trapping tasks, after a tracking target is determined, two trapping strategies are adopted according to the distance between the trapping robot and the obstacle in the perception range. When the trapping robot is close to the obstacle, a trapping algorithm based on a buffer voronoi diagram is adopted. At each sampling time, the robot calculates its buffered voronoi safe region and finds the target point nearest to the target within itIn practice, the Veno safety zone +.>Is a convex polygon. Let->Representing a convex polygon in R, where epsilon is the set of convex polygon edges and e is the set of convex edge vertices. The capturing robot is located in the Veno buffer area from the capturing point p i Nearest dot->Or the point p of trapping i By itself, or at the edge epsilon i On or with vertex e i And (5) overlapping. />The detailed solving steps of (a) are as follows:
(1) calculating a trapping point p of the trapping robot i With which all vertices e in the voronoi safe region V are bufferedInterior angle and θ.
Wherein g 1 And g 2 For the edge epsilon i Is included in the image data.
(2) If the sum of the interior angles at the trapping point p_i over all vertices of the buffered Voronoi safety region equals 2π, the trapping point p_i lies inside the region; the point of the region nearest to the target then coincides with p_i, i.e., p̄_i = p_i.
(3) If the sum of the interior angles is not equal to 2π, the trapping point p_i lies outside the buffered Voronoi safety region. Calculate λ_i to judge whether the nearest point p̄_i lies on an edge of the robot's buffered Voronoi safety region or coincides with a vertex e_i of that region.
(4) When 0 ≤ λ_i ≤ 1, the point nearest to the target trapping point p_i lies on the edge ε_i of the robot's buffered Voronoi safety region; the point g_p on ε_i closest to p_i is
g_p = (1 − λ_i) g_1 + λ_i g_2.
(5) When λ_i < 0 or λ_i > 1, the nearest point p̄_i coincides with one of the vertices of the edge ε_i of the robot's buffered Voronoi safety region.
(6) Calculate the distances from the trapping point p_i to the two vertices g_1 and g_2 of the edge ε_i; p̄_i coincides with the nearer of the two.
(7) If ‖p_i − g_1‖ ≤ ‖p_i − g_2‖, then p̄_i coincides with the vertex g_1 of edge ε_i: p̄_i = g_1.
(8) Otherwise, p̄_i coincides with the vertex g_2 of edge ε_i: p̄_i = g_2.
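Steps (3) onward amount to projecting p_i onto the convex cell: project onto each edge, clamp λ_i to [0, 1] (the vertex cases λ_i < 0 and λ_i > 1), and keep the nearest candidate. A minimal sketch under that reading (function name assumed; the polygon is a vertex list):

```python
import math

def closest_point_in_cell(p, verts):
    # For each edge (g1, g2): lam = <p - g1, g2 - g1> / |g2 - g1|^2,
    # clamped to [0, 1]; g_p = (1 - lam) * g1 + lam * g2 is then the
    # closest point of that edge to p. The nearest g_p over all edges
    # is the closest point of the convex cell boundary to p.
    best, best_d = None, float("inf")
    n = len(verts)
    for k in range(n):
        g1, g2 = verts[k], verts[(k + 1) % n]
        ex, ey = g2[0] - g1[0], g2[1] - g1[1]
        lam = ((p[0] - g1[0]) * ex + (p[1] - g1[1]) * ey) / (ex * ex + ey * ey)
        lam = max(0.0, min(1.0, lam))    # lam < 0 or lam > 1 -> vertex case
        gp = (g1[0] + lam * ex, g1[1] + lam * ey)
        d = math.hypot(p[0] - gp[0], p[1] - gp[1])
        if d < best_d:
            best, best_d = gp, d
    return best

cell = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```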
and 5, when the trapping robot is far away from the obstacle, the trapping robot directly moves towards the trapping point. The trapping robot is far from or near to an obstacle, and the buffering Veno trapping strategy of the trapping robot is as follows:
wherein v is i,max Is the maximum speed of the robot i,the coordinates of the trapping robot i, d is the distance from the trapping obstacle, d min Is a safe distance.
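The combined control law described above, the closest in-cell point as target when near an obstacle and the trapping point itself otherwise, scaled to maximum speed, can be sketched as below (names such as `trapping_control` are assumptions for the example):

```python
import math

def trapping_control(x_i, p_i, p_bar_i, d, d_min, v_max):
    # Pick the target: at or below the safety distance d_min the robot
    # heads for p_bar_i (closest point inside its buffered Voronoi
    # region), otherwise straight for the trapping point p_i; the
    # control input is the unit direction vector scaled by v_max.
    tx, ty = p_bar_i if d <= d_min else p_i
    dx, dy = tx - x_i[0], ty - x_i[1]
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        return (0.0, 0.0)        # already at the target: stop
    return (v_max * dx / norm, v_max * dy / norm)

# Far from obstacles (d > d_min): head straight for p_i = (3, 4) at speed 2.
u = trapping_control((0.0, 0.0), (3.0, 4.0), (0.0, 5.0),
                     d=2.0, d_min=1.0, v_max=2.0)
```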
When all trapping robots reach the trapping points around the suspicious robot, or the suspicious robot is cornered on the boundary, the stopping condition of tracking is met, indicating successful trapping.
For simplicity of explanation, the foregoing has been described as a series of steps, but it should be understood and appreciated that the methodology is not limited by the order of the acts, as some acts may occur in different orders; one skilled in the art will appreciate the principles behind them.

Claims (4)

1. A multi-robot collaborative trapping method in a security scene, characterized by comprising the following steps:
step one: determining a plurality of cooperative robots and generating, based on the position information of each trapping robot and the obstacles, their respective buffered Voronoi diagram spaces, which serve as obstacle-free areas;
step two: first defining a surrounding ring around the suspicious target and generating equiangular trapping points on the ring according to the number of trapping robots, then using an allocation algorithm to allocate the trapping points to all trapping robots in real time, so as to achieve an optimal combined trapping strategy;
step three: designing a combined trapping strategy according to the real-time distance between each trapping robot and the surrounding obstacles: when the distance to an obstacle is smaller than the safety threshold, each trapping robot tracks, inside its own safety area, the trapping point closest to the suspicious target, and within each trapping robot's buffered Voronoi safety area the motion position at the next moment is determined to be collision-free; when the distance to the obstacle is larger than the safety threshold, each trapping robot moves directly toward its allocated trapping point;
in step two, the optimal combined trapping strategy is implemented as follows:
the following steps are executed in a loop: (1) computing the real-time distance matrix from the positions of all trapping robots and of all trapping points; (2) subtracting from each row of the matrix the minimum element of that row, and from each column the minimum element of that column; (3) finding each row/column of the real-time distance matrix containing only one "0" element and deleting the other "0" elements in the same column/row as that "0"; (4) judging whether the number of independent zero elements in the real-time distance matrix equals the order of the matrix; if equal, the allocation ends, the row of each 0 element in the distance matrix giving the serial number of a trapping robot and its column the serial number of the trapping point allocated to that robot; if not equal, continuing with step (5); (5) marking the rows of the matrix for which the allocation of a trapping robot is not completed, marking the columns in which the unallocated 0-element trapping points of those rows lie, and drawing vertical and horizontal lines through the unmarked rows and the marked columns; (6) finding the minimum element of the real-time distance matrix in the part not covered by the drawn lines, subtracting that minimum element from every element not covered by a line, and adding it to every element covered by both a horizontal and a vertical line; the allocation ends when the number of independent zero elements in the real-time distance matrix equals the order of the matrix; the row of each 0 element in the distance matrix gives the serial number of a trapping robot and its column the number of that robot's trapping point, i.e., the optimal combined trapping strategy of the trapping robots;
in step three, each trapping robot adopts a different trapping strategy according to the distance to the obstacles within its sensing range: when the distance between the trapping robots and an obstacle is smaller than the safety threshold, each trapping robot tracks, inside its buffered Voronoi safety area, the target point closest to the suspicious robot; when the target point approaches the obstacle, it rotates along the tangent of the obstacle so as to move away from it, and the trapping robot, following the target point, also drives away from the obstacle, thereby meeting the robot's obstacle-avoidance requirement; when the distance between the trapping robot and the obstacle is larger than the safety threshold, the robot takes its allocated trapping point as the target and moves directly toward it.
2. The method for collaborative trapping of multiple robots in a security scene as set forth in claim 1, wherein in step one, obstacles are set in the simulation environment, and suspicious robots are randomly generated in the environment to simulate special conditions of a real environment.
3. The method for collaborative trapping of multiple robots in a security scene according to claim 1, wherein in step two, the trapping points of the trapping robots are selected such that, when a suspicious robot to be trapped appears in the environment, trapping points capable of uniformly surrounding the suspicious object are generated around the object according to the numbers of trapping robots and suspicious robots.
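The uniform surround of claim 3 can be sketched by spacing the trapping points evenly on a capture circle centred on the suspicious robot (the radius and function name are assumptions for illustration):

```python
import math

def trapping_points(target, n_robots, radius):
    # Place n_robots trapping points uniformly on a circle of the
    # given capture radius centred on the suspicious robot's position.
    cx, cy = target
    return [
        (cx + radius * math.cos(2 * math.pi * k / n_robots),
         cy + radius * math.sin(2 * math.pi * k / n_robots))
        for k in range(n_robots)
    ]
```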
4. The method for collaborative trapping of multiple robots in a security scene according to claim 1, wherein in step two, the trapping robots calculate their distances to all trapping points to generate a distance matrix, and based on this distance matrix the Hungarian algorithm assigns a different target point to each robot according to the globally shortest distance principle.
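Claim 4 pairs a Euclidean distance matrix with the Hungarian algorithm; for the small team sizes typical of a trapping task, an exhaustive search over permutations yields the same globally shortest assignment and keeps this sketch self-contained (function names are illustrative, not from the patent):

```python
import math
from itertools import permutations

def distance_matrix(robots, points):
    # D[i][j]: Euclidean distance from trapping robot i to trapping point j.
    return [[math.dist(r, p) for p in points] for r in robots]

def assign(robots, points):
    # Brute-force optimal assignment: for small n this returns the
    # same result as the Hungarian algorithm, by minimising the total
    # distance over all n! robot-to-point pairings.
    d = distance_matrix(robots, points)
    n = len(robots)
    best = min(permutations(range(n)),
               key=lambda perm: sum(d[i][perm[i]] for i in range(n)))
    return list(enumerate(best))   # (robot index, trapping point index)
```

For two robots at (0, 0) and (5, 0) with trapping points at (5, 1) and (0, 1), the globally shortest pairing sends each robot to the point on its own side.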
CN202210918432.5A 2022-08-01 2022-08-01 Multi-machine collaborative trapping method based on buffered Voronoi diagram in complex dynamic security environment Active CN115629600B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210918432.5A CN115629600B (en) 2022-08-01 2022-08-01 Multi-machine collaborative trapping method based on buffered Voronoi diagram in complex dynamic security environment

Publications (2)

Publication Number Publication Date
CN115629600A CN115629600A (en) 2023-01-20
CN115629600B true CN115629600B (en) 2023-12-12

Family

ID=84903361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210918432.5A Active CN115629600B (en) 2022-08-01 2022-08-01 Multi-machine collaborative trapping method based on buffered Voronoi diagram in complex dynamic security environment

Country Status (1)

Country Link
CN (1) CN115629600B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116430865A (en) * 2023-04-17 2023-07-14 北方工业大学 Multi-machine collaborative trapping method under uncertain probability framework

Citations (8)

Publication number Priority date Publication date Assignee Title
CN108733074A * 2018-05-23 2018-11-02 南京航空航天大学 Multi-UAV formation path planning method based on the Hungarian algorithm
CN109079792A * 2018-09-05 2018-12-25 顺德职业技术学院 Multi-robot-based target rounding-up method and system
CN111240332A (en) * 2020-01-18 2020-06-05 湖南科技大学 Multi-target enclosure method for cooperative operation of swarm robots in complex convex environment
CN113253738A (en) * 2021-06-22 2021-08-13 中国科学院自动化研究所 Multi-robot cooperation trapping method and device, electronic equipment and storage medium
CN113467508A (en) * 2021-06-30 2021-10-01 天津大学 Multi-unmanned aerial vehicle intelligent cooperative decision-making method for trapping task
CN113580129A (en) * 2021-07-19 2021-11-02 中山大学 Multi-target cooperative trapping method, device and medium based on robot
WO2021242215A1 (en) * 2020-05-26 2021-12-02 Edda Technology, Inc. A robot path planning method with static and dynamic collision avoidance in an uncertain environment
CN114326747A (en) * 2022-01-06 2022-04-12 中国人民解放军国防科技大学 Multi-target enclosure control method and device for group robots and computer equipment

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US11292132B2 (en) * 2020-05-26 2022-04-05 Edda Technology, Inc. Robot path planning method with static and dynamic collision avoidance in an uncertain environment
KR102490755B1 (en) * 2020-10-08 2023-01-25 엘지전자 주식회사 Moving robot system

Non-Patent Citations (1)

Title
A Hybrid Path Planning and Formation Control Strategy of Multi-Robots in a Dynamic Environment; Meng Zhou et al.; Journal of Advanced Computational Intelligence and Intelligent Informatics; Vol. 26, No. 3; pp. 342-354 *

Also Published As

Publication number Publication date
CN115629600A (en) 2023-01-20

Similar Documents

Publication Publication Date Title
CN108647646B (en) Low-beam radar-based short obstacle optimized detection method and device
Lin et al. A robust real-time embedded vision system on an unmanned rotorcraft for ground target following
CN108983823B (en) Plant protection unmanned aerial vehicle cluster cooperative control method
Guérin et al. Towards an autonomous warehouse inventory scheme
Waharte et al. Probabilistic search with agile UAVs
Dong et al. Real-time avoidance strategy of dynamic obstacles via half model-free detection and tracking with 2d lidar for mobile robots
US20230305572A1 (en) Method for drivable area detection and autonomous obstacle avoidance of unmanned haulage equipment in deep confined spaces
CN115629600B (en) Multi-machine collaborative trapping method based on buffer Wino diagram in complex dynamic security environment
WO2020001395A1 (en) Road pedestrian classification method and top-view pedestrian risk quantitative method in two-dimensional world coordinate system
CN112130587A (en) Multi-unmanned aerial vehicle cooperative tracking method for maneuvering target
CN112198901B (en) Unmanned aerial vehicle autonomous collision avoidance decision method based on three-dimensional dynamic collision area
CN107807671B (en) Unmanned plane cluster danger bypassing method
Hui et al. A novel autonomous navigation approach for UAV power line inspection
CN110568861A (en) Man-machine movement obstacle monitoring method, readable storage medium and unmanned machine
CN113034579A (en) Dynamic obstacle track prediction method of mobile robot based on laser data
Silva et al. Monocular trail detection and tracking aided by visual SLAM for small unmanned aerial vehicles
CN110823223A (en) Path planning method and device for unmanned aerial vehicle cluster
Liau et al. Non-metric navigation for mobile robot using optical flow
CN110147748B (en) Mobile robot obstacle identification method based on road edge detection
CN114397887B (en) Group robot aggregation control method based on three-layer gene regulation network
CN115113651A (en) Unmanned robot bureaucratic cooperative coverage optimization method based on ellipse fitting
Zeng et al. Mobile robot exploration based on rapidly-exploring random trees and dynamic window approach
Li et al. Vg-swarm: A vision-based gene regulation network for uavs swarm behavior emergence
Cherubini et al. Avoiding moving obstacles during visual navigation
CN113467462A (en) Pedestrian accompanying control method and device for robot, mobile robot and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant