CN114397887B - Group robot aggregation control method based on three-layer gene regulation network - Google Patents

Info

Publication number
CN114397887B
CN114397887B (application CN202111571098.2A)
Authority
CN
China
Prior art keywords
robot
robots
target
coordinate
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111571098.2A
Other languages
Chinese (zh)
Other versions
CN114397887A (en)
Inventor
范衠
石泽
马培立
朱贵杰
洪峻操
黄华兴
蔡堉伟
董朝晖
宁为博
郝志峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shantou University
Original Assignee
Shantou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shantou University filed Critical Shantou University
Priority to CN202111571098.2A priority Critical patent/CN114397887B/en
Publication of CN114397887A publication Critical patent/CN114397887A/en
Application granted granted Critical
Publication of CN114397887B publication Critical patent/CN114397887B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0287 Control of position or course in two dimensions specially adapted to land vehicles involving a plurality of land vehicles, e.g. fleet or convoy travelling
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a group robot aggregation control method based on a three-layer gene regulation network. The gene regulation network model comprises a creation layer, a formation layer, and a control layer. The relative distance between each robot and the target and the relative distance between every two robots, acquired by the on-board sensors of the group robots, are imported into the creation layer to obtain a local coordinate system at the current moment. The coordinate information of all individuals in the local coordinate system at the previous moment is then invoked for coordinate conversion, yielding the target coordinate information, the obstacle coordinate information, and the coordinate information of each robot in the local coordinate system at the current moment. The target coordinate information and the obstacle coordinate information are imported into the formation layer to obtain the trapping form and the trapping control point of each robot. Finally, the coordinate information and trapping control point of each robot are imported into the control layer, which guides the group robots to move toward and trap the target. The invention realizes adaptive trapping of a target in three-dimensional space where the global positioning system fails.

Description

Group robot aggregation control method based on three-layer gene regulation network
Technical Field
The invention relates to the technical field of group robot control, in particular to a group robot aggregation control method based on a three-layer gene regulation network.
Background
Biologically inspired group control models have become one of the research hotspots in the field of group robot control in recent years. For the problem of group motion control in three-dimensional space, Lwowski J et al. proposed a flocking control algorithm for unmanned aerial vehicles based on bird flocks, in which only a stereo camera, a global positioning system, and an inertial measurement unit are used to enable a group of unmanned aerial vehicles to trap a target; however, the trapping form maintained throughout the process is fixed and cannot adapt to changes in the environment. On this basis, although the gene regulation network model proposed by Meng et al. can change the trapping form of the group robots, its application does not cover adaptive trapping of a target in three-dimensional space when the global positioning system fails due to electromagnetic interference or a complex environment, so the model still has certain limitations.
Disclosure of Invention
The invention provides a group robot aggregation control method based on a three-layer gene regulation network, aiming to solve one or more technical problems in the prior art and at least to provide a beneficial alternative.
The invention provides a group robot aggregation control method based on a three-layer gene regulation network, which comprises the following steps:
when the group robots search for a target in an unknown environment, detecting the surrounding environment information of the target with the sensors on board the group robots to obtain the relative distance between each robot and the target and the relative distance between every two robots;
importing the relative distance between each robot and the target and the relative distance between every two robots into the creation layer of the gene regulation network model for data fusion calculation to obtain a local coordinate system at the current moment;
invoking the coordinate information of all individuals in the unknown environment in the local coordinate system at the previous moment for coordinate conversion, and acquiring the coordinate information of all individuals in the local coordinate system at the current moment, wherein the coordinate information comprises target coordinate information, obstacle coordinate information, and the coordinate information of each robot;
importing the target coordinate information and the obstacle coordinate information into the formation layer for morphological gradient extraction to obtain the current trapping form of the group robots, and further obtaining the trapping control point of each robot;
importing the coordinate information of each robot and the corresponding trapping control point into the control layer, and guiding the group robots to move toward and trap the target using a cluster control algorithm.
Further, the step of importing the relative distance between each robot and the target and the relative distance between every two robots into the creation layer for data fusion calculation to obtain the local coordinate system at the current moment includes:
screening, from the group robots, the one dominant robot closest to the target according to the relative distance between each robot and the target, and marking the position of the dominant robot as the coordinate origin;
selecting, from the other robots, the one adjacent robot closest to the dominant robot according to the relative distance between every two robots, and marking the straight line formed between the position of the dominant robot and the position of the adjacent robot as the X axis;
selecting two auxiliary robots within the common communication range of the dominant robot and the adjacent robot, and creating two planes with the X axis as their intersecting line so that the two auxiliary robots fall in the two planes respectively;
determining the coordinate information of the four robots according to the relative distance between each pair among the dominant robot, the adjacent robot, and the two auxiliary robots, combined with the included angle between the two planes, so as to obtain the local coordinate system at the current moment.
Further, the invoking of the coordinate information of all individuals in the unknown environment in the local coordinate system at the previous moment for coordinate conversion, to acquire the coordinate information of all individuals in the local coordinate system at the current moment, includes:
constructing a coordinate rotation matrix between the local coordinate system at the current moment and the local coordinate system at the previous moment according to the change in relative position between the dominant robot and one of the auxiliary robots;
combining the coordinate information of all individuals in the local coordinate system at the previous moment with the coordinate rotation matrix to calculate the coordinate information of all individuals in the local coordinate system at the current moment.
Further, the constructing of the coordinate rotation matrix between the local coordinate system at the current moment and the local coordinate system at the previous moment according to the change in relative position between the dominant robot and one of the auxiliary robots includes:
acquiring the coordinate difference matrix between the dominant robot and one of the auxiliary robots in the local coordinate system at the previous moment, and recording it as the first difference matrix;
acquiring the coordinate difference matrix between the dominant robot and the same auxiliary robot in the local coordinate system at the current moment, and recording it as the second difference matrix;
determining a quaternion according to the geometric relationship between the first difference matrix and the second difference matrix, and constructing the coordinate rotation matrix between the local coordinate system at the current moment and that at the previous moment using the quaternion.
Further, the determining of a quaternion according to the geometric relationship between the first difference matrix and the second difference matrix includes:
when line_{AD} + line_{A′D′} = 0, determining the quaternion as q = [0, u], where u is a unit vector perpendicular to line_{AD};
or, when line_{AD} + line_{A′D′} ≠ 0, determining the quaternion as:
q = [cos(θ/2), sin(θ/2)·u],  θ = arccos(line_{AD} · line_{A′D′} / (‖line_{AD}‖·‖line_{A′D′}‖))
wherein u is the normalized value of (line_{AD} × line_{A′D′}), line_{A′D′} is the first difference matrix, and line_{AD} is the second difference matrix.
Further, the coordinate information of all individuals in the local coordinate system at the current moment is calculated as:
P_X = line_{A′X′} · R
wherein P_X is the coordinate information of individual X in the local coordinate system at the current moment, line_{A′X′} is the coordinate difference matrix between the dominant robot A and individual X in the local coordinate system at the previous moment, individual X is the target, an obstacle, or any robot other than the dominant robot, the adjacent robot, and the two auxiliary robots, and R is the coordinate rotation matrix.
Further, the guiding of the group robots to move toward and trap the target using the cluster control algorithm includes:
starting from i = 1, counting all robots falling within the communication range of the i-th robot, and calculating the anti-collision control component of the i-th robot moving within the cluster;
acquiring the multiple virtual robots generated by projecting the i-th robot onto the surfaces of the detected obstacles, and calculating the obstacle-avoidance control component of the i-th robot moving among these virtual robots;
creating a virtual target robot at the trapping control point of the i-th robot, and calculating the motion control component of the i-th robot moving toward the virtual target robot;
fusing the anti-collision control component, the obstacle-avoidance control component, and the motion control component to obtain the global control component of the i-th robot moving toward the target;
assigning i + 1 to i and repeating the above steps until the global control component of each robot in the group moving toward the target is obtained.
Further, the global control component of the i-th robot moving toward the target is calculated as:
u_i = Σ_{j∈N_i} c₁^α φ_α(‖q_j − q_i‖_σ) n_{i,j} + Σ_{j∈N_i} c₂^α a_{i,j}(p_j − p_i) + Σ_{k∈N_i^β} c₁^β φ_β(‖q̂_{i,k} − q_i‖_σ) n̂_{i,k} + Σ_{k∈N_i^β} c₂^β b_{i,k}(p̂_{i,k} − p_i) − c₁^γ σ₁(q_i − q_γ) − c₂^γ(p_i − p_λ)
wherein u_i is the global control component; the first two sums form the anti-collision control component, the next two sums form the obstacle-avoidance control component, and the last two terms form the motion control component; c₁^α, c₂^α, c₁^β, c₂^β, c₁^γ, c₂^γ are all regulation parameters; N_i is the set of all robots falling within the communication range of the i-th robot; N_i^β is the set of the multiple obstacles detected by the i-th robot; q_j is the coordinate position of the j-th robot in set N_i; q_i is the coordinate position of the i-th robot; n_{i,j} is the motion vector from the i-th robot to the j-th robot; p_j is the speed value of the j-th robot in set N_i; p_i is the speed value of the i-th robot; φ(x) is the smoothing potential function; ‖x‖_σ is the σ-norm value; q̂_{i,k} is the coordinate position of the virtual robot generated on the surface of the k-th obstacle in set N_i^β; p̂_{i,k} is the speed value of that virtual robot; n̂_{i,k} is the motion vector from the i-th robot to the virtual robot generated on the surface of the k-th obstacle; q_γ is the coordinate position of the virtual target robot; p_λ is the speed value of the virtual target robot; a_{i,j} is the adjacency coefficient between the i-th robot and the j-th robot in set N_i; and b_{i,k} is the adjacency coefficient between the i-th robot and the k-th obstacle in set N_i^β.
The invention has at least the following beneficial effects: an improved three-layer gene regulation network is used, and the local coordinate system at the current moment is introduced in its creation layer, so that the coordinate positions of the target, the obstacles, and the group robots can be updated automatically in three-dimensional space where the global positioning system fails, and the trapping form extracted by the formation layer can be transformed adaptively; moreover, for each robot's strategy of surrounding the target, the robot's motion within the cluster, among multiple obstacles, and toward its trapping control point are all considered together, thereby ensuring the motion stability of the group robots.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate and do not limit the invention.
FIG. 1 is a schematic flow chart of a group robot aggregation control method based on a three-layer gene regulation network in an embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It should be noted that although functional block diagrams are depicted and logical sequences are shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that shown. The terms first, second, and the like in the description, the claims, and the above-described figures are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order.
Referring to fig. 1, the group robot aggregation control method based on a three-layer gene regulation network comprises the following steps:
s101, when a group robot searches for a target in an unknown environment, detecting surrounding environment information of the target by using a sensor on board the group robot, and acquiring the relative distance between each robot and the target and the relative distance between each two robots;
specifically, the group robots are controlled to walk randomly in an unknown environment, and after a sensor on the group robots detects a target, the robot performs real-time interaction between the group robots on the basis of the acquired relative distance between the body and the target and the relative distance between the body and other robots, wherein the sensor on the group robots comprises at least one of an ultrasonic sensor, an odor sensor, an infrared sensor and a camera.
S102, the gene regulation network model comprises a creation layer, a formation layer, and a control layer; the relative distance between each robot and the target and the relative distance between every two robots are imported into the creation layer for data fusion calculation to obtain the local coordinate system at the current moment;
the implementation process of the invention comprises the following steps:
(1) According to the relative distance between each robot and the target, the one dominant robot A closest to the target is screened from the group robots, and the position of the dominant robot A is marked as the coordinate origin O;
(2) According to the relative distance between every two robots, the one adjacent robot B that is closest to the dominant robot A and falls within its communication range is selected from the other robots; the straight line formed between the position of the dominant robot A and the position of the adjacent robot B is marked as the X axis, with the direction from the dominant robot A to the adjacent robot B as the positive X direction;
(3) Two auxiliary robots (C, D) are selected within the common communication range of the dominant robot A and the adjacent robot B, and two planes are created with the X axis as their intersecting line so that the two auxiliary robots fall in the two planes respectively;
in this step (3), first, a judgment condition is established that any one of the auxiliary robots X falls within the common communication range of the master robot a and the adjacent robot B as follows:
min(d XA +d XB )&(d XA <r)&(d XB <r)
wherein d XA For the relative distance d of the auxiliary robot X to the dominant robot A XB In order to assist the relative distance between the robot X and the adjacent robot B, r is the farthest distance between any two robots that affect each other, min (d XA +d XB ) Representation assurance (d XA +d XB ) The value of (d) is the smallest, i.e. it means that the unselected auxiliary robot X is currently closest to the dominant robot A and the neighboring robot B, and d XA <r indicates that the auxiliary robot X falls within the communication range of the dominant robot a;
secondly, selecting an auxiliary robot C and an auxiliary robot D from other robots according to the judging conditions, and creating two non-overlapping planes (namely an XOY plane and an XOY ' plane) by taking an X axis as an intersecting line, so that the auxiliary robot C falls on one side of the X axis forward direction on the XOY plane (excluding the Y axis, which is mutually perpendicular to the point O) and one side of the auxiliary robot D falls on the X axis forward direction on the XOY ' plane (excluding the Y ' axis, which is mutually perpendicular to the point O), wherein the XOY plane belongs to a local coordinate system at the current moment.
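The judgment condition above can be sketched in a few lines. The following is a minimal illustration (the function name and data layout are hypothetical, not part of the patent): it returns the candidate minimizing d_XA + d_XB among those within communication range r of both the dominant robot A and the adjacent robot B.

```python
import math

def select_auxiliary(candidates, pos_a, pos_b, r):
    """Pick the candidate minimising d_XA + d_XB among those whose
    distances to both the dominant robot A and the adjacent robot B
    are below the communication range r; None if no candidate fits."""
    best, best_cost = None, float("inf")
    for name, pos in candidates.items():
        d_xa = math.dist(pos, pos_a)   # distance to dominant robot A
        d_xb = math.dist(pos, pos_b)   # distance to adjacent robot B
        if d_xa < r and d_xb < r and d_xa + d_xb < best_cost:
            best, best_cost = name, d_xa + d_xb
    return best
```

Calling the function twice (removing the first winner from the candidate pool before the second call) yields the two auxiliary robots C and D.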
(4) The coordinate information of the four robots is determined from the relative distance between each pair among the dominant robot A, the adjacent robot B, and the two auxiliary robots (C, D), combined with the included angle between the two planes, so as to obtain the local coordinate system at the current moment.
In this step (4), it is first determined that the two-dimensional coordinates of the dominant robot A in both the XOY plane and the XOY′ plane are A(0, 0), and that the two-dimensional coordinates of the adjacent robot B in both planes are B(d_AB, 0); the two-dimensional coordinates of the auxiliary robot C in the XOY plane and of the auxiliary robot D in the XOY′ plane are then calculated as:
C(d_AC·cos∠CAB, d_AC·sin∠CAB),  D(d_AD·cos∠DAB, d_AD·sin∠DAB)
wherein ∠CAB is the included angle between the straight line from the auxiliary robot C to the dominant robot A and the X axis, and ∠DAB is the included angle between the straight line from the auxiliary robot D to the dominant robot A and the X axis;
secondly, the plane equation of the XOY plane, A₁x + B₁y + C₁z + D₁ = 0, can be determined from the coordinates of the dominant robot A, the adjacent robot B, and the auxiliary robot C, and the plane equation of the XOY′ plane, A₂x + B₂y + C₂z + D₂ = 0, can be determined from the coordinates of the dominant robot A, the adjacent robot B, and the auxiliary robot D; the cosine of the included angle θ between the XOY plane and the XOY′ plane is then obtained as:
cos θ = (A₁A₂ + B₁B₂ + C₁C₂) / (√(A₁² + B₁² + C₁²)·√(A₂² + B₂² + C₂²))
wherein A₁, B₁, C₁, D₁, A₂, B₂, C₂, D₂ are all parameters that can be solved by substituting the coordinates into the corresponding plane equations;
finally, from the two-dimensional coordinates of the dominant robot A, the adjacent robot B, and the two auxiliary robots (C, D), together with the cosine of the included angle, the coordinate information of the four robots in the local coordinate system at the current moment is determined as:
A(0, 0, 0), B(d_AB, 0, 0), C(d_AC·cos∠CAB, d_AC·sin∠CAB, 0), D(d_AD·cos∠DAB, d_AD·sin∠DAB·cos θ, d_AD·sin∠DAB·sin θ)
Thus, a local coordinate system at the current moment can be constructed.
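The construction of the local coordinate system from pairwise distances alone can be illustrated as follows. This is a hedged sketch, not the patent's exact computation: the function name is hypothetical, the angles at A are recovered by the law of cosines, and the dihedral angle between the two planes is recovered here from the distance between the two auxiliary robots (d_CD), which the patent instead derives from the plane equations. A is placed at the origin, B on the +X axis, C in the XOY plane, and D in a second plane through the X axis.

```python
import math

def local_frame(d_ab, d_ac, d_bc, d_ad, d_bd, d_cd):
    """Place A at the origin, B on the +X axis, C in the XOY plane and
    D in a second plane through the X axis, from pairwise distances."""
    def angle_at_a(adj1, adj2, opp):            # law of cosines
        return math.acos((adj1**2 + adj2**2 - opp**2) / (2 * adj1 * adj2))
    A = (0.0, 0.0, 0.0)
    B = (d_ab, 0.0, 0.0)
    cab = angle_at_a(d_ac, d_ab, d_bc)          # angle ∠CAB at A
    dab = angle_at_a(d_ad, d_ab, d_bd)          # angle ∠DAB at A
    C = (d_ac * math.cos(cab), d_ac * math.sin(cab), 0.0)
    dx, dy = d_ad * math.cos(dab), d_ad * math.sin(dab)
    # dihedral angle between the XOY plane and the plane containing D,
    # recovered from the C-D distance (illustrative shortcut)
    cphi = ((C[0] - dx)**2 + C[1]**2 + dy**2 - d_cd**2) / (2 * C[1] * dy)
    phi = math.acos(max(-1.0, min(1.0, cphi)))
    D = (dx, dy * math.cos(phi), dy * math.sin(phi))
    return A, B, C, D
```

With A = (0,0,0), B = (2,0,0), C = (1,1,0), D = (1,0,1), feeding the six pairwise distances back in reproduces these coordinates up to the sign convention chosen for D's plane.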
S103, the coordinate information of all individuals in the unknown environment in the local coordinate system at the previous moment is invoked for coordinate conversion, and the coordinate information of all individuals in the local coordinate system at the current moment is acquired, wherein the coordinate information comprises the target coordinate information, the obstacle coordinate information, and the coordinate information of each robot;
the implementation process of the invention comprises the following steps:
(1) A coordinate rotation matrix between the local coordinate system at the current moment and the local coordinate system at the previous moment is constructed according to the change in relative position between the dominant robot and one of the auxiliary robots;
in this step (1), the coordinate difference matrix between the dominant robot A and one of the auxiliary robots (the auxiliary robot D is selected here) in the local coordinate system at the previous moment is first acquired and recorded as the first difference matrix line_{A′D′}; correspondingly, the coordinate difference matrix between the dominant robot A and the auxiliary robot D in the local coordinate system at the current moment is acquired and recorded as the second difference matrix line_{AD};
secondly, a quaternion is determined according to the geometric relationship between the first difference matrix line_{A′D′} and the second difference matrix line_{AD}, specifically expressed as follows: when line_{AD} + line_{A′D′} = 0, the quaternion is q = [0, u], with u a unit vector perpendicular to line_{AD}; or, when line_{AD} + line_{A′D′} ≠ 0, the quaternion is q = [cos(θ/2), sin(θ/2)·u], wherein θ = arccos(line_{AD} · line_{A′D′} / (‖line_{AD}‖·‖line_{A′D′}‖)) and u is the normalized value of (line_{AD} × line_{A′D′});
finally, the coordinate rotation matrix R between the local coordinate system at the current moment and that at the previous moment is constructed from the quaternion q = [q₁, q₂, q₃, q₄] as:
R = | 1−2(q₃²+q₄²)   2(q₂q₃−q₁q₄)   2(q₂q₄+q₁q₃) |
    | 2(q₂q₃+q₁q₄)   1−2(q₂²+q₄²)   2(q₃q₄−q₁q₂) |
    | 2(q₂q₄−q₁q₃)   2(q₃q₄+q₁q₂)   1−2(q₂²+q₃²) |
wherein q₁, q₂, q₃, q₄ are the components of the quaternion q.
(2) Combining the coordinate information of all individuals in the local coordinate system at the previous moment with the coordinate rotation matrix, the coordinate information of all individuals in the local coordinate system at the current moment is calculated as:
P_X = line_{A′X′} · R
wherein P_X is the coordinate information of individual X in the local coordinate system at the current moment, and line_{A′X′} is the coordinate difference matrix between the dominant robot A and individual X in the local coordinate system at the previous moment; individual X is the target, an obstacle, or any robot other than the dominant robot, the adjacent robot, and the two auxiliary robots. Using this formula, the target coordinate information, the obstacle coordinate information, and the coordinate information of each robot can all be obtained.
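The rotation update in S103 can be sketched with the standard quaternion-between-two-vectors construction (a generic implementation under that assumption; function names are illustrative, and the antiparallel fallback axis is a common convention rather than anything the patent specifies):

```python
import numpy as np

def quat_between(v_prev, v_curr):
    """Unit quaternion [w, x, y, z] rotating v_prev onto v_curr."""
    a = v_prev / np.linalg.norm(v_prev)
    b = v_curr / np.linalg.norm(v_curr)
    if np.allclose(a + b, 0.0):            # antiparallel: 180-degree turn
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.allclose(axis, 0.0):         # a was along x; pick another axis
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return np.array([0.0, *axis])
    theta = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    u = np.cross(a, b)
    n = np.linalg.norm(u)
    u = u / n if n > 0 else u              # parallel vectors: zero rotation
    return np.array([np.cos(theta / 2), *(np.sin(theta / 2) * u)])

def quat_to_matrix(q):
    """Rotation matrix from a unit quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```

The matrix returned rotates the previous-frame difference vector onto the current one; applying it to every stored difference vector line_{A′X′} yields the updated positions, matching the P_X = line_{A′X′} · R step up to the row/column-vector convention.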
S104, the target coordinate information and the obstacle coordinate information are imported into the formation layer for morphological gradient extraction to obtain the current trapping form of the group robots, and further the trapping control point of each robot;
specifically, first, according to the target coordinate information and the obstacle coordinate information, a concentration gradient space containing the obstacle information is generated as follows:
Figure GDA0004201023020000082
wherein T is the morphological gradient generated by all targets, N T For the target total number, T i The concentration gradient created for the ith target,
Figure GDA0004201023020000083
is T i Second derivative of the concentration gradient space (+.>
Figure GDA0004201023020000084
Laplacian operator), gamma i For the coordinate information of the ith target, O is the morphological gradient generated by all obstacles, N o To the total number of obstacles, O j The concentration gradient created for the jth obstruction,
Figure GDA0004201023020000085
is O j Second derivative of the concentration gradient space, beta j For the coordinate information of the jth obstacle, M is a morphological gradient space formed under the condition of considering the target and the obstacle, k and θ are regulation parameters, t is time, wherein a formula about dM/dt is expressed as a concentration gradient space containing the obstacle information obtained through XNOR exclusive OR gate model processing;
secondly, equipotential lines are extracted from the concentration gradient space, and the current trapping form of the group robots is generated according to the form of the equipotential lines; finally, the current trapping form is uniformly sampled according to the number of robots in the group to obtain the trapping control point of each robot, i.e., M robots correspond to M trapping control points.
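The uniform sampling of trapping control points along an equipotential contour can be illustrated for the simplest case of a circular contour, used here only as a stand-in for the contour actually extracted from the concentration gradient (the function name and the circle approximation are illustrative assumptions):

```python
import math

def sample_control_points(center, radius, m):
    """Uniformly sample m trapping control points on a circular
    equipotential contour of the given radius around the target."""
    cx, cy = center
    return [(cx + radius * math.cos(2 * math.pi * k / m),
             cy + radius * math.sin(2 * math.pi * k / m))
            for k in range(m)]
```

Each of the m robots is then assigned one of the m sampled points as its trapping control point.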
S105, the coordinate information of each robot and the corresponding trapping control point are imported into the control layer, and the group robots are guided to move toward and trap the target using the cluster control algorithm.
In the embodiment of the invention, the Olfati-Saber algorithm is used as the cluster control algorithm in the control layer, and the specific implementation process comprises the following steps:
(1) Starting from i = 1, all robots falling within the communication range of the i-th robot are counted, and the anti-collision control component u_i^α of the i-th robot moving within the cluster is calculated as:
u_i^α = Σ_{j∈N_i} c₁^α φ_α(‖q_j − q_i‖_σ) n_{i,j} + Σ_{j∈N_i} c₂^α a_{i,j}(p_j − p_i)
wherein c₁^α and c₂^α are regulation parameters, N_i is the set of all robots falling within the communication range of the i-th robot, q_j is the coordinate position of the j-th robot in set N_i, q_i is the coordinate position of the i-th robot, n_{i,j} is the motion vector from the i-th robot to the j-th robot, a_{i,j} is the adjacency coefficient between the i-th robot and the j-th robot in set N_i, p_j is the speed value of the j-th robot in set N_i, p_i is the speed value of the i-th robot, φ_α(x) is the smoothing potential function, and ‖x‖_σ is the σ-norm value;
the above parameters are further expanded as follows:
A. Smoothing potential function: φ_α(z) = ρ_h(z/‖r‖_σ)·φ(z − ‖d‖_σ), with φ(z) = ½[(a+b)σ₁(z+c) + (a−b)] and σ₁(z) = z/√(1+z²);
B. σ-norm value: ‖z‖_σ = (√(1 + ε‖z‖²) − 1)/ε;
C. Motion vector: n_{i,j} = (q_j − q_i)/√(1 + ε‖q_j − q_i‖²);
D. Adjacency coefficient: a_{i,j} = ρ_h(‖q_j − q_i‖_σ/‖r‖_σ);
E. Associated scalar bump function: ρ_h(z) = 1 for z ∈ [0, h); ρ_h(z) = ½[1 + cos(π(z−h)/(1−h))] for z ∈ [h, 1]; ρ_h(z) = 0 otherwise;
wherein z is a custom variable, r is the farthest distance between any two robots, d is the desired distance between neighboring robots, ε, a, b, c are constants, and h is a set critical value.
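The σ-norm, bump function ρ_h, and smoothing potential of this anti-collision term can be sketched as follows (parameter values are illustrative; the patent does not fix ε, h, a, or b):

```python
import math

def sigma_norm(z, eps=0.1):
    """Sigma-norm of a scalar distance z: smooth, differentiable at 0."""
    return (math.sqrt(1 + eps * z * z) - 1) / eps

def bump(z, h=0.2):
    """rho_h: smooth cutoff equal to 1 on [0, h), decaying to 0 on [h, 1]."""
    if 0 <= z < h:
        return 1.0
    if h <= z <= 1:
        return 0.5 * (1 + math.cos(math.pi * (z - h) / (1 - h)))
    return 0.0

def phi(z, a=5.0, b=5.0):
    """Uneven sigmoid phi(z) = 0.5[(a+b)sigma_1(z+c) + (a-b)]."""
    c = abs(a - b) / math.sqrt(4 * a * b)
    s1 = (z + c) / math.sqrt(1 + (z + c) ** 2)   # sigma_1
    return 0.5 * ((a + b) * s1 + (a - b))

def phi_alpha(z, r, d, eps=0.1):
    """Smoothing potential phi_alpha(z) = rho_h(z/||r||_sigma) phi(z-||d||_sigma)."""
    return bump(z / sigma_norm(r, eps)) * phi(z - sigma_norm(d, eps))
```

With a = b the potential vanishes exactly at the desired spacing, which is what makes the sum over neighbors act as a spring-like anti-collision force.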
(2) Acquiring the virtual robots generated by projecting the ith robot onto the surfaces of the detected obstacles, and calculating the obstacle-avoidance control component $u_i^{\beta}$ generated as the ith robot moves among these virtual robots:

$$u_i^{\beta}=c_1^{\beta}\sum_{k\in N_i^{\beta}}\phi_{\beta}\left(\left\|\hat{q}_{i,k}-q_i\right\|_{\sigma}\right)\hat{n}_{i,k}+c_2^{\beta}\sum_{k\in N_i^{\beta}}b_{i,k}\left(\hat{p}_{i,k}-p_i\right)$$

wherein $c_1^{\beta}$ and $c_2^{\beta}$ are regulation parameters, $N_i^{\beta}$ is the set of obstacles detected by the ith robot, $\hat{q}_{i,k}$ is the coordinate position of the virtual robot generated on the surface of the kth obstacle in set $N_i^{\beta}$, $\hat{n}_{i,k}$ is the motion vector from the ith robot to the virtual robot generated on the surface of the kth obstacle, $b_{i,k}$ is the adjacency coefficient between the ith robot and the kth obstacle in set $N_i^{\beta}$, and $\hat{p}_{i,k}$ is the velocity value of the virtual robot generated on the surface of the kth obstacle in set $N_i^{\beta}$;
The above parameters are further expanded as follows:

A. smooth potential function:

$$\phi_{\beta}(z)=\rho_h\left(z/\|d\|_{\sigma}\right)\left[\sigma_1\left(z-\|d\|_{\sigma}\right)-1\right]$$

B. motion vector:

$$\hat{n}_{i,k}=\frac{\hat{q}_{i,k}-q_i}{\sqrt{1+\varepsilon\left\|\hat{q}_{i,k}-q_i\right\|^{2}}}$$

C. adjacency coefficient:

$$b_{i,k}=\rho_h\left(\left\|\hat{q}_{i,k}-q_i\right\|_{\sigma}/\|d\|_{\sigma}\right)$$

wherein d is the safe distance from any robot to an obstacle.
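A companion Python sketch of the obstacle-avoidance component. The spherical-obstacle projection used to generate the virtual robot follows the standard Olfati-Saber β-agent construction (the μ-projection); this is an assumption, since the patent text does not spell out how the projection is computed, and the constants are illustrative.

```python
import numpy as np

EPS = 0.1      # sigma-norm parameter epsilon (assumed)
D_SAFE = 4.0   # safe distance d from a robot to an obstacle (assumed)
H = 0.9        # bump-function critical value h (assumed)

def sigma_norm(x):
    """Sigma-norm of a scalar distance."""
    return (np.sqrt(1.0 + EPS * x * x) - 1.0) / EPS

def bump(z):
    """Scalar bump function rho_h."""
    if 0.0 <= z < H:
        return 1.0
    if H <= z <= 1.0:
        return 0.5 * (1.0 + np.cos(np.pi * (z - H) / (1.0 - H)))
    return 0.0

def sigma1(z):
    return z / np.sqrt(1.0 + z * z)

def phi_beta(z):
    """Purely repulsive potential gradient for virtual (obstacle) robots."""
    d_s = sigma_norm(D_SAFE)
    return bump(z / d_s) * (sigma1(z - d_s) - 1.0)

def project_on_sphere(q_i, p_i, center, radius):
    """Virtual robot: projection of robot i onto a spherical obstacle surface."""
    delta = q_i - center
    mu = radius / np.linalg.norm(delta)
    a = delta / np.linalg.norm(delta)
    P = np.eye(len(q_i)) - np.outer(a, a)      # projector onto the tangent plane
    return mu * q_i + (1.0 - mu) * center, mu * (P @ p_i)

def u_beta(q_i, p_i, obstacles, c1=1.0, c2=1.0):
    """Obstacle-avoidance component over detected (center, radius) obstacles."""
    u = np.zeros_like(q_i)
    for center, radius in obstacles:
        q_hat, p_hat = project_on_sphere(q_i, p_i, np.asarray(center), radius)
        dist = np.linalg.norm(q_hat - q_i)
        n_hat = (q_hat - q_i) / np.sqrt(1.0 + EPS * dist**2)
        b_ik = bump(sigma_norm(dist) / sigma_norm(D_SAFE))
        u += c1 * phi_beta(sigma_norm(dist)) * n_hat + c2 * b_ik * (p_hat - p_i)
    return u
```

The virtual robot always lies on the obstacle surface, and a robot inside the safe distance receives an acceleration directed away from the obstacle.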
(3) Creating a virtual target robot at the trapping control point of the ith robot, and calculating the motion control component $u_i^{\gamma}$ driving the ith robot toward the virtual target robot:

$$u_i^{\gamma}=-c_1^{\gamma}\,\sigma_1\left(q_i-q_{\gamma}\right)-c_2^{\gamma}\left(p_i-p_{\lambda}\right)$$

wherein $c_1^{\gamma}$ and $c_2^{\gamma}$ are regulation parameters, $q_{\gamma}$ is the coordinate position of the virtual target robot, and $p_{\lambda}$ is the velocity value of the virtual target robot;
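The motion control component is simple enough to sketch directly. The default gains here stand in for the regulation parameters $c_1^{\gamma}$, $c_2^{\gamma}$, which the patent leaves unspecified.

```python
import numpy as np

def sigma1_vec(z):
    """Vector-valued sigma_1(z) = z / sqrt(1 + ||z||^2)."""
    return z / np.sqrt(1.0 + np.dot(z, z))

def u_gamma(q_i, p_i, q_target, p_target, c1=1.0, c2=1.0):
    """Motion control component: damped attraction toward the virtual target
    robot placed at the robot's trapping control point."""
    return -c1 * sigma1_vec(q_i - q_target) - c2 * (p_i - p_target)
```

A robot left of its target is accelerated toward it, and the component vanishes once position and velocity match the virtual target.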
(4) Fusing the anti-collision control component, the obstacle-avoidance control component and the motion control component to obtain the global control component $u_i$ of the ith robot's movement toward the target (i.e., the total acceleration generated by the ith robot moving toward the target at the current time):

$$u_i=u_i^{\alpha}+u_i^{\beta}+u_i^{\gamma}$$
(5) Assigning i+1 to i, and repeating steps (1)-(4) until the global control component of movement toward the target has been obtained for every robot in the group.
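Steps (1)-(5) yield one acceleration $u_i$ per robot per control cycle. The surrounding update loop can be sketched as a double-integrator step; the toy attraction law below stands in for the fused $u_i$ (the real α/β/γ components are omitted here for brevity, so this only illustrates the integration scheme, not the patent's control law).

```python
import numpy as np

def step_swarm(q, p, control, dt=0.05):
    """One double-integrator update q'' = u_i per robot (semi-implicit Euler:
    velocities are updated first, then positions use the new velocities)."""
    u = np.array([control(i, q, p) for i in range(len(q))])
    p_next = p + dt * u
    q_next = q + dt * p_next
    return q_next, p_next

# Toy control law standing in for the fused u_i = u_alpha + u_beta + u_gamma:
# a damped attraction toward a fixed target point (illustration only).
target = np.array([5.0, 5.0])
def toy_control(i, q, p):
    return (target - q[i]) - p[i]

q = np.zeros((3, 2))   # three robots starting at the origin
p = np.zeros((3, 2))
for _ in range(400):
    q, p = step_swarm(q, p, toy_control)
```

Under this damped law the swarm settles at the target with near-zero velocity, which is the qualitative behaviour the fused controller is designed to produce at the trapping control points.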
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a central processor, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and may include any information delivery media.
While the preferred embodiment of the present invention has been described in detail, the present invention is not limited to the above embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present invention, and these equivalent modifications or substitutions are included in the scope of the present invention as defined in the appended claims.

Claims (5)

1. The group robot aggregation control method based on the three-layer gene regulation network is characterized by comprising the following steps of:
when the group robots search for the target in the unknown environment, detecting the information of the surrounding environment of the target by using a sensor on the group robots to obtain the relative distance between each robot and the target and the relative distance between each two robots;
the method comprises the steps of importing the relative distance between each robot and a target and the relative distance between each two robots into a creation layer for data fusion calculation based on a gene regulation network model to obtain a local coordinate system at the current moment;
invoking coordinate information in a local coordinate system of all individuals in an unknown environment at the previous moment to perform coordinate conversion, and acquiring all individual coordinate information in the local coordinate system at the current moment, wherein the coordinate information comprises target coordinate information, obstacle coordinate information and coordinate information of each robot;
importing the target coordinate information and the obstacle coordinate information into a forming layer to perform morphological gradient extraction, obtaining the current trapping morphology of the group robots and, from it, the trapping control point of each robot;
the coordinate information of each robot and the corresponding trapping control point are imported into a control layer, and the group robots are guided to move towards the target for trapping by using a cluster control algorithm;
the step of introducing the relative distance between each robot and the target and the relative distance between each two robots into the creation layer to perform data fusion calculation, and the step of obtaining the local coordinate system at the current moment comprises the following steps:
screening one dominant robot closest to the target from the group of robots according to the relative distance between each robot and the target, and marking the position of the dominant robot as a coordinate origin;
according to the relative distance between every two robots, selecting one adjacent robot closest to the dominant robot from other robots, and marking a straight line formed between the position of the dominant robot and the position of the adjacent robot as an X axis;
selecting two auxiliary robots within the common communication range of the dominant robot and the adjacent robot, and creating two planes with the X axis as their line of intersection, so that the two auxiliary robots respectively fall in the two planes;
determining the coordinate information of the four robots according to the relative distance between each two of the dominant robot, the adjacent robot and the two auxiliary robots, combined with the included angle between the two planes, so as to obtain the local coordinate system at the current moment;
the step of calling the coordinate information of all individuals in the unknown environment in the local coordinate system at the last moment to perform coordinate conversion, and the step of obtaining the coordinate information of all individuals in the local coordinate system at the current moment includes:
constructing, according to the change of the relative position between the dominant robot and one of the auxiliary robots, a coordinate rotation matrix between the local coordinate system at the current moment and the local coordinate system at the previous moment;
combining the coordinate information of all individuals in the local coordinate system at the previous moment and the coordinate rotation matrix, and calculating the coordinate information of all individuals in the local coordinate system at the current moment;
the guiding the group robots to move and capture towards the target by using the cluster control algorithm comprises the steps of:
counting, starting from i=1, all robots falling within the communication range of the ith robot, and calculating the anti-collision control component of the ith robot as it moves within the cluster;
acquiring a plurality of virtual robots generated by the projection of the ith robot on the surface of the detected plurality of obstacles, and calculating obstacle avoidance control components when the ith robot moves among the plurality of virtual robots;
creating a virtual target robot at the trapping control point of the ith robot, and calculating a motion control component of the ith robot moving to the virtual target robot;
fusing the anti-collision control component, the obstacle avoidance control component and the motion control component to obtain a global control component of the movement of the ith robot towards the target;
assigning i+1 to i, and repeating the steps until global control components of movement of each robot in the group of robots towards the target are obtained.
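The four-robot frame construction recited in claim 1 amounts to trilateration from pairwise distances. The sketch below assumes a 3-D setting in which the dihedral angle between the two planes is recovered from the distance between the two auxiliary robots; the claim only says the distances are "combined with the included angle", so this solving step is an illustrative assumption.

```python
import numpy as np

def local_frame(d_ab, d_ac, d_bc, d_ad, d_bd, d_cd):
    """Recover 3-D coordinates of dominant robot A, adjacent robot B and
    auxiliary robots C, D from their pairwise distances: A at the origin,
    B on the X axis, C in the XY half-plane (y > 0), and D in a second
    plane whose dihedral angle about the X axis is solved from d_cd."""
    A = np.zeros(3)
    B = np.array([d_ab, 0.0, 0.0])
    # C: planar trilateration from A and B
    xc = (d_ab**2 + d_ac**2 - d_bc**2) / (2.0 * d_ab)
    C = np.array([xc, np.sqrt(d_ac**2 - xc**2), 0.0])
    # D: same X-coordinate construction, off-axis radius r_d
    xd = (d_ab**2 + d_ad**2 - d_bd**2) / (2.0 * d_ab)
    rd = np.sqrt(d_ad**2 - xd**2)
    # dihedral angle between the two planes, solved from the C-D distance
    cos_t = ((xd - C[0])**2 + rd**2 + C[1]**2 - d_cd**2) / (2.0 * C[1] * rd)
    t = np.arccos(np.clip(cos_t, -1.0, 1.0))
    D = np.array([xd, rd * np.cos(t), rd * np.sin(t)])
    return A, B, C, D
```

Note the inherent mirror ambiguity: distances alone cannot distinguish D from its reflection through the XY plane, so the sketch fixes the z > 0 branch by convention.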
2. The method for controlling aggregation of group robots based on a three-layer gene regulation network according to claim 1, wherein the constructing a coordinate rotation matrix between the local coordinate system at the current moment and the local coordinate system at the previous moment according to the change of the relative position between the dominant robot and one of the auxiliary robots comprises:
acquiring a coordinate difference matrix between a dominant robot and one of auxiliary robots in a local coordinate system at the previous moment, and marking the coordinate difference matrix as a first difference matrix;
acquiring a coordinate difference matrix between the dominant robot and the auxiliary robot in a local coordinate system at the current moment, and marking the coordinate difference matrix as a second difference matrix;
and determining a quaternion according to the geometric relationship between the first difference matrix and the second difference matrix, and constructing a coordinate rotation matrix between the local coordinate system at the current moment and the local coordinate system at the last moment by using the quaternion.
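Claims 2 and 3 build the rotation matrix from a quaternion relating the two difference vectors. Since the claim's own quaternion formula sits behind an image, the sketch below uses the standard half-angle construction with scalar-first ordering [w, x, y, z]; both the ordering and the choice of perpendicular axis in the anti-parallel case are assumptions.

```python
import numpy as np

def quat_between(v1, v2):
    """Unit quaternion [w, x, y, z] rotating the direction of v1 onto v2
    (half-angle construction). The anti-parallel case -- line_AD + line_A'D'
    = 0 in the claim -- gets a 180-degree turn about an axis perpendicular
    to v1, chosen arbitrarily."""
    a = v1 / np.linalg.norm(v1)
    b = v2 / np.linalg.norm(v2)
    if np.allclose(a + b, 0.0):
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.allclose(axis, 0.0):          # v1 was along the X axis
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis = axis / np.linalg.norm(axis)
        return np.concatenate([[0.0], axis])
    u = np.cross(a, b)
    n = np.linalg.norm(u)
    if n > 0.0:
        u = u / n                           # normalized rotation axis
    theta = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return np.concatenate([[np.cos(theta / 2.0)], np.sin(theta / 2.0) * u])

def quat_to_rot(q):
    """Rotation matrix of a unit quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```

Applying `quat_to_rot(quat_between(v1, v2))` to the direction of `v1` yields the direction of `v2`, which is exactly the frame alignment the claim uses between consecutive time steps.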
3. The method of claim 2, wherein determining a quaternion based on a geometric relationship between the first difference matrix and the second difference matrix comprises:
when $line_{AD}+line_{A'D'}=0$, a quaternion is determined as: $q=[0,0]$;

or, when $line_{AD}+line_{A'D'}\neq 0$, a quaternion is determined as:

$$q=\left[\cos\frac{\theta}{2},\;u\sin\frac{\theta}{2}\right]$$

wherein $u$ is the normalized value of $(line_{AD}\times line_{A'D'})$, $\theta$ is the angle between $line_{AD}$ and $line_{A'D'}$, $line_{A'D'}$ is the first difference matrix, and $line_{AD}$ is the second difference matrix.
4. The group robot aggregation control method based on the three-layer gene regulation network according to claim 1, wherein the calculation formula of the coordinate information of all individuals in the local coordinate system at the current moment is:
$$P_X=line_{A'X'}\cdot R$$

wherein $P_X$ is the coordinate information of the individual X in the local coordinate system at the current moment, $line_{A'X'}$ is the coordinate difference matrix between the dominant robot A and the individual X in the local coordinate system at the previous moment, the individual X is a target, an obstacle, or a robot other than the dominant robot, the adjacent robot and the two auxiliary robots, and R is the coordinate rotation matrix.
5. The method for controlling aggregation of group robots based on a three-layer gene regulation network according to claim 1, wherein the calculation formula of the global control component of the movement of the ith robot toward the target is:
$$u_i=c_1^{\alpha}\sum_{j\in N_i}\phi_{\alpha}\left(\left\|q_j-q_i\right\|_{\sigma}\right)n_{i,j}+c_2^{\alpha}\sum_{j\in N_i}a_{i,j}\left(p_j-p_i\right)+c_1^{\beta}\sum_{k\in N_i^{\beta}}\phi_{\beta}\left(\left\|\hat{q}_{i,k}-q_i\right\|_{\sigma}\right)\hat{n}_{i,k}+c_2^{\beta}\sum_{k\in N_i^{\beta}}b_{i,k}\left(\hat{p}_{i,k}-p_i\right)-c_1^{\gamma}\,\sigma_1\left(q_i-q_{\gamma}\right)-c_2^{\gamma}\left(p_i-p_{\lambda}\right)$$

wherein $u_i$ is the global control component; $c_1^{\alpha}$, $c_2^{\alpha}$, $c_1^{\beta}$, $c_2^{\beta}$, $c_1^{\gamma}$ and $c_2^{\gamma}$ are all regulation parameters; $N_i$ is the set of all robots falling within the communication range of the ith robot; $N_i^{\beta}$ is the set of obstacles detected by the ith robot; $q_j$ is the coordinate position of the jth robot in set $N_i$; $q_i$ is the coordinate position of the ith robot; $n_{i,j}$ is the motion vector from the ith robot to the jth robot; $p_j$ is the velocity value of the jth robot in set $N_i$; $p_i$ is the velocity value of the ith robot; $\phi_{\alpha}(\cdot)$ and $\phi_{\beta}(\cdot)$ are smooth potential functions; $\sigma_1(z)=z/\sqrt{1+z^{2}}$; $\|\cdot\|_{\sigma}$ is the $\sigma$-norm; $\hat{q}_{i,k}$ is the coordinate position of the virtual robot generated on the surface of the kth obstacle in set $N_i^{\beta}$; $\hat{p}_{i,k}$ is the velocity value of that virtual robot; $\hat{n}_{i,k}$ is the motion vector from the ith robot to that virtual robot; $q_{\gamma}$ is the coordinate position of the virtual target robot; $p_{\lambda}$ is the velocity value of the virtual target robot; $a_{i,j}$ is the adjacency coefficient between the ith robot and the jth robot in set $N_i$; and $b_{i,k}$ is the adjacency coefficient between the ith robot and the kth obstacle in set $N_i^{\beta}$.
CN202111571098.2A 2021-12-21 2021-12-21 Group robot aggregation control method based on three-layer gene regulation network Active CN114397887B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111571098.2A CN114397887B (en) 2021-12-21 2021-12-21 Group robot aggregation control method based on three-layer gene regulation network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111571098.2A CN114397887B (en) 2021-12-21 2021-12-21 Group robot aggregation control method based on three-layer gene regulation network

Publications (2)

Publication Number Publication Date
CN114397887A CN114397887A (en) 2022-04-26
CN114397887B true CN114397887B (en) 2023-06-06

Family

ID=81227571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111571098.2A Active CN114397887B (en) 2021-12-21 2021-12-21 Group robot aggregation control method based on three-layer gene regulation network

Country Status (1)

Country Link
CN (1) CN114397887B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115150784B (en) * 2022-09-02 2022-12-06 汕头大学 Unmanned aerial vehicle cluster area coverage method and device based on gene regulation and control network
CN116339351B (en) * 2023-05-29 2023-09-01 汕头大学 Gene regulation network-based intelligent agent cluster area coverage method and system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108415425A (en) * 2018-02-08 2018-08-17 东华大学 It is a kind of that swarm algorithm is cooperateed with based on the Distributed Cluster robot for improving gene regulatory network
CN110262566A (en) * 2019-06-24 2019-09-20 中国人民解放军国防科技大学 Collaboration-based gene regulation method and network
WO2020058732A1 (en) * 2018-09-21 2020-03-26 Cambridge Enterprise Limited Polarised three-dimensional cellular aggregates
CN112462779A (en) * 2020-11-30 2021-03-09 汕头大学 Group robot dynamic capture control method and system based on gene regulation network
CN112527012A (en) * 2020-11-30 2021-03-19 汕头大学 Method and system for controlling cluster surrounding tasks of centerless robot
CN112684700A (en) * 2020-11-30 2021-04-20 汕头大学 Multi-target searching and trapping control method and system for swarm robots
CN113172626A (en) * 2021-04-30 2021-07-27 汕头大学 Intelligent robot group control method based on three-dimensional gene regulation and control network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019204670A2 (en) * 2018-04-18 2019-10-24 2Key New Economics Ltd. Decentralized protocol for maintaining cryptographically proven multi-step referral networks

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108415425A (en) * 2018-02-08 2018-08-17 东华大学 It is a kind of that swarm algorithm is cooperateed with based on the Distributed Cluster robot for improving gene regulatory network
WO2020058732A1 (en) * 2018-09-21 2020-03-26 Cambridge Enterprise Limited Polarised three-dimensional cellular aggregates
CN110262566A (en) * 2019-06-24 2019-09-20 中国人民解放军国防科技大学 Collaboration-based gene regulation method and network
CN112462779A (en) * 2020-11-30 2021-03-09 汕头大学 Group robot dynamic capture control method and system based on gene regulation network
CN112527012A (en) * 2020-11-30 2021-03-19 汕头大学 Method and system for controlling cluster surrounding tasks of centerless robot
CN112684700A (en) * 2020-11-30 2021-04-20 汕头大学 Multi-target searching and trapping control method and system for swarm robots
CN113172626A (en) * 2021-04-30 2021-07-27 汕头大学 Intelligent robot group control method based on three-dimensional gene regulation and control network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Online Planning-based Gene Regulatory Network for Swarm in Constrained Environment; 2021 7th International Conference on Big Data and Information Analytics (BigDIA); pp. 464-471 *
Pattern Formation in Constrained Environments: A Swarm Robot Target Trapping Method; Xingguang Peng et al.; 2016 International Conference on Advanced Robotics and Mechatronics (ICARM); pp. 455-460 *
Automatic generation method of swarm patterns based on cooperative autonomous localization; Fan Zhun et al.; Journal of Shantou University (Natural Science Edition); pp. 14-28 *
Dynamic generation method of swarm aggregation morphology based on online adjustment; Fan Zhun et al.; Fluid Measurement and Control; pp. 1-8 *
Cooperative control of swarm robots based on bio-inspired intelligence algorithms; Yang Bin; China Doctoral Dissertations Full-text Database, Information Science and Technology; pp. I140-135 *

Also Published As

Publication number Publication date
CN114397887A (en) 2022-04-26

Similar Documents

Publication Publication Date Title
CN114397887B (en) Group robot aggregation control method based on three-layer gene regulation network
CN107179768B (en) Obstacle identification method and device
Hoppe et al. Photogrammetric camera network design for micro aerial vehicles
CN108983823B (en) Plant protection unmanned aerial vehicle cluster cooperative control method
TW201732739A (en) Object-focused active three-dimensional reconstruction
Weon et al. Object Recognition based interpolation with 3d lidar and vision for autonomous driving of an intelligent vehicle
CN106780484A (en) Robot interframe position and orientation estimation method based on convolutional neural networks Feature Descriptor
CN113345008B (en) Laser radar dynamic obstacle detection method considering wheel type robot position and posture estimation
CN113172626B (en) Intelligent robot group control method based on three-dimensional gene regulation and control network
US20200300639A1 (en) Mobile robots to generate reference maps for localization
CN111766783B (en) Cluster system-oriented formation enclosure tracking method capable of converging in limited time
CN111457923B (en) Path planning method, device and storage medium
Liu et al. 2D object localization based point pair feature for pose estimation
Silva et al. Monocular trail detection and tracking aided by visual SLAM for small unmanned aerial vehicles
Cocoma-Ortega et al. Towards high-speed localisation for autonomous drone racing
Gómez-Huélamo et al. Real-time bird’s eye view multi-object tracking system based on fast encoders for object detection
CN110749325B (en) Flight path planning method and device
Angeli et al. 2d simultaneous localization and mapping for micro air vehicles
Xu et al. A vision-only relative distance calculation method for multi-UAV systems
KR101107735B1 (en) Camera pose decision method
CN115629600B (en) Multi-machine collaborative trapping method based on buffer Wino diagram in complex dynamic security environment
Sahdev Free space estimation using occupancy grids and dynamic object detection
WO2021131990A1 (en) Information processing device, information processing method, and program
Al-Shanoon et al. Deepnet-based 3d visual servoing robotic manipulation
Teng et al. Research of 6‐DOF pose estimation in stacked scenes

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant