CN113110517A - Multi-robot collaborative search method based on biological elicitation in unknown environment - Google Patents

Multi-robot collaborative search method based on biological elicitation in unknown environment

Info

Publication number
CN113110517A
CN113110517A
Authority
CN
China
Prior art keywords
neural network
robot
grid
control input
dimensional biological
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110564769.6A
Other languages
Chinese (zh)
Other versions
CN113110517B (en)
Inventor
张方方
陈波
曹家晖
张文丽
赵鹏博
彭金柱
辛健斌
王东署
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou University
Original Assignee
Zhengzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou University filed Critical Zhengzhou University
Priority to CN202110564769.6A priority Critical patent/CN113110517B/en
Publication of CN113110517A publication Critical patent/CN113110517A/en
Application granted granted Critical
Publication of CN113110517B publication Critical patent/CN113110517B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0242Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0225Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving docking at a fixed facility, e.g. base station or loading bay
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0255Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultrasonic signals
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Acoustics & Sound (AREA)
  • Electromagnetism (AREA)
  • Manipulator (AREA)
  • Feedback Control In General (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a multi-robot collaborative search method based on biological elicitation (bio-inspiration) in an unknown environment. S1, the whole multi-robot group is regarded as one system, denoted MRS, and each robot is regarded as a subsystem, denoted RS. S2, a grid map is established, in which each grid has three states: a target is present, neither an obstacle nor a target is present, or an obstacle is present; each robot acquires the surrounding environment information with its on-board sensors and updates the state of the grid map. S3, a two-dimensional bio-inspired neural network is established on the basis of the grid map; each neuron corresponds to one grid and has an associated neuron activity value. S4, the two-dimensional bio-inspired neural network is combined with the state of the grid map. S5, the neuron activity values and the number of RS movement steps are initialized. S6, an iterative cooperative decision is made among the RSs to determine the grid to which each RS moves next. Through the iterative cooperative decision within the MRS, the invention ensures that the RSs do not collide with one another and greatly improves the cooperative performance among the robots.

Description

Multi-robot collaborative search method based on biological elicitation in unknown environment
Technical Field
The invention relates to a robot area coverage search method, and in particular to a multi-robot collaborative search method based on biological elicitation in an unknown environment.
Background
With the development of robotics, mobile robots have come to replace humans in certain specific tasks, and area coverage search is an important one of them. The problem of area coverage search in an unknown environment arises widely in fields such as unmanned aerial vehicle reconnaissance and the search for trapped personnel after disasters. Here, "unknown environment" means that the distribution of search targets and obstacles in the task search area is unknown, while the boundary of the search area is known. Compared with a single robot, which is limited by its individual working capability, multi-robot systems (MRS) offer a high degree of parallelism, robustness and cooperation when performing area coverage search tasks, and are a hot spot of current research.
When an MRS executes an area coverage search task in an unknown environment, on the one hand all robots are required to cooperate with one another to acquire environment information and to search the task area with the maximum coverage rate; on the other hand, the following constraints apply: (1) the detection range of the sensors carried by a single robot is very limited relative to the size of the task search area; (2) the robots have no prior information about the environment, and obstacles and targets can be found only when they appear within the detection range of the sensors carried by the robots; (3) the robots must be able to avoid obstacles and avoid collisions between robots in real time. Therefore, each robot has to decide its next search path in real time according to the updated environment information.
At present, existing robot search environment models do not take obstacle factors into account, the cooperation among multiple robots during coverage search is poor, and the search easily falls into local optima in the later search stage.
Disclosure of Invention
The invention aims to provide a multi-robot collaborative search method based on biological elicitation in an unknown environment, so that the whole task search area can be covered quickly and completely.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention discloses a multi-robot collaborative search method based on biological elicitation in an unknown environment, which comprises the following steps:
S1, the whole multi-robot group is regarded as one system, denoted MRS; each robot is regarded as a subsystem, denoted RS;
S2, a grid map is first established by dividing the task search area into grids of equal area; each of the grids has three states: a target is present, neither an obstacle nor a target is present, or an obstacle is present; each robot acquires the surrounding environment information with its on-board sensors (an ultrasonic sensor detects obstacles and an infrared sensor detects targets) and updates the state of the grid map;
S3, a two-dimensional bio-inspired neural network is established on the basis of the grid map; each neuron in the network corresponds to one grid and has an associated neuron activity value, whose magnitude depends on the external stimulus signal;
S4, the two-dimensional bio-inspired neural network is combined with the state of the grid map, namely: the external stimulus signal corresponding to a grid neuron whose grid contains a target is an excitation signal, and the external stimulus signal corresponding to a grid neuron whose grid contains an obstacle is an inhibition signal;
S5, the neuron activity values and the number of RS movement steps are initialized; each RS updates the activity values of the neurons corresponding to the grids within its detection range according to the states of those grids, while the activity values of the grid neurons outside the detection range remain unchanged;
S6, after the update is finished, an iterative cooperative decision is made among the RSs to determine the grid to which each RS moves next.
In S6, the steps of determining the grid to which each RS moves next are:
S6.1, the MRS iterative decision order is determined: the first robot to make a decision is denoted R1, the second robot to make a decision is denoted R2, and so on; the last robot of the iteration is denoted Rn;
S6.2, R1 makes its decision: a DMPC (distributed model predictive control) method is introduced for the decision. Specifically, R1 predicts its position states over the next N future steps and, based on the current two-dimensional bio-inspired neural network, obtains an N-step cumulative search performance function; the genetic algorithm toolbox built into MATLAB is used to maximize this search performance function by optimization, thereby obtaining the optimal movement-direction control inputs for the N predicted future steps. R1 then copies the current state of the two-dimensional bio-inspired neural network, updates the copy according to its N-step predicted control inputs to obtain a virtual two-dimensional bio-inspired neural network used for decision making, and sends it to R2. Here u1(k) denotes the control input of R1 at the current step k, and u1(k+j|k), j = 1, ···, N−1, denote the predicted control inputs of R1 for the following steps;
S6.3, the intermediate robots make their iterative decisions: after R2 receives the virtual two-dimensional bio-inspired neural network sent by R1, it solves, by optimization on the virtual network, its own optimal movement-direction control inputs for the N predicted steps, updates the received virtual two-dimensional bio-inspired neural network according to its own N-step predicted control inputs to obtain a new virtual two-dimensional bio-inspired neural network, and sends it to R3; the iteration continues until Rn has finished its decision. The control inputs of each robot Ri are denoted analogously: ui(k) for the current step k and ui(k+j|k) for the predicted steps;
S6.4, the state of the two-dimensional bio-inspired neural network is updated: each RS executes the first step of its solved N-step predicted motion control inputs, moves to the corresponding grid at step k+1, and the two-dimensional bio-inspired neural network is updated;
S6.5, if neither the area coverage rate nor the maximum number of RS movement steps has reached the set threshold, the procedure returns to S6.1; otherwise, the MRS search process ends.
The invention has the advantage that, by introducing the DMPC method, the MRS (multi-robot system) is prevented from falling into a local "dead zone" in the later search stage and becoming unable to keep exploring unsearched areas. Meanwhile, through the iterative cooperative decision within the MRS, the individual RSs (single-robot subsystems) are guaranteed not to collide with each other and to avoid obstacles effectively, the area coverage rate is maximized, repeated searches of the same area are reduced, and the cooperative performance among the robots is greatly improved.
Drawings
FIG. 1 is a flow chart of the multi-robot collaborative search method of the present invention.
FIG. 2 is a diagram of the two-dimensional bio-inspired neural network of the present invention; in the figure, each node represents one neuron of the network, r represents the radius of influence of a neuron, and wij represents the connection weight coefficient between a neuron i and an adjacent neuron j.
FIG. 3 is a schematic diagram of the 8 optional movement directions of the robot at different positions; in the figure, 1-8 respectively indicate the possible movement directions of the robot.
FIG. 4 is an example of the MRS motion trajectories in a 20 × 20 grid experimental region according to the present invention.
FIG. 5 is a graph comparing the average coverage rate curves of the method of the present invention and a search method planned using a gradient-descent principle.
Detailed Description
The following describes embodiments of the present invention in detail with reference to the drawings, which are implemented on the premise of the technical solution of the present invention, and detailed embodiments and specific operation procedures are provided, but the scope of the present invention is not limited to the following embodiments.
As shown in FIG. 1, the multi-robot collaborative search method based on biological elicitation in an unknown environment according to the invention comprises the following steps:
S1, the whole multi-robot group is regarded as one system, denoted MRS, and each robot is regarded as a subsystem, denoted RS; the i-th robot subsystem is denoted Ri, i = 1, 2, ···, n, where n is the total number of robots.
First, a robot motion model and a prediction state-space model are established.
Motion model: the motion model of the robot is given by formula (1). To simplify the motion model, each robot Ri may select at most 8 movement directions, as shown in FIG. 3, with an angle of 45 degrees between adjacent directions; formula (1) maps the selected movement direction to the next position of the robot.
Prediction state-space model: the movement direction appearing in formula (1) is selected as the control input ui of the subsystem. If the state of Ri at step k is xi(k), the state equation of the MRS subsystem is expressed as formula (2):
xi(k+1) = fi(xi(k), ui(k)),   (2)
where fi, the state transfer function of the MRS subsystem, is determined by formula (1). A prediction model of subsystem Ri is then established from formula (2) as formula (3):
xi(k+j+1|k) = fi(xi(k+j|k), ui(k+j|k)),  j = 0, 1, ···, N−1,  i = 1, 2, ···, n,   (3)
where n is the total number of RSs and N is the number of prediction steps; xi(k+j+1|k) is the state of the next step predicted by Ri from the state xi(k+j|k) and the control input ui(k+j|k) of step k+j.
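For concreteness, the sketch below shows what a grid-based state transfer function and the N-step rollout of formulas (1)-(3) could look like in Python; the direction encoding, the single-grid step size, and the function names are assumptions made for illustration, not details taken from the patent.

```python
# Eight admissible headings, 45 degrees apart, encoded as grid offsets (assumed encoding).
DIRECTIONS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def step(state, u):
    """State transfer function f_i of formula (2): move one grid in direction u (0..7)."""
    dx, dy = DIRECTIONS[u]
    return (state[0] + dx, state[1] + dy)

def predict(state, controls):
    """Prediction model of formula (3): roll the state forward over an N-step control sequence."""
    trajectory = [state]
    for u in controls:
        state = step(state, u)
        trajectory.append(state)
    return trajectory

# Example: a robot at grid (5, 5) predicts three future steps.
print(predict((5, 5), [0, 1, 1]))  # [(5, 5), (6, 5), (7, 6), (8, 7)]
```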
S2, a grid map is first established by dividing the task search area into grids of equal area, and a two-dimensional bio-inspired neural network is then built on it; the structure of the network is shown in FIG. 2. Each grid has three states: a target is present, neither an obstacle nor a target is present, or an obstacle is present. Each robot detects obstacles with its on-board ultrasonic sensor and targets with its infrared sensor, acquires the surrounding environment information, and updates the state of the grid map.
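A minimal sketch of the three-state grid map and the sensor-driven update of S2 follows; the enum names and the update interface are illustrative assumptions, since the patent only specifies the three cell states and the two sensor types.

```python
from enum import Enum

class CellState(Enum):
    FREE = 0      # neither an obstacle nor a target
    TARGET = 1    # a target is present
    OBSTACLE = 2  # an obstacle is present

class GridMap:
    def __init__(self, rows, cols):
        self.state = [[CellState.FREE] * cols for _ in range(rows)]

    def update_from_sensors(self, detections):
        """Write cells reported by the ultrasonic (obstacle) and infrared (target) sensors."""
        for (r, c), s in detections.items():
            self.state[r][c] = s

# Example: a robot reports an obstacle at (3, 4) and a target at (3, 6).
grid = GridMap(20, 20)
grid.update_from_sensors({(3, 4): CellState.OBSTACLE, (3, 6): CellState.TARGET})
```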
S3, a two-dimensional bio-inspired neural network is established on the basis of the grid map; each neuron in the network corresponds to one grid and has an associated neuron activity value, whose magnitude depends on the external stimulus signal. The activity value xi of the i-th neuron is updated according to formula (4):
dxi/dt = −A·xi + (B − xi)·([Ii]+ + Σj wij·[xj]+) − (D + xi)·[Ii]−,   (4)
where xi denotes the activity value of the i-th neuron in the network and Ii denotes the external stimulus signal received by the i-th neuron; [Ii]+ = max(Ii, 0) represents the excitation signal and [Ii]− = max(−Ii, 0) represents the inhibition signal; xj denotes the activity value of a neuron adjacent to neuron i, and the sum is taken over the adjacent neurons; A, B and D are positive constants, A represents the decay rate of xi, and B and D are the upper and lower bounds of xi; wij denotes the connection weight coefficient between neuron i and the adjacent neuron j and is defined by formula (5):
wij = μ / |qi − qj|  if 0 < |qi − qj| < r,  and  wij = 0 otherwise,   (5)
where |qi − qj| denotes the Euclidean distance between the vectors qi and qj in the state space, r is the radius of influence of a neuron shown in FIG. 2, and μ and r are both positive constants.
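The shunting update of formulas (4) and (5) can be sketched with a simple Euler discretization as below; the constants A, B, D, μ, r and the time step are assumed placeholder values, and the neuron positions are taken to be their grid coordinates.

```python
import math

A, B, D = 1.0, 1.0, 1.0   # decay rate and upper/lower activity bounds (assumed values)
MU, R = 1.0, 2.0          # weight gain and radius of influence (assumed values)
DT = 0.1                  # Euler time step (assumption; formula (4) is continuous-time)

def weight(qi, qj):
    """Connection weight w_ij of formula (5): MU / distance inside the radius of influence, else 0."""
    d = math.dist(qi, qj)
    return MU / d if 0 < d < R else 0.0

def shunting_step(x, stimulus, positions):
    """One Euler step of formula (4) for every neuron; x and stimulus map neuron id -> value."""
    new_x = {}
    for i, qi in positions.items():
        excite = max(stimulus[i], 0.0) + sum(
            weight(qi, positions[j]) * max(x[j], 0.0) for j in positions if j != i)
        inhibit = max(-stimulus[i], 0.0)
        dx = -A * x[i] + (B - x[i]) * excite - (D + x[i]) * inhibit
        new_x[i] = x[i] + DT * dx
    return new_x
```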
S4, combining the two-dimensional biological heuristic neural network with the state of the grid map, namely: external stimulation signal corresponding to grid neuron with target
Figure 6804DEST_PATH_IMAGE136
For exciting the signal, external stimulating signals corresponding to the grid neurons with obstacles
Figure 997238DEST_PATH_IMAGE137
To suppress the signal;
the state combination method of the two-dimensional biological heuristic neural network and the grid map is shown as the formula (6);
Figure 423672DEST_PATH_IMAGE138
wherein:
Figure 98366DEST_PATH_IMAGE139
to represent the first in a grid map
Figure 231539DEST_PATH_IMAGE140
A plurality of grids, each grid being provided with a plurality of grids,
Figure 333487DEST_PATH_IMAGE141
is a sufficiently large positive constant;
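Formula (6) then reduces to a three-way assignment per grid; the numeric value of E below is only a stand-in for "sufficiently large", and CellState comes from the grid-map sketch above.

```python
E = 100.0  # "sufficiently large" positive constant (assumed value)

def external_stimulus(cell_state):
    """Formula (6): a target grid excites its neuron, an obstacle grid inhibits it, otherwise neutral."""
    if cell_state == CellState.TARGET:
        return E
    if cell_state == CellState.OBSTACLE:
        return -E
    return 0.0
```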
S5, the activity values of the two-dimensional bio-inspired neural network and the number of RS movement steps are initialized. Each RS then updates the activity values of the neurons corresponding to the grids within its detection range according to the states of those grids. The activity value of a grid neuron containing a target is the largest and the activity value of a grid neuron containing an obstacle is the smallest, so that the robots as a whole move towards regions with larger neuron activity values. Because the detection capability of the robot sensors is limited to a fixed number of grids around each robot, every robot only updates the neuron activity values within its own detection range, and the activity values of the grid neurons outside the detection range remain unchanged.
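The locality rule of S5, in which only grids inside a robot's detection range feed new stimuli into the network, might look as follows; the square detection window of half-width `radius` is an assumption about the sensor footprint.

```python
def update_local_stimuli(stimulus, grid, robot_pos, radius):
    """Refresh external stimuli only for grids within the robot's detection range (S5)."""
    r0, c0 = robot_pos
    rows, cols = len(grid.state), len(grid.state[0])
    for r in range(max(0, r0 - radius), min(rows, r0 + radius + 1)):
        for c in range(max(0, c0 - radius), min(cols, c0 + radius + 1)):
            stimulus[(r, c)] = external_stimulus(grid.state[r][c])
    return stimulus
```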
S6, after the update is finished, an iterative cooperative decision is made among the RSs to determine the grid to which each RS moves next, as follows:
S6.1, the MRS iterative decision order is determined: the first robot to make a decision is denoted R1, the second robot to make a decision is denoted R2, and so on, with the last robot of the iteration denoted Rn. In the present invention R1 is selected at random from the MRS, and each of R2, ···, Rn is the robot closest to the previous decision maker.
S6.2, R1 makes its decision: a DMPC (distributed model predictive control) method is introduced for the decision. Specifically, R1 predicts its position states over the next N future steps and, based on the current two-dimensional bio-inspired neural network, obtains an N-step cumulative search performance function J1. For an arbitrary robot Ri this function is defined by formula (7): Ji is the sum, over the N prediction steps, of a single-step search efficiency function evaluated along the trajectory that a candidate sequence of movement-direction control inputs (each chosen from the possible movement directions shown in FIG. 3) generates from the current position of Ri. The single-step search efficiency function is defined by formula (8) as a weighted combination of a neuron-activity-increment function and a turning-cost function, with weight coefficients taking values in a prescribed range. The neuron-activity-increment function is defined by formula (9) in terms of the grids covered by the RS at the position under consideration and the number of grids that the current position of the robot can cover. The turning-cost function is defined by formula (10) in terms of the turning angle of the RS relative to its maximum turning angle.
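Since formulas (7)-(10) are only referenced above, the sketch below fixes one concrete choice: the activity-increment term is the activity of the grid being entered, the turning cost is the heading change normalized by a maximum turn, and a single weight LAMBDA balances the two. All of these choices, and the reuse of `step` from the earlier prediction sketch, are assumptions.

```python
import math

LAMBDA = 0.8            # assumed weight between activity gain and turning cost
MAX_TURN = math.pi      # assumed maximum turning angle of an RS

def turning_cost(prev_dir, new_dir):
    """Turning-cost term in the spirit of formula (10): heading change over the maximum turn."""
    diff = abs(new_dir - prev_dir) % 8
    return (min(diff, 8 - diff) * math.pi / 4) / MAX_TURN  # adjacent directions differ by 45 degrees

def activity_gain(x, cell):
    """Activity-increment term in the spirit of formula (9): activity of the grid being entered."""
    return max(x.get(cell, 0.0), 0.0)

def cumulative_performance(x, start, start_dir, controls):
    """N-step cumulative search performance J of formula (7) for one candidate control sequence."""
    total, pos, prev_dir = 0.0, start, start_dir
    for u in controls:
        pos = step(pos, u)
        total += LAMBDA * activity_gain(x, pos) - (1 - LAMBDA) * turning_cost(prev_dir, u)
        prev_dir = u
    return total
```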
The genetic algorithm toolbox built into MATLAB (the commercial "matrix laboratory" mathematical software) is used to solve the optimization so that the search performance function J1 is maximized, thereby obtaining the optimal movement-direction control inputs for the N predicted future steps. R1 then copies the current state of the two-dimensional bio-inspired neural network, updates the copy according to its N-step predicted control inputs, obtains a virtual two-dimensional bio-inspired neural network used for decision making, and sends this virtual network to R2.
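The patent maximizes J with MATLAB's genetic algorithm toolbox; as a language-neutral stand-in, the sketch below simply enumerates all 8^N candidate direction sequences for a short horizon, which plays the same role for illustration.

```python
from itertools import product

def best_control_sequence(x, start, start_dir, horizon):
    """Exhaustively maximize the cumulative performance J over all N-step direction sequences."""
    best_u, best_j = None, float("-inf")
    for controls in product(range(8), repeat=horizon):
        j = cumulative_performance(x, start, start_dir, controls)
        if j > best_j:
            best_u, best_j = controls, j
    return best_u, best_j
```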
S6.3, performing iterative decision of the intermediate robot:
Figure 315829DEST_PATH_IMAGE216
receive to
Figure 306919DEST_PATH_IMAGE218
After the sent virtual two-dimensional biological heuristic neural network, similar to S6.2, the self QUOTE is solved based on the optimization of the two-dimensional virtual biological heuristic neural network
Figure DEST_PATH_IMAGE219
Figure 19135DEST_PATH_IMAGE219
Optimal direction of motion control input for a step
Figure DEST_PATH_IMAGE221
And on their own
Figure DEST_PATH_IMAGE223
Updating the received virtual two-dimensional biological inspiring neural network by the step prediction control input to obtain a new virtual two-dimensional biological inspiring neural network, and sending the new virtual two-dimensional biological inspiring neural network to the user
Figure DEST_PATH_IMAGE225
Iterate until
Figure DEST_PATH_IMAGE227
Finishing the decision;
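The sequential iteration of S6.2-S6.3 can be sketched as below: each robot optimizes on the virtual network left by its predecessors and then writes its own predicted trajectory into that network before passing it on. How exactly a predicted trajectory modifies the virtual network is not spelled out above, so the strong inhibition of planned cells used here is an assumption meant to keep the robots apart.

```python
def iterative_decision(x, robots, horizon):
    """One decision round; robots is the ordered list of (position, heading) fixed in S6.1."""
    virtual_x = dict(x)                 # virtual copy of the bio-inspired network
    plans = []
    for pos, heading in robots:
        controls, _ = best_control_sequence(virtual_x, pos, heading, horizon)
        plans.append(controls)
        # Assumed update rule: strongly inhibit the cells on the predicted trajectory so that
        # later robots avoid planning through them.
        for cell in predict(pos, controls)[1:]:
            virtual_x[cell] = -1.0
    return plans
```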
S6.4, the state of the two-dimensional bio-inspired neural network is updated: each RS executes the first control input of its solved N-step predicted motion control inputs, moves to the corresponding grid at step k+1, and the two-dimensional bio-inspired neural network is updated accordingly.
S6.5, if neither the area coverage rate nor the maximum number of robot movement steps has reached its set threshold, the procedure returns to S6.1; otherwise, the MRS search process ends.
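Finally, S6.4-S6.5 form a receding-horizon outer loop: only the first predicted control input of each plan is executed before the network is refreshed and a new round of decisions begins. The coverage bookkeeping and the termination thresholds below are assumptions.

```python
def search_loop(grid, x, robots, horizon, max_steps, target_coverage):
    """Run the cooperative search until coverage or the step budget reaches its threshold (S6.5)."""
    covered = {pos for pos, _ in robots}
    for _ in range(max_steps):
        plans = iterative_decision(x, robots, horizon)
        # S6.4: each RS executes only the first predicted control input, then the map/network updates.
        robots = [(step(pos, plan[0]), plan[0]) for (pos, _), plan in zip(robots, plans)]
        covered.update(pos for pos, _ in robots)
        if len(covered) / (len(grid.state) * len(grid.state[0])) >= target_coverage:
            break
    return robots, covered
```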
A specific example is given below.
As shown in FIG. 4, the experimental region is set to a 20 × 20 grid; the number of RSs, their detection range, the initial neuron activity value of 0.4 and the remaining model parameters are fixed for the experiment. A group of MRS starting positions is selected and the robots move for 45 steps; the resulting motion trajectories are shown in FIG. 4, where black grids represent obstacles, white grids represent searched areas, gray grids represent unsearched areas, circles represent the RS starting points, and pentagons represent the current RS positions.
As can be seen from FIG. 4, the method provided by the invention enables multiple robots to explore the unknown area effectively, with few repeated trajectories among the RSs and strong cooperative performance.
To further demonstrate the superiority of the invention, the proposed method is compared with a method that plans the search path using a gradient-descent principle: the two methods are each run in 100 Monte Carlo experiments under the same experimental conditions, and the average area coverage rate over the 100 experiments is computed; the resulting curves are compared in FIG. 5.

Claims (3)

1. A multi-robot collaborative search method based on biological elicitation in an unknown environment, characterized by comprising the following steps:
S1, the whole multi-robot group is regarded as one system, denoted MRS; each robot is regarded as a subsystem, denoted RS;
S2, a grid map is first established by dividing the task search area into grids of equal area; each of the grids has three states: a target is present, neither an obstacle nor a target is present, or an obstacle is present; each robot acquires the surrounding environment information with its on-board sensors and updates the state of the grid map;
S3, a two-dimensional bio-inspired neural network is established on the basis of the grid map; each neuron in the network corresponds to one grid and has an associated neuron activity value, whose magnitude depends on the external stimulus signal;
S4, the two-dimensional bio-inspired neural network is combined with the state of the grid map, namely: the external stimulus signal corresponding to a grid neuron whose grid contains a target is an excitation signal, and the external stimulus signal corresponding to a grid neuron whose grid contains an obstacle is an inhibition signal;
S5, the neuron activity values and the number of RS movement steps are initialized; each RS updates the activity values of the neurons corresponding to the grids within its detection range according to the states of those grids, while the activity values of the grid neurons outside the detection range remain unchanged;
S6, after the update is finished, an iterative cooperative decision is made among the RSs to determine the grid to which each RS moves next.
2. The multi-robot collaborative search method based on biological elicitation in an unknown environment according to claim 1, characterized in that in S6 the steps of determining the grid to which each RS moves next are:
S6.1, the MRS iterative decision order is determined: the first robot to make a decision is denoted R1, the second robot to make a decision is denoted R2, and so on; the last robot of the iteration is denoted Rn;
S6.2, R1 makes its decision: a DMPC method is introduced for the decision. Specifically, R1 predicts its position states over the next N future steps and, based on the current two-dimensional bio-inspired neural network, obtains an N-step cumulative search performance function; the genetic algorithm toolbox built into MATLAB is used to maximize this search performance function by optimization, thereby obtaining the optimal movement-direction control inputs for the N predicted future steps. R1 then copies the current state of the two-dimensional bio-inspired neural network, updates the copy according to its N-step predicted control inputs to obtain a virtual two-dimensional bio-inspired neural network used for decision making, and sends it to R2. Here u1(k) denotes the control input of R1 at the current step k, and u1(k+j|k), j = 1, ···, N−1, denote the predicted control inputs of R1 for the following steps;
S6.3, the intermediate robots make their iterative decisions: after R2 receives the virtual two-dimensional bio-inspired neural network sent by R1, it solves, by optimization on the virtual network, its own optimal movement-direction control inputs for the N predicted steps, updates the received virtual two-dimensional bio-inspired neural network according to its own N-step predicted control inputs to obtain a new virtual two-dimensional bio-inspired neural network, and sends it to R3; the iteration continues until Rn has finished its decision. The control inputs of each robot Ri are denoted analogously: ui(k) for the current step k and ui(k+j|k) for the predicted steps;
S6.4, the state of the two-dimensional bio-inspired neural network is updated: each RS executes the first step of its solved N-step predicted motion control inputs, moves to the corresponding grid at step k+1, and the two-dimensional bio-inspired neural network is updated;
S6.5, if neither the area coverage rate nor the maximum number of RS movement steps has reached the set threshold, the procedure returns to S6.1; otherwise, the MRS search process ends.
3. The multi-robot collaborative search method based on biological elicitation in an unknown environment according to claim 1 or 2, characterized in that in S2 each robot detects obstacles using its on-board ultrasonic sensor and detects targets using its on-board infrared sensor.
CN202110564769.6A 2021-05-24 2021-05-24 Multi-robot collaborative search method based on biological elicitation in unknown environment Active CN113110517B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110564769.6A CN113110517B (en) 2021-05-24 2021-05-24 Multi-robot collaborative search method based on biological elicitation in unknown environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110564769.6A CN113110517B (en) 2021-05-24 2021-05-24 Multi-robot collaborative search method based on biological elicitation in unknown environment

Publications (2)

Publication Number Publication Date
CN113110517A true CN113110517A (en) 2021-07-13
CN113110517B CN113110517B (en) 2022-11-29

Family

ID=76723364

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110564769.6A Active CN113110517B (en) 2021-05-24 2021-05-24 Multi-robot collaborative search method based on biological elicitation in unknown environment

Country Status (1)

Country Link
CN (1) CN113110517B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113467483A (en) * 2021-08-23 2021-10-01 中国人民解放军国防科技大学 Local path planning method and device based on space-time grid map in dynamic environment
CN114578827A (en) * 2022-03-22 2022-06-03 北京理工大学 Distributed multi-agent cooperative full coverage path planning method


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6408226B1 (en) * 2001-04-24 2002-06-18 Sandia Corporation Cooperative system and method using mobile robots for testing a cooperative search controller
CN102521653A (en) * 2011-11-23 2012-06-27 河海大学常州校区 Biostimulation neural network device and method for jointly rescuing by multiple underground robots
WO2017139516A1 (en) * 2016-02-10 2017-08-17 Hrl Laboratories, Llc System and method for achieving fast and reliable time-to-contact estimation using vision and range sensor data for autonomous navigation
CN106843216A (en) * 2017-02-15 2017-06-13 北京大学深圳研究生院 A kind of complete traverse path planing method of biological excitation robot based on backtracking search
CN108037771A (en) * 2017-12-07 2018-05-15 淮阴师范学院 A kind of more autonomous underwater robot search control systems and its method
CN108846384A (en) * 2018-07-09 2018-11-20 北京邮电大学 Merge the multitask coordinated recognition methods and system of video-aware
CN110769436A (en) * 2018-07-26 2020-02-07 深圳市白麓嵩天科技有限责任公司 Wireless communication anti-interference decision-making method based on mutation search artificial bee colony algorithm
CN111290398A (en) * 2020-03-13 2020-06-16 东南大学 Unmanned ship path planning method based on biological heuristic neural network and reinforcement learning
CN111337931A (en) * 2020-03-19 2020-06-26 哈尔滨工程大学 AUV target searching method
CN111487986A (en) * 2020-05-15 2020-08-04 中国海洋大学 Underwater robot cooperative target searching method based on global information transfer mechanism
CN112465127A (en) * 2020-11-29 2021-03-09 西北工业大学 Multi-agent cooperative target searching method based on improved biological heuristic neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHANG FANGFANG: "Multi-robot Rounding Strategy Based on Artificial Potential Field Method in Dynamic Environment", 《2019 CHINESE AUTOMATION CONGRESS (CAC)》 *
LI JUNTAO ET AL.: "Real-time route planning for UAVs based on multi-optimization-strategy RRT", 《火力与指挥控制 (FIRE CONTROL & COMMAND CONTROL)》 *
QI XIAOMING ET AL.: "Robust cooperative search by multiple UAVs for uncertain targets", 《系统工程与电子技术 (SYSTEMS ENGINEERING AND ELECTRONICS)》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113467483A (en) * 2021-08-23 2021-10-01 中国人民解放军国防科技大学 Local path planning method and device based on space-time grid map in dynamic environment
CN113467483B (en) * 2021-08-23 2022-07-26 中国人民解放军国防科技大学 Local path planning method and device based on space-time grid map in dynamic environment
CN114578827A (en) * 2022-03-22 2022-06-03 北京理工大学 Distributed multi-agent cooperative full coverage path planning method

Also Published As

Publication number Publication date
CN113110517B (en) 2022-11-29

Similar Documents

Publication Publication Date Title
Charrow et al. Information-Theoretic Planning with Trajectory Optimization for Dense 3D Mapping.
CN113110517B (en) Multi-robot collaborative search method based on biological elicitation in unknown environment
Chatterjee et al. A Geese PSO tuned fuzzy supervisor for EKF based solutions of simultaneous localization and mapping (SLAM) problems in mobile robots
Hewawasam et al. Past, present and future of path-planning algorithms for mobile robot navigation in dynamic environments
CN113485371B (en) Underwater multi-AUV path planning method based on improved sparrow search algorithm
CN111432015A (en) Dynamic noise environment-oriented full-coverage task allocation method
Polycarpou et al. Cooperative control of distributed multi-agent systems
Woodford et al. Concurrent controller and simulator neural network development for a differentially-steered robot in evolutionary robotics
Biswas et al. A particle swarm optimization based path planning method for autonomous systems in unknown terrain
Alanezi et al. Dynamic target search using multi-UAVs based on motion-encoded genetic algorithm with multiple parents
Li et al. A mixing algorithm of ACO and ABC for solving path planning of mobile robot
Niu et al. An improved sand cat swarm optimization for moving target search by UAV
Chen et al. A multirobot distributed collaborative region coverage search algorithm based on Glasius bio-inspired neural network
Zhang et al. PSO-based sparse source location in large-scale environments with a uav swarm
Kumar et al. A novel hybrid framework for single and multi-robot path planning in a complex industrial environment
Li et al. Multi-mode filter target tracking method for mobile robot using multi-agent reinforcement learning
He et al. Multiobjective coordinated search algorithm for swarm of UAVs based on 3D-simplified virtual forced model
Cheng et al. Robot path planning based on an improved salp swarm algorithm
Chaudhary et al. Obstacle avoidance of a point-mass robot using feedforward neural network
Panigrahi et al. Comparison of GSA, SA and PSO based intelligent controllers for path planning of mobile robot in unknown environment
Meng et al. Self-adaptive distributed multi-task allocation in a multi-robot system
Yu et al. A study on online hyper-heuristic learning for swarm robots
Loganathan et al. Robot path planning via Harris hawks optimization: A comparative assessment
Thangavelautham et al. Evolving a scalable multirobot controller using an artificial neural tissue paradigm
Jha Intelligent Control and Path Planning of Multiple Mobile Robots Using Hybrid Ai Techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant