CN102915465B - Multi-robot combined team-organizing method based on mobile biostimulation nerve network - Google Patents

Multi-robot combined team-organizing method based on mobile biostimulation nerve network

Info

Publication number
CN102915465B
CN102915465B CN201210408924.6A CN201210408924A CN 102915465 B
Authority
CN
China
Prior art keywords
robot
target
biostimulation
formation
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210408924.6A
Other languages
Chinese (zh)
Other versions
CN102915465A (en)
Inventor
倪建军
仰晓芳
王楚
吴文波
李新云
殷霞红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Campus of Hohai University
Original Assignee
Changzhou Campus of Hohai University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou Campus of Hohai University filed Critical Changzhou Campus of Hohai University
Priority to CN201210408924.6A priority Critical patent/CN102915465B/en
Publication of CN102915465A publication Critical patent/CN102915465A/en
Application granted granted Critical
Publication of CN102915465B publication Critical patent/CN102915465B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Feedback Control In General (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to a multi-robot joint formation method based on a moving biostimulation neural network. In the method, each robot combines the virtual target position sent by the leader robot with the environmental information broadcast by the other robots and the data sensed by its own dynamic detection camera, ultrasonic sensor and laser rangefinder, builds its own real-time map using the moving biostimulation neural network model, calculates an optimal path to adjust its position, and moves towards the actual target while maintaining the required formation. The formation task is allocated by a self-organizing map neural network, and the moving biostimulation neural network builds the map in real time so that the robots can navigate autonomously. The method has important theoretical and practical value for multi-robot joint formation, multi-robot joint search and rescue, and similar applications.

Description

A multi-robot joint formation method based on a moving biostimulation neural network
Technical field
The present invention relates to a multi-robot joint formation method based on a moving biostimulation neural network. It belongs to the technical field of multi-robot cooperative control and is an application combining artificial intelligence with robotics.
Background technology
Multi-robot formation has a wide range of applications and can make outstanding contributions in fields such as military operations, aerospace, exploration, and disaster handling. Research on multi-robot formation control is one of the important topics in multi-robot cooperation research, and it has significant theoretical and practical application value.
Summary of the invention
Object of the invention: in order to overcome the deficiencies of the prior art, the present invention provides a multi-robot joint formation method based on a moving biostimulation neural network, comprising the following steps:
Step (1): each robot in the multi-robot system is regarded as an intelligent agent, and each robot carries a dynamic detection camera, an ultrasonic sensor, a laser rangefinder and a wireless communication system;
Step (2): each robot obtains image information of the environment in real time through the dynamic detection camera, perceives targets and obstacles in the environment with the ultrasonic sensor, and determines the positions of targets and obstacles with the laser rangefinder; the environmental information recorded by the dynamic detection camera, ultrasonic sensor and laser rangefinder is converted into a broadcast message and transmitted to the other robots through the wireless communication system, realizing information sharing;
Step (3): when the task starts, one robot in the multi-robot system is first chosen at random as the leader robot; based on the actual target position given by the task and the environmental information broadcast by the other robots, combined with the information detected by its own dynamic detection camera, ultrasonic sensor and laser rangefinder, the leader robot builds a real-time map using the moving biostimulation neural network model, calculates the optimal path to the actual target position, and then navigates using the biostimulation neural network;
Step (4): while moving, the leader robot uses a leader-referenced formation model based on its own real-time position to calculate the assigned position that each follower robot should occupy in order to keep the required formation, and takes that position as the virtual target of the corresponding follower robot;
Step (5): according to the current actual positions of the follower robots, the leader robot assigns the virtual targets computed above to the follower robots in real time using the SOM self-organizing neural network algorithm, and then sends each virtual target position to the corresponding follower robot by wireless communication;
Step (6): each follower robot, according to the virtual target position sent in real time by the leader robot, the environmental information broadcast by the other robots, and the information detected by its own dynamic detection camera, ultrasonic sensor and laser rangefinder, builds a real-time map using the moving biostimulation neural network model, calculates the optimal path to its corresponding virtual target, and navigates along it, thereby keeping the required formation with the leader robot and the other robots; to prevent collisions, the other robots are treated as obstacles here; in this way the whole robot team can form any required formation and navigate optimally to the target. During the motion, if the task target or the required formation changes, only the actual target position and the corresponding formation model in the leader robot need to be adjusted.
In step (2), the content and format of the broadcast message are as follows:
A={x,y,z,flag}
where (x, y, z) is the three-dimensional position coordinate in the environment, and flag is a status flag indicating the state of the sensed point; its corresponding contents are:
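As a minimal illustration of this message format, the following Python sketch defines a broadcast record A = {x, y, z, flag} and a simple serialization for the wireless link. The field names, the text encoding and the example flag values are assumptions made for illustration only, since the patent defines the flag table in a separate figure.

```python
from dataclasses import dataclass

@dataclass
class BroadcastMessage:
    x: float   # three-dimensional position coordinates in the environment
    y: float
    z: float
    flag: int  # status flag of the sensed point (encoding assumed, e.g. 0 = free, 1 = obstacle, 2 = target)

def encode(msg: BroadcastMessage) -> bytes:
    """Serialize a message for the wireless link (illustrative only)."""
    return f"{msg.x},{msg.y},{msg.z},{msg.flag}".encode()

def decode(raw: bytes) -> BroadcastMessage:
    x, y, z, flag = raw.decode().split(",")
    return BroadcastMessage(float(x), float(y), float(z), int(flag))
```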
The leader-referenced formation model in step (4) refers to the following:
(3a): let R_0 be the leader robot with actual target coordinates (x_0, y_0, z_0), and let R_i be the i-th follower robot with virtual target coordinates (x_i, y_i, z_i); the computation differs with the formation task; for example, if the formation is a straight line in one plane, the virtual target position of each follower robot is computed by:

$$f(i)=\begin{cases}x_i = x_0 + (-1)^i \times \dfrac{i}{2} \times \gamma \times \cos\alpha\\ y_i = y_0 + (-1)^i \times \dfrac{i}{2} \times \gamma \times \sin\alpha\\ z_i = z_0\end{cases}\quad \text{for even } i;\qquad
f(i)=\begin{cases}x_i = x_0 + (-1)^i \times \dfrac{i-1}{2} \times \gamma \times \cos\alpha\\ y_i = y_0 + (-1)^i \times \dfrac{i-1}{2} \times \gamma \times \sin\alpha\\ z_i = z_0\end{cases}\quad \text{for odd } i;$$

where α is the inclination angle of the formation (α = 0 for a straight-line formation) and γ is the spacing between robots;
(3b): in leader-referenced formation, the virtual target coordinates of each follower robot are determined from the coordinates of the leader robot, the angle of the formation, and the relative distance between robots; the task of the leader robot is to move continuously towards the actual target of the task, while the follower robots obtain their virtual target information from the leader robot and keep approaching their virtual targets, thereby maintaining the overall formation while moving towards the actual target.
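The virtual target computation can be sketched as follows. This is an illustrative Python rendering of the line-formation formula for f(i), with the leader's target, the spacing γ and the inclination α passed in as parameters; the function name and the example values are assumptions, not part of the patent.

```python
import math

def virtual_target(i, leader_target, gamma, alpha=0.0):
    """Virtual target f(i) of follower robot R_i for a line formation.

    leader_target is (x0, y0, z0); gamma is the spacing between robots;
    alpha is the formation inclination (alpha = 0 for a straight line).
    """
    x0, y0, z0 = leader_target
    step = i // 2 if i % 2 == 0 else (i - 1) // 2   # i/2 for even i, (i-1)/2 for odd i
    offset = (-1) ** i * step * gamma
    return (x0 + offset * math.cos(alpha),
            y0 + offset * math.sin(alpha),
            z0)

# Followers 1..4 placed symmetrically around the leader's actual target
targets = [virtual_target(i, (10.0, 5.0, 0.0), gamma=2.0) for i in range(1, 5)]
```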
The SOM self-organizing neural network algorithm in step (5) refers to the following:
(4a): the SOM self-organizing neural network has two layers: the input layer holds the target positions, and the output layer contains the robot coordinates and the planned path to the target;
(4b): the computation of the SOM self-organizing neural network algorithm is as follows:
$$[N_k, N_m] \Leftarrow \min\{D_{ikm},\ i = 1, \dots, M;\ k = 1, \dots, K;\ m = 1, \dots, M;\ \text{and}\ \{k, m\} \in \Omega\}$$
where [N_k, N_m] indicates that the k-th robot has been assigned the m-th target; D_{ikm} is the weighted distance function; K is the number of robots; M is the number of targets; Ω is the set of targets and robots that have not yet been assigned; [N_k, N_m] is the pair obtained when D_{ikm} is minimal, and D_{ikm} is computed as:
$$D_{ikm} = |T_i - R_{km}|\,(1 + P)$$
where |T_i - R_{km}| is the Euclidean distance between the task and the robot; R_{km} = (w_{kmx}, w_{kmy}), k = 1, …, K; m = 1, …, M, is the initial coordinate of the k-th robot; P is used to keep the workload evenly distributed among the robots and is given by:
$$P = \frac{L_k - V}{1 + V}$$
where L_k is the path length of the k-th robot to its target and V is the average path length of the robots to their targets;
(4c): the weight update formula of the SOM self-organizing neural network is:
$$R_{km}(t+1) = R_{km}(t) + h_i(t)\,\big(T_i(t) - R_{km}(t)\big)$$
where h_i(t) is the neighborhood function, computed from the distances between the winning neuron i and the other neurons, which defines a neighborhood around the winning neuron i; through continuous iteration and updating, the target positions are finally assigned to the follower robots in an automatically optimized way. The algorithm not only minimizes each robot's distance to its target but also keeps the overall workload of the whole team minimal.
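The assignment rule and workload term described above can be sketched as follows. This Python fragment computes D_{ikm} = |T_i - R_{km}|(1 + P) with P = (L_k - V)/(1 + V), assigns each virtual target to the robot that minimizes it, and nudges the winner's weight towards the target as in the SOM update; the greedy loop and the variable names are simplifications assumed for illustration, not the patent's exact iteration scheme.

```python
import numpy as np

def assign_formation_tasks(targets, robots, path_lengths, lr=0.5):
    """Assign each virtual target to the robot minimizing the weighted distance D_ikm."""
    targets = np.asarray(targets, dtype=float)     # virtual target positions T_i
    weights = np.asarray(robots, dtype=float)      # robot coordinates R_km (output-layer weights)
    L = np.asarray(path_lengths, dtype=float)      # current path length L_k of each robot
    V = L.mean()                                   # average path length V
    P = (L - V) / (1.0 + V)                        # workload-balancing term

    assignment = {}
    unassigned = set(range(len(weights)))          # the set Omega of robots still free
    for i, t in enumerate(targets):
        d = np.linalg.norm(t - weights, axis=1) * (1.0 + P)   # D_ikm for every robot
        taken = [k for k in range(len(weights)) if k not in unassigned]
        d[taken] = np.inf                          # only unassigned robots may win
        winner = int(np.argmin(d))
        assignment[i] = winner                     # [N_k, N_m]: target i -> robot `winner`
        unassigned.discard(winner)
        # winning neuron moves towards the target (SOM weight update, neighbourhood omitted)
        weights[winner] += lr * (t - weights[winner])
    return assignment

# Example: three virtual targets, three robots with given start positions and path lengths
tasks = assign_formation_tasks(
    targets=[(5.0, 0.0), (5.0, 2.0), (5.0, -2.0)],
    robots=[(0.0, 0.0), (0.0, 1.0), (0.0, -1.0)],
    path_lengths=[3.0, 4.0, 5.0],
)
```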
Building a real-time map with the moving biostimulation neural network model in steps (3) and (6) refers to the following:
(5a): the images acquired by the dynamic detection camera are first processed to obtain the environmental information at the current moment, and a neural network is built according to the detection range; according to the resolution of the detection instruments, the environment space is discretized, where each discrete point (neuron) is a point in a 4-dimensional space (x, y, z, s): (x, y, z) is the geographic position of the discrete point and s is the activity value of the biostimulation neural network neuron, computed by the following formula:
$$\frac{ds_i}{dt} = -A s_i + (B - s_i)\left([I_i]^+ + \sum_{j=1}^{k} w_{ij}[s_j]^+\right) - (D + s_i)[I_i]^-$$
where s_i is the activity value of the i-th neuron, [s_j]^+ is the excitation from the j-th neighbouring neuron, k is the number of neurons connected to this neuron, w_{ij} are the connection weights, and [I_i]^+ and [I_i]^- are the threshold functions for the excitatory and inhibitory inputs respectively; A and B are constants;
(5b): the excitatory input [I_i]^+ and the inhibitory input [I_i]^- of the biostimulation neural network model come from the formation target and the obstacles in the environment respectively, and are computed as follows:
where E is a constant much larger than the constant B;
(5c): the dynamic activity value of each neuron is computed from the biostimulation neural network model; this guarantees that the dynamic activity value is minimal at positions occupied by obstacles or other robots and maximal at the target position, so the robot can compute the best formation path in real time from the magnitudes of the neurons' dynamic activity values and navigate accordingly; the navigation procedure is:
$$(\theta_r)_{t+1} = \mathrm{angle}(p_r, p_n)$$
$$p_n \Leftarrow s_{p_n} = \max\{s_j,\ j = 1, 2, \dots, k\}$$
where (θ_r)_{t+1} is the heading angle of the robot's next move, angle(p_r, p_n) is the formula for the angle between the robot's current position p_r and the neuron p_n, and p_n is the neuron with the maximum dynamic activity value among all neurons within the robot's detection range;
(5d): as the robot moves, the environmental information it detects changes from moment to moment; according to this real-time change the biostimulation neural network model is continuously moved and the environmental map is rebuilt. Following this idea, the robot's trajectory automatically avoids obstacles and other robots while still reaching the required formation position quickly along an optimal path.
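A compact sketch of the shunting-equation update and heading selection is given below. The constants A, B, D, E, the Euler time step, and the convention of applying the external input E as excitation at the target cell and as inhibition at obstacle cells are assumed values chosen to illustrate the formulas in (5a)-(5c); they are not taken from the patent.

```python
import numpy as np

A, B, D, E = 10.0, 1.0, 1.0, 100.0     # shunting-model constants (assumed values)
DT = 0.05                              # Euler integration step

def update_activity(s, w, target_idx, obstacle_idx):
    """One Euler step of ds_i/dt = -A s_i + (B - s_i)([I_i]+ + sum_j w_ij [s_j]+) - (D + s_i)[I_i]-."""
    excite = np.zeros_like(s)
    inhibit = np.zeros_like(s)
    excite[target_idx] = E             # external excitatory input at the target neuron(s)
    inhibit[obstacle_idx] = E          # external inhibitory input at obstacle neurons
    lateral = w @ np.maximum(s, 0.0)   # propagated excitation sum_j w_ij [s_j]+
    ds = -A * s + (B - s) * (excite + lateral) - (D + s) * inhibit
    return s + DT * ds

def next_neuron(s, neighbours_of_current):
    """The robot heads for the neighbouring neuron p_n with the largest activity s."""
    return max(neighbours_of_current, key=lambda j: s[j])

# Tiny 5-neuron chain: neuron 4 is the target, neuron 2 an obstacle
w = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
s = np.zeros(5)
for _ in range(200):
    s = update_activity(s, w, target_idx=[4], obstacle_idx=[2])
```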
Beneficial effects: the multi-robot joint formation method based on a moving biostimulation neural network provided by the invention improves formation efficiency, perceives the environment in real time, and builds maps cooperatively. It has the following advantages:
(1) the invention uses the various sensors carried by the robots to obtain real-time environmental information and builds maps cooperatively through wireless communication, which provides accurate environmental information for multi-robot formation more effectively;
(2) the invention uses the SOM self-organizing neural network to allocate formation tasks among the robots; this algorithm not only reduces the path cost of each robot but also reduces the overall workload of the multi-robot system and improves its efficiency;
(3) the invention proposes using a moving biostimulation neural network to perform multi-robot joint formation in real time, which realizes both cooperative map building and autonomous navigation and thus greatly improves formation efficiency;
(4) when computing the optimal path of each robot, the invention treats all other robots as obstacles, which avoids collisions between robots during the formation process.
Brief description of the drawings
Fig. 1 is a block diagram of the hardware equipment of the invention;
Fig. 2 is a flowchart of the multi-robot joint formation method of the invention;
Fig. 3 is a flowchart of the formation task allocation based on the SOM algorithm in the invention;
Fig. 4 is a flowchart of the moving biostimulation neural network algorithm in the invention.
Detailed description of the embodiments
The invention is further described below with reference to the accompanying drawings.
Fig. 1 is a block diagram of the hardware equipment used to implement the invention, comprising a robot 1, a dynamic detection camera 2, a laser rangefinder 3, an ultrasonic sensor 4, a wireless communication system 5, a storage device 6 and a decision system 7, where the dynamic detection camera 2, laser rangefinder 3, ultrasonic sensor 4, wireless communication system 5, storage device 6 and decision system 7 are all mounted on the robot 1. The robot 1 collects real-time images through the camera 2 and transfers them to the decision system 7, uses the ultrasonic sensor 4 to detect obstacles, and uses the wireless communication system 5 to send relevant information to its companion robots while receiving information from them. The robot 1 stores the positions of obstacles and discovered targets in the storage device 6 and makes its decisions with the decision system 7.
Fig. 2 shows the multi-robot joint formation method based on a moving biostimulation neural network, which comprises the following steps:
Step (1): each robot in the multi-robot system is regarded as an intelligent agent, and each robot carries a dynamic detection camera, an ultrasonic sensor, a laser rangefinder and a wireless communication system;
Step (2): each robot obtains image information of the environment in real time through the dynamic detection camera, perceives targets and obstacles in the environment with the ultrasonic sensor, and determines the positions of targets and obstacles with the laser rangefinder; the environmental information recorded by the dynamic detection camera, ultrasonic sensor and laser rangefinder is converted into a broadcast message and transmitted to the other robots through the wireless communication system, realizing information sharing;
Step (3): when the task starts, one robot in the multi-robot system is first chosen at random as the leader robot; based on the actual target position given by the task and the environmental information broadcast by the other robots, combined with the information detected by its own dynamic detection camera, ultrasonic sensor and laser rangefinder, the leader robot builds a real-time map using the moving biostimulation neural network model, calculates the optimal path to the actual target position, and then navigates using the biostimulation neural network;
Step (4): while moving, the leader robot uses a leader-referenced formation model based on its own real-time position to calculate the assigned position that each follower robot should occupy in order to keep the required formation, and takes that position as the virtual target of the corresponding follower robot;
Step (5): according to the current actual positions of the follower robots, the leader robot assigns the virtual targets computed above to the follower robots in real time using the SOM self-organizing neural network algorithm, and then sends each virtual target position to the corresponding follower robot by wireless communication;
Step (6): each follower robot, according to the virtual target position sent in real time by the leader robot, the environmental information broadcast by the other robots, and the information detected by its own dynamic detection camera, ultrasonic sensor and laser rangefinder, builds a real-time map using the moving biostimulation neural network model, calculates the optimal path to its corresponding virtual target, and navigates along it, thereby keeping the required formation with the leader robot and the other robots; to prevent collisions, the other robots are treated as obstacles here; in this way the whole robot team can form any required formation and navigate optimally to the target; during the motion, if the task target or the required formation changes, only the actual target position and the corresponding formation model in the leader robot need to be adjusted.
In step (2), the content and format of the broadcast message are as follows:
A={x,y,z,flag}
where (x, y, z) is the three-dimensional position coordinate in the environment, and flag is a status flag indicating the state of the sensed point; its corresponding contents are:
The leader-referenced formation model in step (4) refers to the following:
(3a): let R_0 be the leader robot with actual target coordinates (x_0, y_0, z_0), and let R_i be the i-th follower robot with virtual target coordinates (x_i, y_i, z_i); the computation differs with the formation task; for example, if the formation is a straight line in one plane, the virtual target position of each follower robot is computed by:

$$f(i)=\begin{cases}x_i = x_0 + (-1)^i \times \dfrac{i}{2} \times \gamma \times \cos\alpha\\ y_i = y_0 + (-1)^i \times \dfrac{i}{2} \times \gamma \times \sin\alpha\\ z_i = z_0\end{cases}\quad \text{for even } i;\qquad
f(i)=\begin{cases}x_i = x_0 + (-1)^i \times \dfrac{i-1}{2} \times \gamma \times \cos\alpha\\ y_i = y_0 + (-1)^i \times \dfrac{i-1}{2} \times \gamma \times \sin\alpha\\ z_i = z_0\end{cases}\quad \text{for odd } i;$$

where α is the inclination angle of the formation (α = 0 for a straight-line formation) and γ is the spacing between robots;
(3b): in leader-referenced formation, the virtual target coordinates of each follower robot are determined from the coordinates of the leader robot, the angle of the formation, and the relative distance between robots; the task of the leader robot is to move continuously towards the actual target of the task, while the follower robots obtain their virtual target information from the leader robot and keep approaching their virtual targets, thereby maintaining the overall formation while moving towards the actual target.
The SOM self-organizing neural network algorithm in step (5) refers to the following:
(4a): the SOM self-organizing neural network has two layers: the input layer holds the target positions, and the output layer contains the robot coordinates and the planned path to the target;
(4b): the computation of the SOM self-organizing neural network algorithm is as follows:

$$[N_k, N_m] \Leftarrow \min\{D_{ikm},\ i = 1, \dots, M;\ k = 1, \dots, K;\ m = 1, \dots, M;\ \text{and}\ \{k, m\} \in \Omega\}$$

where [N_k, N_m] indicates that the k-th robot has been assigned the m-th target; D_{ikm} is the weighted distance function; K is the number of robots; M is the number of targets; Ω is the set of targets and robots that have not yet been assigned; [N_k, N_m] is the pair obtained when D_{ikm} is minimal, and D_{ikm} is computed as:

$$D_{ikm} = |T_i - R_{km}|\,(1 + P)$$

where |T_i - R_{km}| is the Euclidean distance between the task and the robot; R_{km} = (w_{kmx}, w_{kmy}), k = 1, …, K; m = 1, …, M, is the initial coordinate of the k-th robot; P is used to keep the workload evenly distributed among the robots and is given by:

$$P = \frac{L_k - V}{1 + V}$$

where L_k is the path length of the k-th robot to its target and V is the average path length of the robots to their targets;
(4c): the weight update formula of the SOM self-organizing neural network is:

$$R_{km}(t+1) = R_{km}(t) + h_i(t)\,\big(T_i(t) - R_{km}(t)\big)$$

where h_i(t) is the neighborhood function, computed from the distances between the winning neuron i and the other neurons, which defines a neighborhood around the winning neuron i; through continuous iteration and updating, the target positions are finally assigned to the follower robots in an automatically optimized way. The algorithm not only minimizes each robot's distance to its target but also keeps the overall workload of the whole team minimal.
Fig. 3 shows the flowchart of the formation task allocation based on the SOM self-organizing neural network algorithm, which comprises:
(6a) parameter initialization;
(6b) the target position T_i is input into the system;
(6c) the minimum weighted distance D_{ikm} is computed and the SOM self-organizing neural network is used to allocate the formation tasks;
(6d) if all tasks have been allocated, the procedure ends; otherwise it returns to (6b).
Building a real-time map with the moving biostimulation neural network model in steps (3) and (6) refers to the following:
(5a): the images acquired by the dynamic detection camera are first processed to obtain the environmental information at the current moment, and a neural network is built according to the detection range; according to the resolution of the detection instruments, the environment space is discretized, where each discrete point (neuron) is a point in a 4-dimensional space (x, y, z, s): (x, y, z) is the geographic position of the discrete point and s is the activity value of the biostimulation neural network neuron, computed by the following formula:

$$\frac{ds_i}{dt} = -A s_i + (B - s_i)\left([I_i]^+ + \sum_{j=1}^{k} w_{ij}[s_j]^+\right) - (D + s_i)[I_i]^-$$

where s_i is the activity value of the i-th neuron, [s_j]^+ is the excitation from the j-th neighbouring neuron, k is the number of neurons connected to this neuron, w_{ij} are the connection weights, and [I_i]^+ and [I_i]^- are the threshold functions for the excitatory and inhibitory inputs respectively; A and B are constants;
(5b): the excitatory input [I_i]^+ and the inhibitory input [I_i]^- of the biostimulation neural network model come from the formation target and the obstacles in the environment respectively, and are computed as follows:
where E is a constant much larger than the constant B;
(5c): the dynamic activity value of each neuron is computed from the biostimulation neural network model; this guarantees that the dynamic activity value is minimal (a negative value) at positions occupied by obstacles or other robots and maximal (a positive value) at the target position, so the robot can compute the best formation path in real time from the magnitudes of the neurons' dynamic activity values and navigate accordingly; the navigation procedure is:

$$(\theta_r)_{t+1} = \mathrm{angle}(p_r, p_n)$$
$$p_n \Leftarrow s_{p_n} = \max\{s_j,\ j = 1, 2, \dots, k\}$$

where (θ_r)_{t+1} is the heading angle of the robot's next move, angle(p_r, p_n) is the formula for the angle between the robot's current position p_r and the neuron p_n, and p_n is the neuron with the maximum dynamic activity value among all neurons within the robot's detection range;
(5d): as the robot moves, the environmental information it detects changes from moment to moment; according to this real-time change the biostimulation neural network model is continuously moved and the environmental map is rebuilt. Following this idea, the robot's trajectory automatically avoids obstacles and other robots while still reaching the required formation position quickly along an optimal path.
Fig. 4 shows the flowchart of building a real-time map with the moving biostimulation neural network in steps (3) and (6), which comprises:
(7a) model parameter initialization;
(7b) the dynamic activity values of all known neurons are updated according to the activity-value formula of the biostimulation neural network;
(7c) the robot moves towards the known neuron with the maximum activity value; the robot computes the coordinates of every point in the environment in real time through the dynamic detection camera and the laser rangefinder, thereby generating new neurons;
(7d) if a target is found while the robot is moving, the distances from the target to all detectable neurons around it are computed and the activity values of these neurons are updated;
(7e) if an obstacle is found while the robot is moving, the distances from the obstacle to all detectable neurons around it are computed and the activity values of these neurons are updated;
(7f) if the task is complete, the procedure ends; otherwise it returns to (7a) and repeats.
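For orientation, the outer loop of Fig. 4 might look like the following sketch; every method on the hypothetical robot and grid objects (sense, move_to, add_targets, and so on) is a placeholder standing in for the corresponding step (7a)-(7f), not an API defined by the patent.

```python
def formation_navigation_loop(robot, grid, max_steps=1000):
    """Illustrative outer loop: update activities, step, fold new sensor data back in."""
    s = grid.init_activity()                                           # (7a) parameter initialization
    for _ in range(max_steps):
        s = grid.update_activity(s)                                    # (7b) refresh all known activity values
        robot.move_to(grid.most_active_neighbour(s, robot.position))   # (7c) head for the most active neuron
        targets, obstacles = robot.sense()                             # new neurons from camera / rangefinder
        grid.add_targets(targets)                                      # (7d) found targets raise nearby activity
        grid.add_obstacles(obstacles)                                  # (7e) found obstacles lower nearby activity
        if robot.at_virtual_target():                                  # (7f) task complete
            break
```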
The invention has important theoretical and practical application value in multi-robot joint formation, multi-robot joint search and rescue, and similar tasks.
The above is only the preferred embodiment of the present invention. It should be noted that, for those skilled in the art, several improvements and modifications can be made without departing from the principles of the invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (4)

1. A multi-robot joint formation method based on a moving biostimulation neural network, comprising the following steps:
Step (1): each robot in the multi-robot system is regarded as an intelligent agent, and each robot carries a dynamic detection camera, an ultrasonic sensor, a laser rangefinder and a wireless communication system;
Step (2): each robot obtains image information of the environment in real time through the dynamic detection camera, perceives targets and obstacles in the environment with the ultrasonic sensor, determines the positions of targets and obstacles with the laser rangefinder, converts the environmental information recorded by the dynamic detection camera, ultrasonic sensor and laser rangefinder into a broadcast message, and transmits it to the other robots through the wireless communication system, realizing information sharing;
Step (3): when the task starts, one robot in the multi-robot system is first chosen at random as the leader robot; based on the actual target position given by the task and the environmental information broadcast by the other robots, combined with the information detected by its own dynamic detection camera, ultrasonic sensor and laser rangefinder, the leader robot builds a real-time map using the moving biostimulation neural network model, calculates the optimal path to the actual target position, and then navigates using the biostimulation neural network;
Step (4): while moving, the leader robot uses a leader-referenced formation model based on its own real-time position to calculate the assigned position that each follower robot should occupy in order to keep the required formation, and takes that position as the virtual target of the corresponding follower robot;
Step (5): according to the current actual positions of the follower robots, the leader robot assigns the virtual targets computed above to the follower robots in real time using the SOM self-organizing neural network algorithm, and then sends each virtual target position to the corresponding follower robot by wireless communication;
Step (6): each follower robot, according to the virtual target position sent in real time by the leader robot, the environmental information broadcast by the other robots, and the information detected by its own dynamic detection camera, ultrasonic sensor and laser rangefinder, builds a real-time map using the moving biostimulation neural network model, calculates the optimal path to its corresponding virtual target, and navigates along it, thereby keeping the required formation with the leader robot and the other robots; to prevent collisions, the other robots are treated as obstacles here; in this way the whole robot team can form any required formation and navigate optimally to the target;
Building a real-time map with the moving biostimulation neural network model in steps (3) and (6) refers to the following:
(5a): the images acquired by the dynamic detection camera are first processed to obtain the environmental information at the current moment, and a neural network is built according to the detection range; according to the resolution of the detection instruments, the environment space is discretized, where each discrete point (neuron) is a point in a 4-dimensional space (x, y, z, s): (x, y, z) is the geographic position of the discrete point and s is the activity value of the biostimulation neural network neuron, computed by the following formula:

$$\frac{ds_i}{dt} = -A s_i + (B - s_i)\left([I_i]^+ + \sum_{j=1}^{k} w_{ij}[s_j]^+\right) - (D + s_i)[I_i]^-$$

where s_i is the activity value of the i-th neuron, [s_j]^+ is the excitation from the j-th neighbouring neuron, k is the number of neurons connected to this neuron, w_{ij} are the connection weights, and [I_i]^+ and [I_i]^- are the threshold functions for the excitatory and inhibitory inputs respectively; A and B are constants;
(5b): the excitatory input [I_i]^+ and the inhibitory input [I_i]^- of the biostimulation neural network model come from the formation target and the obstacles in the environment respectively, and are computed as follows:
where E is a constant much larger than the constant B;
(5c): the dynamic activity value of each neuron is computed from the biostimulation neural network model; this guarantees that the dynamic activity value is minimal at positions occupied by obstacles or other robots and maximal at the target position, so the robot computes the best formation path in real time from the magnitudes of the neurons' dynamic activity values and navigates accordingly; the navigation procedure is:

$$(\theta_r)_{t+1} = \mathrm{angle}(p_r, p_n)$$
$$p_n \Leftarrow s_{p_n} = \max\{s_j,\ j = 1, 2, \dots, k\}$$

where (θ_r)_{t+1} is the heading angle of the robot's next move, angle(p_r, p_n) is the formula for the angle between the robot's current position p_r and the neuron p_n, and p_n is the neuron with the maximum dynamic activity value among all neurons within the robot's detection range;
(5d): as the robot moves, the environmental information it detects changes from moment to moment; according to this real-time change the biostimulation neural network model is continuously moved and the environmental map is rebuilt; following this idea of building a real-time map with the moving biostimulation neural network, the robot's trajectory automatically avoids obstacles and other robots while still reaching the required formation position quickly along an optimal path.
2. The multi-robot joint formation method based on a moving biostimulation neural network according to claim 1, characterized in that in step (2) the content and format of the broadcast message are as follows:
A={x,y,z,flag}
where (x, y, z) is the three-dimensional position coordinate in the environment, and flag is a status flag indicating the state of the sensed point; its corresponding contents are:
3. The multi-robot joint formation method based on a moving biostimulation neural network according to claim 1, characterized in that the leader-referenced formation model in step (4) refers to the following:
(3a): let R_0 be the leader robot with actual target coordinates (x_0, y_0, z_0), and let R_i be the i-th follower robot with virtual target coordinates (x_i, y_i, z_i); the computation differs with the formation task; for example, if the formation is a straight line in one plane, the virtual target position of each follower robot is computed by:

$$f(i)=\begin{cases}x_i = x_0 + (-1)^i \times \dfrac{i}{2} \times \gamma \times \cos\alpha\\ y_i = y_0 + (-1)^i \times \dfrac{i}{2} \times \gamma \times \sin\alpha\\ z_i = z_0\end{cases}\quad \text{for even } i;$$

$$f(i)=\begin{cases}x_i = x_0 + (-1)^i \times \dfrac{i-1}{2} \times \gamma \times \cos\alpha\\ y_i = y_0 + (-1)^i \times \dfrac{i-1}{2} \times \gamma \times \sin\alpha\\ z_i = z_0\end{cases}\quad \text{for odd } i;$$

where α is the inclination angle of the formation (α = 0 for a straight-line formation) and γ is the spacing between robots;
(3b): in leader-referenced formation, the virtual target coordinates of each follower robot are determined from the coordinates of the leader robot, the angle of the formation, and the relative distance between robots; the task of the leader robot is to move continuously towards the actual target of the task, while the follower robots obtain their virtual target information from the leader robot and keep approaching their virtual targets, thereby maintaining the overall formation while moving towards the actual target.
4. The multi-robot joint formation method based on a moving biostimulation neural network according to claim 1, characterized in that the SOM self-organizing neural network algorithm in step (5) refers to the following:
(4a): the SOM self-organizing neural network has two layers: the input layer holds the target positions, and the output layer contains the robot coordinates and the planned path to the target;
(4b): the computation of the SOM self-organizing neural network algorithm is as follows:

$$[N_k, N_m] \Leftarrow \min\{D_{ikm},\ i = 1, \dots, M;\ k = 1, \dots, K;\ m = 1, \dots, M;\ \text{and}\ \{k, m\} \in \Omega\}$$

where [N_k, N_m] indicates that the k-th robot has been assigned the m-th target; D_{ikm} is the weighted distance function; K is the number of robots; M is the number of targets; Ω is the set of targets and robots that have not yet been assigned; [N_k, N_m] is the pair obtained when D_{ikm} is minimal, and D_{ikm} is computed as:

$$D_{ikm} = |T_i - R_{km}|\,(1 + P)$$

where |T_i - R_{km}| is the Euclidean distance between the task and the robot; R_{km} = (w_{kmx}, w_{kmy}), k = 1, …, K; m = 1, …, M, is the initial coordinate of the k-th robot; P is used to keep the workload evenly distributed among the robots and is given by:

$$P = \frac{L_k - V}{1 + V}$$

where L_k is the path length of the k-th robot to its target and V is the average path length of the robots to their targets;
(4c): the weight update formula of the SOM self-organizing neural network is:

$$R_{km}(t+1) = R_{km}(t) + h_i(t)\,\big(T_i(t) - R_{km}(t)\big)$$

where h_i(t) is the neighborhood function, computed from the distances between the winning neuron i and the other neurons, which defines a neighborhood around the winning neuron i; through continuous iteration and updating, the target positions are finally assigned to the follower robots in an automatically optimized way.
CN201210408924.6A 2012-10-24 2012-10-24 Multi-robot combined team-organizing method based on mobile biostimulation nerve network Active CN102915465B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210408924.6A CN102915465B (en) 2012-10-24 2012-10-24 Multi-robot combined team-organizing method based on mobile biostimulation nerve network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210408924.6A CN102915465B (en) 2012-10-24 2012-10-24 Multi-robot combined team-organizing method based on mobile biostimulation nerve network

Publications (2)

Publication Number Publication Date
CN102915465A CN102915465A (en) 2013-02-06
CN102915465B true CN102915465B (en) 2015-01-21

Family

ID=47613823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210408924.6A Active CN102915465B (en) 2012-10-24 2012-10-24 Multi-robot combined team-organizing method based on mobile biostimulation nerve network

Country Status (1)

Country Link
CN (1) CN102915465B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9776324B1 (en) 2016-03-25 2017-10-03 Locus Robotics Corporation Robot queueing in order-fulfillment operations
IL265713A (en) * 2019-03-28 2019-05-30 Shvalb Nir Multiple target interception

Families Citing this family (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103220778B (en) * 2013-03-11 2015-08-19 哈尔滨工业大学 A kind of mobile node order switching method based on wireless sensor network and implement device
CN103179401A (en) * 2013-03-19 2013-06-26 燕山大学 Processing method and device for multi-agent cooperative video capturing and image stitching
CN103345246A (en) * 2013-05-29 2013-10-09 西北农林科技大学 Multi-robot system for real-time harvesting, transferring, drying and preserving of wheat
CN103324196A (en) * 2013-06-17 2013-09-25 南京邮电大学 Multi-robot path planning and coordination collision prevention method based on fuzzy logic
CN104238552B (en) * 2014-09-19 2017-05-17 南京理工大学 Redundancy multi-robot forming system
CN105045260A (en) * 2015-05-25 2015-11-11 湖南大学 Mobile robot path planning method in unknown dynamic environment
CN105589470A (en) * 2016-01-20 2016-05-18 浙江大学 Multi-UAVs distributed formation control method
CN105739505B (en) * 2016-04-13 2018-09-04 上海物景智能科技有限公司 A kind of controlling of path thereof and system of robot
CN106094835B (en) * 2016-08-01 2019-02-12 西北工业大学 The dynamic formation control method of front-wheel drive vehicle type mobile robot
CN106155057B (en) * 2016-08-05 2018-12-25 中南大学 A kind of clustered machine people's figure construction method based on self-organizing behavior
CN106527433B (en) * 2016-10-31 2019-04-05 江苏理工学院 Multirobot search and rescue system
CN106774318B (en) * 2016-12-14 2020-07-10 智易行科技(武汉)有限公司 Multi-agent interactive environment perception and path planning motion system
DE102016125224A1 (en) * 2016-12-21 2018-06-21 Vorwerk & Co. Interholding Gmbh Method for navigation and self-localization of an autonomously moving processing device
CN106647766A (en) * 2017-01-13 2017-05-10 广东工业大学 Robot cruise method and system based on complex environment UWB-vision interaction
CN106979785B (en) * 2017-03-24 2020-10-16 北京大学深圳研究生院 Complete traversal path planning method for multi-robot system
DE102017206987A1 (en) * 2017-04-26 2018-10-31 Bayerische Motoren Werke Aktiengesellschaft The method, computer program product, computer-readable medium, controller and vehicle include the controller for determining a collective maneuver of at least two vehicles
US10913604B2 (en) 2017-06-21 2021-02-09 Locus Robotics Corp. System and method for queuing robots destined for one or more processing stations
CN107598925A (en) * 2017-09-07 2018-01-19 南京昱晟机器人科技有限公司 A kind of robot cluster control method
CN108037771A (en) * 2017-12-07 2018-05-15 淮阴师范学院 A kind of more autonomous underwater robot search control systems and its method
CN108170147B (en) * 2017-12-31 2020-10-16 南京邮电大学 Unmanned aerial vehicle task planning method based on self-organizing neural network
US20200345325A1 (en) * 2018-01-19 2020-11-05 Koninklijke Philips N.V. Automated path correction during multi-modal fusion targeted biopsy
CN108415425B (en) * 2018-02-08 2020-10-30 东华大学 Distributed swarm robot cooperative clustering algorithm based on improved gene regulation and control network
CN108427283A (en) * 2018-04-04 2018-08-21 浙江工贸职业技术学院 A kind of control method that the compartment intellect service robot based on neural network is advanced
KR102100476B1 (en) 2018-05-04 2020-05-26 엘지전자 주식회사 A plurality of robot cleaner and a controlling method for the same
WO2019212239A1 (en) 2018-05-04 2019-11-07 Lg Electronics Inc. A plurality of robot cleaner and a controlling method for the same
WO2019212240A1 (en) 2018-05-04 2019-11-07 Lg Electronics Inc. A plurality of robot cleaner and a controlling method for the same
KR102067603B1 (en) * 2018-05-04 2020-01-17 엘지전자 주식회사 A plurality of robot cleaner and a controlling method for the same
CN108639177A (en) * 2018-05-09 2018-10-12 南京赫曼机器人自动化有限公司 A kind of autonomous full traversal climbing robot
CN108594824A (en) * 2018-05-23 2018-09-28 南京航空航天大学 A kind of platooning's device and method of view-based access control model navigation and ultrasonic array
CN108985580B (en) * 2018-06-16 2022-09-02 齐齐哈尔大学 Multi-robot disaster search and rescue task allocation method based on improved BP neural network
CN109324611A (en) * 2018-09-12 2019-02-12 中国人民解放军国防科技大学 Group robot rapid formation method based on basic behavior self-organization
CN111179457A (en) * 2018-11-09 2020-05-19 许文亮 Inspection system and inspection method for industrial equipment
CN111727414B (en) * 2018-12-27 2023-09-15 配天机器人技术有限公司 Robot control method, control system, robot and storage device
CN109799829B (en) * 2019-02-28 2020-06-02 清华大学 Robot group cooperative active sensing method based on self-organizing mapping
CN110095120A (en) * 2019-04-03 2019-08-06 河海大学 Biology of the Autonomous Underwater aircraft under ocean circulation inspires Self-organizing Maps paths planning method
CN110244748B (en) * 2019-06-27 2022-05-06 浙江海洋大学 Underwater target detection system and detection method
CN110727272B (en) * 2019-11-11 2023-04-18 广州赛特智能科技有限公司 Path planning and scheduling system and method for multiple robots
CN110737263B (en) * 2019-11-21 2023-04-07 中科探海(苏州)海洋科技有限责任公司 Multi-robot formation control method based on artificial immunity
CN112985372A (en) * 2019-12-13 2021-06-18 南宁富桂精密工业有限公司 Path planning system and method thereof
CN111198567B (en) * 2020-01-17 2021-06-01 北京大学 Multi-AGV collaborative dynamic tracking method and device
CN111781922B (en) * 2020-06-15 2021-10-26 中山大学 Multi-robot collaborative navigation method based on deep reinforcement learning
CN111766879A (en) * 2020-06-24 2020-10-13 天津大学 Intelligent vehicle formation system based on autonomous collaborative navigation
CN111982094B (en) * 2020-08-25 2022-06-07 北京京东乾石科技有限公司 Navigation method, device and system thereof and mobile equipment
CN113387099B (en) * 2021-06-30 2023-01-10 深圳市海柔创新科技有限公司 Map construction method, map construction device, map construction equipment, warehousing system and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101413806A (en) * 2008-11-07 2009-04-22 湖南大学 Mobile robot grating map creating method of real-time data fusion
CN101549498A (en) * 2009-04-23 2009-10-07 上海交通大学 Automatic tracking and navigation system of intelligent aid type walking robots
CN101650568A (en) * 2009-09-04 2010-02-17 湖南大学 Method for ensuring navigation safety of mobile robots in unknown environments
CN101976079A (en) * 2010-08-27 2011-02-16 中国农业大学 Intelligent navigation control system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"基于强化学习和群集智能方法的多机器人协作协调研究";王醒策;《信息科技辑》;20051205;参见第3章第41-78页,及图3.1-3.10;和第4章4.2节第91-100页 *
"多机器人动态编队的强化学习算法研究";王醒策等;《计算机研究与发展》;20031031;第40卷(第10期);全文 *


Also Published As

Publication number Publication date
CN102915465A (en) 2013-02-06

Similar Documents

Publication Publication Date Title
CN102915465B (en) Multi-robot combined team-organizing method based on mobile biostimulation nerve network
CN104714551B (en) Indoor area covering method suitable for vehicle type mobile robot
CN104501816A (en) Multi-unmanned aerial vehicle coordination and collision avoidance guide planning method
CN102915039B (en) A kind of multirobot joint objective method for searching of imitative animal spatial cognition
Tokekar et al. Multi-target visual tracking with aerial robots
CN103926925B (en) Improved VFH algorithm-based positioning and obstacle avoidance method and robot
CN106873599A (en) Unmanned bicycle paths planning method based on ant group algorithm and polar coordinate transform
Vidal et al. Pursuit-evasion games with unmanned ground and aerial vehicles
CN105955273A (en) Indoor robot navigation system and method
CN104298239A (en) Enhanced map learning path planning method for indoor mobile robot
CN109863513A (en) Nerve network system for autonomous vehicle control
CN106444769A (en) Method for planning optimal path for incremental environment information sampling of indoor mobile robot
Ohki et al. Collision avoidance method for mobile robot considering motion and personal spaces of evacuees
CN106527438A (en) Robot navigation control method and device
Chen et al. Tracking with UAV using tangent-plus-Lyapunov vector field guidance
Majumder et al. Three dimensional D* algorithm for incremental path planning in uncooperative environment
CN114442621A (en) Autonomous exploration and mapping system based on quadruped robot
CN113009912A (en) Low-speed commercial unmanned vehicle path planning calculation method based on mixed A star
CN112857370A (en) Robot map-free navigation method based on time sequence information modeling
Lei et al. Multitask allocation framework with spatial dislocation collision avoidance for multiple aerial robots
Xin et al. Coordinated motion planning of multiple robots in multi-point dynamic aggregation task
CN103309351A (en) Maintenance robot obstacle avoidance planning method
CN112747752B (en) Vehicle positioning method, device, equipment and storage medium based on laser odometer
Jia et al. BP neural network based localization for a front-wheel drive and differential steering mobile robot
Papatheodorou et al. Theoretical and experimental collaborative area coverage schemes using mobile agents

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant