CN114137955A - Multi-robot rapid collaborative map building method based on improved market method - Google Patents

Multi-robot rapid collaborative map building method based on improved market method

Info

Publication number
CN114137955A
CN114137955A (application CN202111252038.4A)
Authority
CN
China
Prior art keywords
robot
map
task
robots
tasks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111252038.4A
Other languages
Chinese (zh)
Other versions
CN114137955B (en)
Inventor
桂健钧
喻天佑
姚雯
朱效洲
邓宝松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Defense Technology Innovation Institute PLA Academy of Military Science
Original Assignee
National Defense Technology Innovation Institute PLA Academy of Military Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Defense Technology Innovation Institute PLA Academy of Military Science filed Critical National Defense Technology Innovation Institute PLA Academy of Military Science
Priority to CN202111252038.4A priority Critical patent/CN114137955B/en
Publication of CN114137955A publication Critical patent/CN114137955A/en
Application granted granted Critical
Publication of CN114137955B publication Critical patent/CN114137955B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0219 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory ensuring the processing of the whole working surface
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process

Abstract

The invention discloses a multi-robot rapid collaborative map building method based on an improved market method, which comprises the following steps: randomly selecting a robot and generating an initial master map; each robot scans its environment, and the initial master map is iteratively updated to generate a new master map; boundary points between the known and unknown areas of the new master map are acquired and added to a set of tasks to be auctioned; the robots auction the tasks in this set, bidding on the basis of path distance and robot position; each robot sorts the tasks it wins to obtain a task list and sets the first task in the list as its current target point; each robot then travels to its current target point, scans the environment on arrival, and the above steps are repeated. The method can improve the efficiency of multi-robot collaborative search and achieve efficient collaborative search in large indoor structured and open environments.

Description

Multi-robot rapid collaborative map building method based on improved market method
Technical Field
The invention relates to the technical field of multi-robot collaborative map building, in particular to a multi-robot rapid collaborative map building method based on an improved market method.
Background
SLAM (simultaneous localization and mapping) is a fundamental problem and research hotspot in the field of mobile robotics, and a key enabler of autonomous robot navigation. Solving the map building problem with multiple robots touches on machine vision, information filtering, nonlinear optimization, and related fields. Map construction based on multi-robot collaboration has been gaining increasing attention in recent years. Compared with a single robot, collaborative mapping with multiple robots offers higher efficiency, higher precision, stronger robustness, and lower cost, making it better suited to real, complex scenarios.
In the prior art there are many methods for improving the mapping accuracy of single-robot SLAM, such as methods based on extended Kalman filtering and particle filtering. When enough time, sufficient data, and sufficient computing power are available, map accuracy can rightly be emphasized; but in urgent situations such as disaster relief or emergency search, the designated area must be traversed as quickly as possible while a certain accuracy is maintained. During multi-robot operation, if one robot fails, other robots in the group can take over its work. Especially in unknown environments, when exploring large spaces with complex structure, multi-robot cooperation reduces the risk of task failure. Multi-robot cooperative SLAM can therefore increase search and detection speed and improve task completion efficiency.
In past research on multi-robot task collaboration, researchers have often made allocation decisions based simply on task volume, without comprehensively considering information such as the task environment and the cluster state. Moreover, existing applications of the market method have not formed an algorithmic process with a complete theory and a clearly defined workflow. As a result, conventional methods are inefficient at map building, and no method has been available for improving mapping efficiency through multi-robot collaborative map construction.
Disclosure of Invention
In order to solve part or all of the technical problems in the prior art, the invention provides a multi-robot rapid collaborative map building method based on an improved market method.
The technical scheme of the invention is as follows:
a multi-robot rapid collaborative map building method based on an improved market method comprises the following steps:
s1: initializing a robot cluster, randomly selecting one robot from the robot cluster with a plurality of robots, and generating an initial main map by using a visual driving algorithm based on the visual angle of the randomly selected robot;
s2: establishing a map, wherein each robot in the robot cluster scans respective environment, and the initial main map is iteratively updated based on all scanned environments to generate a new main map;
s3: searching, namely acquiring a boundary point between a known area and an unknown area in the new main map, and adding the currently acquired boundary point into a task set to be auctioned;
s4: task allocation, namely traversing the task set to be auctioned, so that all robots in the robot cluster auction tasks in the task set to be auctioned, wherein the bidding mode of the robots is based on the path distance and the positions of the robots, after the robots allocate the tasks, the robots sort the tasks obtained through auction according to the current value to obtain a task list of the robots, and then the robots set a first task in the task list of the robots as a current target point;
s5: executing, wherein each robot independently travels to the current target point;
s6: when the robot reaches its current target point, the operation in S2 is executed in return, and the loop is repeated until there is no new boundary point in S3.
Optionally, S1 further includes team initialization: the robots of the cluster are placed at different positions within mutually effective sensing range.
Optionally, in S1, generating the initial master map based on the viewing angle of the randomly selected robot includes: based on the binocular camera of the selected robot, generating an occupancy-grid-type initial master map using the orbslam2 toolkit in combination with the octomap toolkit.
Optionally, S2 includes: each robot in the cluster acquires surrounding image information with its binocular camera, computes scene depth in real time to obtain a scene depth map, localizes itself from its current scene depth map combined with the initial master map, generates the occupancy grid of its current position and sensing area using the orbslam2 and octomap toolkits, and updates the occupancy grid into the initial master map to generate a new master map.
Optionally, S4 includes deleting tasks that no longer belong to boundary points from the task list and updating the task list.
Optionally, in S4, each robot in the cluster can be assigned its own tasks.
The technical scheme of the invention has the following main advantages:
the multi-robot rapid collaborative mapping method based on the improved market method, disclosed by the invention, is used for carrying out theoretical modeling and method optimization based on the market method, and combining the calculation factors of team cooperation values, the robot cluster can be effectively configured, so that each robot in the robot cluster can obtain a task list matched with the position and the target path distance of the robot cluster, and can execute tasks according to the task list, and effectively carry out environment exploration and image information acquisition. Compared with the prior art, the method can improve the efficiency of multi-robot collaborative search, enables the system to have stronger robustness to the environment, and can realize efficient collaborative search for large indoor structuralization and open environment.
Drawings
The accompanying drawings, which are included to provide a further understanding of embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic step diagram of the multi-robot rapid collaborative map building method based on an improved market method according to an embodiment of the present invention;
FIG. 2 is a schematic flow diagram of the method shown in FIG. 1;
FIG. 3 is a schematic diagram of a distribution of boundary points for a method according to an embodiment of the invention;
FIG. 4 is a schematic flow diagram of a task auction according to a method in one embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the specific embodiments of the present invention and the accompanying drawings. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
As is known to those skilled in the art, the main content of multi-robot collaborative mapping is autonomous exploration and collaborative map building. Autonomous exploration methods include path planning methods, boundary point methods, space point sampling methods, and the like; the difficulty lies in how to allocate tasks among multiple robots so as to cooperate efficiently. Mapping methods have been studied extensively in academia and applied widely in engineering, for example the mapping toolkits of the ROS robot operating system. An effective multi-robot cooperative SLAM method can improve task allocation efficiency and shorten area exploration time. The aim of the invention is to provide a task allocation method for multi-robot collaborative map construction. In academic research these problems are treated as multi-objective optimization and solved by constructing a multi-variable objective function on the framework of genetic algorithms or ant colony algorithms. Methods that focus on processing the search region, such as region segmentation, can also improve task allocation efficiency.
The method is based on improvement and application of a market auction process, has a good foundation for efficiently executing allocation calculation, is clear in cluster collaborative logic, and can improve the efficiency from decision-making to control.
The technical solution provided by the embodiments of the present invention is described in detail below with reference to the accompanying drawings.
As shown in fig. 1 to 4, in an embodiment according to the present invention, a method for multi-robot fast collaborative mapping based on an improved market method is provided, which performs theoretical modeling and method optimization based on the market method, and combines a calculation factor of team cooperation value, so that each robot has its own task list.
In the present embodiment, as shown in fig. 1 and 2, the method includes:
s1: initializing a robot cluster, randomly selecting one robot from the robot cluster with a plurality of robots, and generating an initial main map by using a visual driving algorithm based on the visual angle of the randomly selected robot;
in S1, it is necessary to initialize the team and initialize the master map. Specifically, the method comprises the following steps:
and (4) initializing a team, namely, placing a plurality of robots in the robot cluster at different positions in a mutually effective perception range. That is, on the premise that the robots can sense each other, the plurality of robots are distributed at different and random positions as much as possible, and the detection efficiency is improved.
For example, when n robots in a robot cluster are placed at any position within a mutually effective sensing range, the robot team with n members may be represented as:
RT = {r_1, r_2, r_3, ..., r_n};

where r_i represents the i-th robot; for example, the 4th robot is denoted r_4.
The master map is initialized, i.e. one robot is randomly selected among a cluster of robots, and based on the binocular cameras of the selected robot, an occupancy grid type initial master map is generated using the orbslam2 toolkit in combination with the octomap toolkit.
Illustratively, when a robot has been randomly selected, the coordinate origin of its current camera is taken as the origin of the world coordinate system, and the world coordinate system is established. The robot's binocular camera is then used to compute a depth image, the obtained depth image is converted into a point cloud in the world coordinate system, and the octomap::Pointcloud data structure provided by the octomap package is used to insert the point cloud.
Meanwhile, a beam model is generated from the current camera position of the robot and the position of the point cloud in the world coordinate system. Specifically, the starting point of each ray is the coordinate of the robot's current camera and the end point is a point-cloud coordinate; points the beam passes through are given a low occupancy probability and considered free (unoccupied), while the end point of the ray is considered occupied. In this way, an occupancy-grid-type master map is initialized.
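To make the beam model concrete, the following Python sketch traces each ray across a 2D occupancy grid, marking traversed cells free and the endpoint cell occupied. It is a minimal illustration under assumed conventions (an integer grid and Bresenham traversal), not the octomap implementation the patent relies on.

import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def bresenham(x0, y0, x1, y1):
    """Integer cells on the segment from (x0, y0) to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        cells.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return cells

def insert_beam(grid, camera_cell, endpoint_cell):
    """Beam model: cells the ray passes through become free, the endpoint occupied."""
    ray = bresenham(*camera_cell, *endpoint_cell)
    for cell in ray[:-1]:
        grid[cell] = FREE       # low occupancy probability along the beam
    grid[ray[-1]] = OCCUPIED    # the beam terminates on an obstacle

grid = np.full((64, 64), UNKNOWN)     # fresh master map: everything unknown
insert_beam(grid, (32, 32), (32, 40))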
In this embodiment, the method further includes:
s2: establishing a map, wherein each robot in the robot cluster scans respective environment, and the initial main map is iteratively updated based on all scanned environments to generate a new main map;
specifically, each robot in the robot cluster acquires peripheral image information by using a binocular camera, calculates scene depth in real time, acquires a scene depth map, performs self-positioning by combining the initial master map according to the respective current scene depth map, generates respective occupation grids of the current position and the sensing area by using an orbslam2 tool pack and an octomap tool pack, and updates the occupation grids to the initial master map to generate a new master map.
Illustratively, while moving through the space, each robot uses its own binocular camera to obtain surrounding image information and solves the scene depth in real time to obtain a scene depth map. Then, on the basis of the orbslam2 package, after solving the keyframe camera pose and the keyframe point cloud position in the world coordinate system, the same octomap::Pointcloud data structure is used to insert the point cloud, and the beam model is used to judge the grid occupancy states. The process of updating the initial master map into the new master map can thus be understood as: multiple robots generate their respective keyframe point clouds and insert them into the master map.
It should be noted that the method of this embodiment is mainly applied to map search on a two-dimensional horizontal plane; therefore, when building the map, only points on a given horizontal line of the depth image are selected to generate the point cloud. For exploration of three-dimensional space, a similar extension can be adopted, for example replacing two-dimensional grid cells with three-dimensional voxels.
In this embodiment, the method further includes:
s3: searching, namely acquiring a boundary point between a known area and an unknown area in the new main map, and adding the currently acquired boundary point into a task set to be auctioned;
specifically, the robot in the method needs to search from a known area to an unknown area, and a boundary point exists between the known area and the unknown area, and in S3, in order to search for the unknown area more efficiently, the boundary point needs to be acquired as a task point to be assigned based on a known new master map.
Illustratively, as shown in fig. 3, the boundary points are extracted as follows:
in fig. 3, the occupancy grid map is composed of three parts, a grid of a black part indicates an occupied area, a grid of a white part indicates an empty area, and a grid of a gray part indicates an unknown area. The part marked with "f" in the grid is the boundary point frontiers, which can be found by the variable "chageKeys" in the oct packet. Point set S with changed ergodic stateCThe variable "chageKeys". From SCThe points with unknown points in the four neighborhoods are picked out, and the picked points are boundary points frontiers.
Let S_F denote the set of boundary points (frontiers), S_occupy the set of points whose state is occupied, P a point in the grid map, S_unknown the set of points whose state is unknown, and N_4(P) the set of the four points adjacent to P in the up, down, left, and right directions. Then S_F can be expressed as:

S_F = { P ∈ S_C | N_4(P) ∩ S_unknown ≠ ∅ and P ∉ S_occupy }
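The frontier test above translates directly into code. Below is a minimal sketch reusing the FREE/OCCUPIED/UNKNOWN cell states of the earlier grid sketch; `changed_cells` is assumed to play the role of octomap's changedKeys set.

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def extract_frontiers(grid, changed_cells):
    """S_F: changed free cells with an unknown cell in their four-neighborhood N_4(P)."""
    rows, cols = grid.shape
    frontiers = set()
    for (x, y) in changed_cells:                 # S_C: points whose state changed
        if grid[x, y] != FREE:                   # occupied points cannot be frontiers
            continue
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < rows and 0 <= ny < cols and grid[nx, ny] == UNKNOWN:
                frontiers.add((x, y))            # an unknown neighbor makes (x, y) a frontier
                break
    return frontiers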
in this embodiment, the method further includes:
s4: task allocation, namely traversing the task set to be auctioned, so that all robots in the robot cluster auction tasks in the task set to be auctioned, wherein the bidding mode of the robots is based on the path distance and the positions of the robots, after the robots allocate the tasks, the robots sort the tasks obtained through auction according to the current value to obtain a task list of the robots, and then the robots set a first task in the task list of the robots as a current target point;
specifically, in S3, the relevant auction task set is already acquired based on the boundary point, and in order to enable the robot to acquire the task of the optimal boundary point, the tasks in the auction task set may be auctioned based on the path distance between the robot and the boundary point and the position of the robot, and after the robot has obtained the relevant tasks through auction, the tasks obtained through auction may be sorted from small to large according to the current value, and a task list may be generated. The task list is a task list that the robot needs to execute, wherein the robot can set a first task in the task list as a current target point.
Further, to avoid executing tasks repeatedly, each robot must delete tasks that no longer belong to the boundary points from its task list and update the list.
Furthermore, to increase robot utilization, each robot in the cluster can be assigned its own tasks. Of course, depending on task path distances and robot positions, some robots may remain in a dormant state waiting to be assigned.
To more specifically describe the steps of task assignment in the present method, the following will be further described by a specific embodiment:
the task allocation step in the method may include:
the first step is as follows: after the boundary points frontiers of the current new main map are acquired, a set of all the acquired boundary points frontiers is taken as a task set T, which may be represented as:
T = {t_1, t_2, t_3, ..., t_m};

where t_i denotes the i-th task point.
The second step: all task points in the task set T are auctioned. Each task point t_i requires one robot r to act as the auction winner, winner(t_i); when the auction of t_i is completed, t_i is added to the task list of its winner. The winner can be expressed as:
winner(t_i) = argmin_r auctionprice(r, t_i);
the auction price auctionprice is given according to the information of the robot and the task. The auction price may be expressed as:
[The auctionprice formula is given in the original as an image; it combines the path distance from r to t with the penalty term c accumulated over the points of the path,]

where

c = weight * card(PSR(r) ∩ RA(path(r,t)[k]));
In the above equations, path(r,t)[k] denotes the k-th point in the path from r to t, and N is the number of points in the path. In the present embodiment, the value of weight is 4.
When deciding a robot's auction price auctionprice, the position set PSR(robot) of the robots in the cluster other than the robot itself, and the point set RA(p) of points falling in the exclusion area, are needed.
The position set PSR(robot) and point set RA(p) can be represented as:

PSR(robot) = {(x(r), y(r)) | r ∈ RT \ {robot}};

RA(p) = {(x, y) | (x - x(p))^2 + (y - y(p))^2 <= (K * R_laser)^2};

where x(r) and y(r) are the x- and y-coordinates of robot r, x(p) and y(p) are the coordinates of point p, K is a constant, and R_laser is the scanning radius of the robot's sensor.
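To make the bid computation concrete, here is a hedged Python sketch. Because the patent gives the full auctionprice expression only as an image, the sketch assumes the bid accumulates the travelled distance along the planned path together with the penalty term c = weight * card(PSR(r) ∩ RA(path(r,t)[k])) at every path point; `plan_path` and the numeric values of K and R_laser are illustrative placeholders.

import math

WEIGHT = 4       # 'weight' from the embodiment
K = 2            # constant K (illustrative value, not given in the text)
R_LASER = 5.0    # sensor scan radius R_laser (illustrative value)

def psr(robot, team_positions):
    """PSR(robot): positions of every robot in the team except `robot`."""
    return [pos for r, pos in team_positions.items() if r != robot]

def in_ra(center, point):
    """True if `point` lies inside the exclusion area RA(center)."""
    return ((point[0] - center[0]) ** 2 + (point[1] - center[1]) ** 2
            <= (K * R_LASER) ** 2)

def auction_price(robot, task, team_positions, plan_path):
    """Illustrative bid: path length plus a crowding penalty near other robots."""
    path = plan_path(team_positions[robot], task)    # points from r to t, length N
    others = psr(robot, team_positions)
    price = 0.0
    for k, point in enumerate(path):
        if k > 0:
            price += math.dist(path[k - 1], point)           # travelled distance
        crowd = sum(1 for q in others if in_ra(point, q))    # card(PSR ∩ RA(path[k]))
        price += WEIGHT * crowd                              # penalty term c
    return price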
In order to more clearly show the auction mode in the method, the process of task auction, winner decision and task elimination is shown in FIG. 4.
Specifically, in FIG. 4, t_1 to t_m are the task points to be auctioned, and r_1 to r_n are the robots participating in the auction. In step (a), robots r_1 to r_n jointly auction the first task point t_1; in step (b), the winner is determined, with r_2 winning t_1; in step (c), t_1 is removed from the task set T, and robots r_1 to r_n auction the second task point t_2. In this way all task points in T are auctioned in turn, until every task point has been auctioned off.
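A sketch of this sequential auction, under the same assumptions as the bid sketch above: each task point goes to the lowest bidder, is appended to the winner's task list, and leaves the pool.

def run_auction(tasks, robots, bid):
    """Sequentially auction every task point to the lowest bidder (Fig. 4)."""
    task_lists = {r: [] for r in robots}
    for t in list(tasks):                                # auction t_1 ... t_m in turn
        winner = min(robots, key=lambda r: bid(r, t))    # winner(t) = argmin_r auctionprice(r, t)
        task_lists[winner].append(t)
        tasks.remove(t)                                  # t leaves the pool once assigned
    return task_lists

With the earlier sketch, `bid` could be, for example, lambda r, t: auction_price(r, t, team_positions, plan_path).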
Third step: after the auction is finished, the tasks in each robot's task list are arranged in ascending order of current value. Meanwhile, each robot adjusts its task list in real time, removing task points that no longer belong to the boundary points (frontiers), and takes the first task point in its current list as the next target point to execute.
It will be understood that if a robot has executed all of its tasks, its task list becomes empty; in that case, to use the robot effectively, the last task in the task list of the robot holding the most tasks may be handed over to it.
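The handover rule might look like the following sketch; the exact transfer mechanics are an assumption, since the patent does not spell them out.

def rebalance(task_lists, idle_robot):
    """Hand the last task of the most loaded robot to an idle robot."""
    busiest = max(task_lists, key=lambda r: len(task_lists[r]))
    if busiest != idle_robot and task_lists[busiest]:
        task_lists[idle_robot].append(task_lists[busiest].pop())   # transfer the tail task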
In this embodiment, the method further includes:
s5: executing, wherein each robot independently travels to the current target point;
specifically, the robot has set the first task in the task list as the current target point, and the robot may advance to the current target point, executing the tasks of the environment exploration and scanning.
In this embodiment, the method further includes:
s6: when the robot reaches its current target point, the operation in S2 is executed in return, and the loop is repeated until there is no new boundary point in S3.
Specifically, after a robot moves to the current target point it has set for itself, it scans the environment of that point until all image information of the current environment is acquired; the previous master map is then updated with the newly obtained image information to yield a new master map. The loop then continues until the task lists of all robots are empty, which means that no new boundary points remain and no unknown area is left to search, so the robots no longer need to execute the map-searching task.
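Putting the steps together, the S2-S6 cycle can be summarized in the following Python sketch. Every callable is an injected stand-in for a component described above (scanning and map update, frontier extraction as in the earlier sketch, the sequential auction, and navigation); it is one possible reading of the loop, not the patent's actual implementation.

def collaborative_mapping(robots, master_map, scan_and_update,
                          extract_frontiers, run_auction, bid, navigate_to):
    """One possible top-level S2-S6 loop; all callables are assumed stand-ins."""
    while True:
        changed = set()
        for r in robots:                                    # S2: each robot scans; map updates
            changed |= scan_and_update(r, master_map)
        frontiers = extract_frontiers(master_map, changed)  # S3: boundary points become tasks
        if not frontiers:                                   # S6: stop when no frontier remains
            return master_map
        task_lists = run_auction(set(frontiers), robots, bid)   # S4: market-based allocation
        for r in robots:
            task_lists[r].sort(key=lambda t, r=r: bid(r, t))    # ascending current value
            if task_lists[r]:
                navigate_to(r, task_lists[r][0])            # S5: travel to the first task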
The multi-robot rapid collaborative map building method based on the improved market method in the embodiment has the following advantages:
in the multi-robot rapid collaborative mapping method based on the improved market method, theoretical modeling and method optimization are performed based on the market method, and the calculation factors of the team cooperation value are combined, so that the robot cluster can be effectively configured, each robot in the robot cluster can obtain a task list matched with the position and the target path distance of the robot cluster, and can execute tasks according to the task list, and environment exploration and image information acquisition can be effectively performed. Compared with the prior art, the method can improve the efficiency of multi-robot collaborative search, enables the robustness of the system to the environmental robustness to be stronger, and can realize efficient collaborative search for large indoor structuralization and open environments.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. In addition, "front", "rear", "left", "right", "upper" and "lower" in this document are referred to the placement states shown in the drawings.
Finally, it should be noted that: the above examples are only for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (6)

1. A multi-robot rapid collaborative map building method based on an improved market method is characterized by comprising the following steps:
s1: initializing a robot cluster, randomly selecting one robot from the robot cluster with a plurality of robots, and generating an initial main map by using a visual driving algorithm based on the visual angle of the randomly selected robot;
s2: establishing a map, wherein each robot in the robot cluster scans respective environment, and the initial main map is iteratively updated based on all scanned environments to generate a new main map;
s3: searching, namely acquiring a boundary point between a known area and an unknown area in the new main map, and adding the currently acquired boundary point into a task set to be auctioned;
s4: task allocation, namely traversing the task set to be auctioned, so that all robots in the robot cluster auction tasks in the task set to be auctioned, wherein the bidding mode of the robots is based on the path distance and the positions of the robots, after the robots allocate the tasks, the robots sort the tasks obtained through auction according to the current value to obtain a task list of the robots, and then the robots set a first task in the task list of the robots as a current target point;
s5: executing, wherein each robot independently travels to the current target point;
s6: when the robot reaches its current target point, the operation in S2 is executed in return, and the loop is repeated until there is no new boundary point in S3.
2. The multi-robot rapid collaborative map building method based on the improved market method according to claim 1, wherein S1 further comprises team initialization: placing the robots of the cluster at different positions within mutually effective sensing range.
3. The multi-robot rapid collaborative map building method based on the improved market method according to claim 2, wherein in S1, generating the initial master map based on the viewing angle of the randomly selected robot comprises: based on the binocular camera of the selected robot, generating an occupancy-grid-type initial master map using the orbslam2 toolkit in combination with the octomap toolkit.
4. The multi-robot rapid collaborative map building method based on the improved market method according to claim 3, wherein S2 comprises: each robot in the cluster acquires surrounding image information with its binocular camera, computes scene depth in real time to obtain a scene depth map, localizes itself from its current scene depth map combined with the initial master map, generates the occupancy grid of its current position and sensing area using the orbslam2 and octomap toolkits, and updates the occupancy grid into the initial master map to generate a new master map.
5. The multi-robot rapid collaborative map building method based on the improved market method according to claim 4, wherein in S4, tasks that no longer belong to boundary points are deleted from the task list, and the task list is updated.
6. The multi-robot rapid collaborative map building method based on the improved market method according to claim 4, wherein in S4, each robot in the cluster can be assigned its own tasks.
CN202111252038.4A 2021-10-26 2021-10-26 Multi-robot rapid collaborative mapping method based on improved market method Active CN114137955B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111252038.4A CN114137955B (en) 2021-10-26 2021-10-26 Multi-robot rapid collaborative mapping method based on improved market method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111252038.4A CN114137955B (en) 2021-10-26 2021-10-26 Multi-robot rapid collaborative mapping method based on improved market method

Publications (2)

Publication Number Publication Date
CN114137955A (en) 2022-03-04
CN114137955B CN114137955B (en) 2023-04-28

Family

ID=80395193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111252038.4A Active CN114137955B (en) 2021-10-26 2021-10-26 Multi-robot rapid collaborative mapping method based on improved market method

Country Status (1)

Country Link
CN (1) CN114137955B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130204437A1 (en) * 2003-12-12 2013-08-08 Vision Robotics Corporation Agricultural robot system and method
EP3360502A2 (en) * 2017-01-18 2018-08-15 KB Medical SA Robotic navigation of robotic surgical systems
CN109426884A (en) * 2017-08-28 2019-03-05 杭州海康机器人技术有限公司 Allocation plan determines method, apparatus and computer readable storage medium
CN107657364A (en) * 2017-09-06 2018-02-02 中南大学 A kind of overloading AGV tasks towards tobacco plant material transportation distribute forming method
CN108416488A (en) * 2017-12-21 2018-08-17 中南大学 A kind of more intelligent robot method for allocating tasks towards dynamic task
CN108873908A (en) * 2018-07-12 2018-11-23 重庆大学 The robot city navigation system that view-based access control model SLAM and network map combine
CN109407680A (en) * 2018-12-28 2019-03-01 大连海事大学 The distributed object collaborative allocation of unmanned boat formation reconfiguration
WO2020190983A1 (en) * 2019-03-18 2020-09-24 Simpsx Technologies Llc Renewable energy community objects with price-time priority queues for transformed renewable energy units
KR20210063791A (en) * 2019-11-25 2021-06-02 한국기술교육대학교 산학협력단 System for mapless navigation based on dqn and slam considering characteristic of obstacle and processing method thereof
CN111461488A (en) * 2020-03-03 2020-07-28 北京理工大学 Multi-robot distributed cooperative task allocation method facing workshop carrying problem
CN111862148A (en) * 2020-06-05 2020-10-30 中国人民解放军军事科学院国防科技创新研究院 Method, device, electronic equipment and medium for realizing visual tracking
TWI742737B (en) * 2020-06-24 2021-10-11 王彰榮 Real-time statistical computing system of market value for custom search area's surroundings
CN111882607A (en) * 2020-07-14 2020-11-03 中国人民解放军军事科学院国防科技创新研究院 Visual inertial navigation fusion pose estimation method suitable for augmented reality application
CN112540609A (en) * 2020-07-30 2021-03-23 深圳优地科技有限公司 Path planning method and device, terminal equipment and storage medium
CN113110455A (en) * 2021-04-16 2021-07-13 哈尔滨工业大学 Multi-robot collaborative exploration method, device and system for unknown initial state

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LIANG Can: "Design and Implementation of a Multi-Robot Collaborative Environment Exploration and Map Building System", China Master's Theses Full-text Database, Basic Sciences *
DONG Xiaoming et al.: "An Overview of Maritime Unmanned Equipment Systems", Harbin Engineering University Press
ZHAO Xu: "Research on Multi-Robot Map Building in Unknown Environments", China Master's Theses Full-text Database, Information Science and Technology *
CHEN Yuming: "Research on Robot 3D Map Creation Technology Based on Visual-Inertial Fusion", China Master's Theses Full-text Database, Information Science and Technology *

Also Published As

Publication number Publication date
CN114137955B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
US20220028163A1 (en) Computer Vision Systems and Methods for Detecting and Modeling Features of Structures in Images
CN112859859B (en) Dynamic grid map updating method based on three-dimensional obstacle object pixel object mapping
CN110531760B (en) Boundary exploration autonomous mapping method based on curve fitting and target point neighborhood planning
Bassier et al. Classification of sensor independent point cloud data of building objects using random forests
CN109159127A (en) A kind of double welding robot intelligence paths planning methods based on ant group algorithm
CN108334080A (en) A kind of virtual wall automatic generation method for robot navigation
CN110531770A (en) One kind being based on improved RRT paths planning method and system
CN109744945A (en) A kind of area attribute determines method, apparatus, system and electronic equipment
CN113192200B (en) Method for constructing urban real scene three-dimensional model based on space-three parallel computing algorithm
Kim et al. UAV-UGV cooperative 3D environmental mapping
Geng et al. UAV surveillance mission planning with gimbaled sensors
CN111709988B (en) Method and device for determining characteristic information of object, electronic equipment and storage medium
CN111161334A (en) Semantic map construction method based on deep learning
CN112052847A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
Li et al. Improving autonomous exploration using reduced approximated generalized voronoi graphs
CA3215518A1 (en) Generating mappings of physical spaces from point cloud data
CN114137955A (en) Multi-robot rapid collaborative map building method based on improved market method
Zhang et al. 3D reconstruction of weak feature indoor scenes based on hector SLAM and floorplan generation
CN115855086A (en) Indoor scene autonomous reconstruction method, system and medium based on self-rotation
CN114967694A (en) Mobile robot collaborative environment exploration method
Cupec et al. Segmentation of depth images into objects based on local and global convexity
Wang et al. Multi-robot environment exploration based on label maps building via recognition of frontiers
Zhang et al. Coverage enhancement for deployment of multi-camera networks
CN117058358B (en) Scene boundary detection method and mobile platform
Joldic et al. Laboratory Environment for Algorithms Testing in Mobile Robotics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant