CN117422774A - Autonomous robot mapping method, device and system, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN117422774A
CN117422774A
Authority
CN
China
Prior art keywords
map
sub-area
two-dimensional code
color block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311421414.7A
Other languages
Chinese (zh)
Inventor
李北辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Qiyuan Robot Co ltd
Original Assignee
Zhejiang Qiyuan Robot Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Qiyuan Robot Co ltd filed Critical Zhejiang Qiyuan Robot Co ltd
Priority to CN202311421414.7A priority Critical patent/CN117422774A/en
Publication of CN117422774A publication Critical patent/CN117422774A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1408 Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/1417 2D bar codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker
    • G06T2207/30208 Marker matrix

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a robot autonomous mapping method in which a robot autonomously maps an area to be mapped. The area to be mapped comprises a reference area and at least one sub-area to be mapped, and each sub-area to be mapped shares a common area with the reference area. A plurality of two-dimensional code/color block labels are arranged on the common area; the length and width of each two-dimensional code/color block label are known, and the spacing between two adjacent labels is smaller than the width of the robot body. When the reference area and a sub-area to be mapped are mapped in real time, the pose of each two-dimensional code/color block label relative to the camera is acquired and calculated via a camera arranged on the robot; from that pose, the coordinates of each label on the reference map/sub-area map and the corresponding virtual wall grids are calculated and marked, and the marked virtual wall grids are connected sequentially on the reference map/sub-area map to form a complete virtual wall over the common area.

Description

Autonomous robot mapping method, device and system, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of positioning and mapping technologies, and in particular, to a method, an apparatus, a system, an electronic device, and a computer readable storage medium for autonomous mapping of a robot.
Background
Robots performing tasks such as navigation, transport, search, and cleaning generally need to construct a map of their working environment and determine their own position within it, which requires simultaneous localization and mapping (Simultaneous Localization and Mapping, SLAM) technology. Among SLAM approaches, lidar-based SLAM has many advantages and is currently one of the most actively researched directions. There are many open-source laser mapping solutions available, such as Gmapping, Hector, Karto, and Cartographer, which can be classified into filter-based and graph-optimization-based approaches.
In implementing the embodiments of the invention, the inventor found that existing robot autonomous mapping schemes have the following drawbacks: (1) For map construction of a large scene (large area), using a single robot to map the whole scene in one pass takes a very long time and is inefficient; if mapping is interrupted partway through, the failed run may have to be redone from scratch; and the larger the scene, the larger the accumulated error, which degrades the accuracy of the final complete map. (2) During autonomous mapping, the robot's movable area is bounded by walls combined with physical barriers, i.e. walls and physical barriers serve as the boundary of the robot's autonomous mapping range; placing physical barriers is inconvenient, and the larger the mapping boundary, the more physical barriers must be placed.
Disclosure of Invention
The invention aims to provide a robot autonomous mapping method, a robot autonomous mapping device, a robot autonomous mapping system, electronic equipment and a computer readable storage medium, which can effectively solve the technical problems in the prior art.
In one aspect, an embodiment of the application discloses a robot autonomous mapping method, suitable for autonomous mapping of an area to be mapped by a robot, wherein the area to be mapped comprises a reference area and at least one sub-area to be mapped, the sub-area to be mapped and the reference area have a common area, a plurality of two-dimensional code/color block labels are arranged on the common area, the length and width of each two-dimensional code/color block label are known values, and the spacing between two adjacent labels is smaller than the width of the robot body; the robot autonomous mapping method comprises the following steps:
S1. Autonomously map the reference area to obtain a reference map; the reference map is a grid map converted from a plurality of reference sub-maps, each reference sub-map comprising reference point cloud data of a preset number of frames. During autonomous mapping, the pose of each two-dimensional code/color block label relative to the camera is acquired and calculated via a camera arranged on the robot; from that pose, the coordinates of each label on the reference map are calculated and the corresponding virtual wall grids are marked; the marked virtual wall grids are connected sequentially on the reference map to form a complete virtual wall over the common area.
S2. Map the sub-area to be mapped in real time: perform loop closure optimization on the real-time point cloud data acquired in the sub-area, the real-time sub-map created from a preset number of frames of that point cloud data, the reference point cloud data, and the reference sub-maps, thereby obtaining the optimized poses of all historical sub-maps and of all point cloud data of the sub-area, until mapping of the sub-area is complete and a sub-area map is generated. The sub-area map is a grid map converted from a plurality of optimized sub-maps. When mapping the sub-area in real time, the pose of each two-dimensional code/color block label relative to the camera is acquired and calculated via the camera arranged on the robot; from that pose, the coordinates of each label on the sub-area map are calculated and the corresponding virtual wall grids are marked; the marked virtual wall grids are connected sequentially on the sub-area map to form a complete virtual wall over the common area.
S3. Splice the reference map and the sub-area map to obtain a complete map of the area to be mapped.
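The splicing in step S3 can be sketched as a per-cell merge of two occupancy grids expressed in the same map frame. A minimal sketch follows; the cell values (-1 unknown, 0 free, 100 occupied) follow a common occupancy grid convention and are an assumption, not taken from the patent:

```python
import numpy as np

def stitch_grids(reference, sub, unknown=-1):
    """Merge two aligned occupancy grids into one complete map.

    A cell known in either grid overrides an unknown cell; where both
    grids are known, the higher occupancy value wins (conservative).
    """
    merged = np.where(reference == unknown, sub, reference)
    both_known = (reference != unknown) & (sub != unknown)
    merged[both_known] = np.maximum(reference[both_known], sub[both_known])
    return merged

# Illustrative 2x3 grids: the reference map knows the left side,
# the sub-area map knows the right side, and they overlap in the middle.
reference = np.array([[0, 100, -1],
                      [0,   0, -1]])
sub       = np.array([[-1, -1, 100],
                      [ 0, 100,  0]])
merged = stitch_grids(reference, sub)
```

In practice the sub-area map must first be transformed into the reference map frame (which the loop closure optimization in S2 provides); this sketch only shows the cell-level fusion step.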
As an improvement of the above scheme, when the reference area/sub-area to be mapped is autonomously mapped, the pose of each two-dimensional code/color block label relative to the camera is calculated by the following steps:
after a two-dimensional code/color block label is captured and identified by the camera, the rotation matrix and translation matrix are calculated from the four corner points of the label by calling the OpenCV solvePnP function;
Let p_o = (0, 0, 0) be the coordinates of the two-dimensional code/color block label center point in the label coordinate system, and let p_c be the coordinates of the camera center point in the label coordinate system. The relative relationship between p_o and p_c is shown in Equation 1:

p_o = R_1 * p_c + t_1    (Equation 1)

where R_1 is the rotation matrix and t_1 is the translation matrix.

Writing the transformation matrix in homogeneous form as T_1 = [R_1, t_1; 0, 1], Equation 1 becomes:

p_o = T_1 * p_c    (Equation 1')

Let p'_c be the coordinates of the camera center point in the camera coordinate system, and p'_o the coordinates of the label center point in the camera coordinate system. From Equation 1':

p'_o = T_1^(-1) * p'_c    (Equation 2)

The pose of each two-dimensional code/color block label relative to the camera is obtained according to Equation 2.
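The relations in Equations 1 and 2 can be checked numerically. In the sketch below, the rotation R_1 and translation t_1 are illustrative stand-ins for the output of solvePnP (the numeric values are hypothetical); the homogeneous matrix T_1 is built from them, and the label center in the camera frame is recovered by inverting T_1:

```python
import numpy as np

# Hypothetical rotation (90 degrees about z) and translation, standing in
# for the matrices computed from the label's four corner points.
R1 = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
t1 = np.array([0.5, 0.2, 1.0])

# Homogeneous transformation matrix T_1 (Equation 1').
T1 = np.eye(4)
T1[:3, :3] = R1
T1[:3, 3] = t1

# Consistency with Equation 1: the camera center in the label frame
# satisfies p_o = R_1 * p_c + t_1 with p_o = (0, 0, 0).
p_c = -R1.T @ t1
assert np.allclose(R1 @ p_c + t1, np.zeros(3))

# Equation 2: label center in the camera frame, obtained by applying the
# inverse of T_1 to the camera center p'_c = (0, 0, 0) in homogeneous form.
p_o_cam = (np.linalg.inv(T1) @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]
```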
As an improvement of the above scheme, when the reference area/sub-area to be mapped is autonomously mapped, the coordinates of each two-dimensional code/color block label on the reference map/sub-area map, and the corresponding virtual wall grids to be marked, are obtained through the following steps:

Let T_2 be the transformation matrix from the robot center point p_R in the robot base coordinate system to the camera center point p'_C, i.e.

p'_C = T_2 * p_R    (Equation 3)

According to Equation 3 and Equation 2, the coordinates of the label center point in the robot base coordinate system are

p''_o = T_2^(-1) * T_1^(-1) * p'_c    (Equation 4)

Let T_3 be the transformation matrix from the robot center point p_W in the map coordinate system during mapping to p_R, i.e.

p_R = T_3 * p_W    (Equation 5)

From Equation 4 and Equation 5, the coordinates of the label center point in the map coordinate system are

p'''_o = T_3^(-1) * T_2^(-1) * T_1^(-1) * p'_c    (Equation 6)

The coordinates of each two-dimensional code/color block label center point on the reference map/sub-area map are obtained according to Equation 6, and the corresponding virtual wall grids are obtained from those coordinates together with the known length and width of each label.
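Chaining Equations 2 through 6 takes the label center from the camera frame to map coordinates, and the resulting point then maps to a grid index for the virtual wall. The sketch below is one consistent reading of that chain, assuming each T maps in the direction written in Equations 3 and 5; the transform values and the 5 cm grid resolution are hypothetical placeholders:

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder transforms (hypothetical values): T_1 from the label
# detection, T_2 relating camera and robot base, T_3 relating robot
# base and map frame.
T1 = homogeneous(np.eye(3), np.array([0.0, 0.0, 2.0]))    # label detection
T2 = homogeneous(np.eye(3), np.array([-0.1, 0.0, -0.3]))  # camera offset
T3 = homogeneous(np.eye(3), np.array([-4.0, -3.0, 0.0]))  # robot pose

# Equation 6: label center in map coordinates, starting from the camera
# center p'_c = (0, 0, 0) in homogeneous form.
p_c_cam = np.array([0.0, 0.0, 0.0, 1.0])
p_map = (np.linalg.inv(T3) @ np.linalg.inv(T2)
         @ np.linalg.inv(T1) @ p_c_cam)[:3]

# Mark the corresponding virtual wall grid cell (5 cm resolution assumed).
resolution = 0.05
cell = (int(round(p_map[0] / resolution)),
        int(round(p_map[1] / resolution)))
```

The label's known length and width would then be used to expand this single cell into the full footprint of the label on the grid.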
As an improvement of the above scheme, the two-dimensional code/color block labels are identified as follows: each two-dimensional code/color block label image arranged on the common area is stored in advance, and image feature matching is performed between the image captured in real time by the camera and each pre-stored label image in order to identify the two-dimensional code/color block labels. The label images arranged on the common area may be identical to one another or different from one another.
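The matching step can be approximated with normalized cross-correlation against the pre-stored label images. This is a simplified stand-in for full image feature matching, and the tiny 2x2 "images", label IDs, and threshold are all illustrative assumptions:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def identify_label(patch, stored_labels, threshold=0.8):
    """Return the ID of the best-matching pre-stored label image, or None."""
    scores = {lid: ncc(patch, img) for lid, img in stored_labels.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

# Hypothetical pre-stored label images and an observed camera patch.
stored = {
    "tag_A": np.array([[1.0, 0.0], [0.0, 1.0]]),
    "tag_B": np.array([[0.0, 1.0], [1.0, 0.0]]),
}
match = identify_label(np.array([[0.9, 0.1], [0.1, 0.9]]), stored)
```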
As an improvement of the above solution, in the step S2, the loop optimization specifically includes:
S21. Match the real-time point cloud data acquired in the sub-area to be mapped against the reference sub-maps to perform loop detection;
S22. When the matching degree is greater than a preset threshold, determine that a loop closure exists and execute S23; otherwise, return to step S21;
S23. Taking the reference point cloud data and the reference sub-maps as the reference, perform nonlinear least squares optimization on all historical sub-maps and all point cloud data of the sub-area to be mapped, so as to optimize the poses of all historical sub-maps and all point cloud data of the sub-area.
As an improvement of the above solution, in the step S2, the loop optimization specifically includes:
S21. Match the real-time sub-map of the sub-area to be mapped against the reference point cloud data to perform loop detection;
S22. When the matching degree is greater than a preset threshold, determine that a loop closure exists and execute S23; otherwise, return to step S21;
S23. Taking the reference point cloud data and the reference sub-maps as the reference, perform nonlinear least squares optimization on all historical sub-maps and all point cloud data of the sub-area to be mapped, so as to optimize the poses of all historical sub-maps and all point cloud data of the sub-area.
As an improvement of the above scheme, the robot is a single robot, and the single robot maps the reference area and the sub-areas to be mapped in different time periods.
In another aspect, an embodiment of the invention provides a robot autonomous mapping apparatus, suitable for autonomous mapping of an area to be mapped by a robot, wherein the area to be mapped comprises a reference area and at least one sub-area to be mapped, and the sub-area to be mapped and the reference area have a common area; a plurality of two-dimensional code/color block labels are arranged on the common area, the length and width of each label are known values, and the spacing between two adjacent labels is smaller than the width of the robot body; the robot autonomous mapping apparatus comprises:
a reference map construction module, configured to autonomously map the reference area to obtain a reference map; the reference map is a grid map converted from a plurality of reference sub-maps, each reference sub-map comprising reference point cloud data of a preset number of frames; during autonomous mapping, the pose of each two-dimensional code/color block label relative to the camera is acquired and calculated via a camera arranged on the robot; from that pose, the coordinates of each label on the reference map are calculated and the corresponding virtual wall grids are marked; the marked virtual wall grids are connected sequentially on the reference map to form a complete virtual wall over the common area;
a sub-area map construction module, configured to map the sub-area to be mapped in real time, performing loop closure optimization on the real-time point cloud data acquired in the sub-area, the real-time sub-map created from a preset number of frames of that point cloud data, the reference point cloud data, and the reference sub-maps, thereby obtaining the optimized poses of all historical sub-maps and of all point cloud data of the sub-area, until mapping of the sub-area is complete and a sub-area map is generated; the sub-area map is a grid map converted from a plurality of optimized sub-maps; when mapping the sub-area in real time, the pose of each two-dimensional code/color block label relative to the camera is acquired and calculated via the camera arranged on the robot; from that pose, the coordinates of each label on the sub-area map are calculated and the corresponding virtual wall grids are marked; the marked virtual wall grids are connected sequentially on the sub-area map to form a complete virtual wall over the common area;
and a complete map generation module, configured to splice the reference map and the sub-area map to obtain a complete map of the area to be mapped.
In another aspect, an embodiment of the invention provides a robot autonomous mapping system, suitable for autonomous mapping of an area to be mapped, wherein the area to be mapped comprises a reference area and at least one sub-area to be mapped, and the sub-area to be mapped and the reference area have a common area; a plurality of two-dimensional code/color block labels are arranged on the common area, the length and width of each label are known values, and the spacing between two adjacent labels is smaller than the width of the robot body; the mapping system comprises a single robot and a cloud/server, and the single robot uses an independent marker ID for the data acquired in each different area; wherein:
the single robot is configured to autonomously map the reference area in a first period, send the real-time point cloud data acquired in the reference area and the real-time sub-map created from a preset number of frames of that point cloud data to the cloud/server, and send the images acquired in real time by the camera to the cloud/server;
the cloud/server performs autonomous mapping and optimization based on the real-time point cloud data and real-time sub-maps acquired in the reference area to obtain a reference map; the reference map is a grid map converted from a plurality of reference sub-maps, each reference sub-map comprising reference point cloud data of a preset number of frames; the cloud/server identifies the two-dimensional code/color block labels from the received images, calculates the pose of each label relative to the camera, calculates from that pose the coordinates of each label on the reference map and marks the corresponding virtual wall grids, and connects the marked virtual wall grids sequentially on the reference map to form a complete virtual wall over the common area;
the single robot is further configured to map the sub-area to be mapped in real time in other periods outside the first period, send the real-time point cloud data acquired in the sub-area and the real-time sub-map created from a preset number of frames of that point cloud data to the cloud/server, and send the images acquired in real time by the camera to the cloud/server;
the cloud/server is configured to perform loop closure optimization on the received real-time point cloud data and real-time sub-map acquired in the sub-area to be mapped, together with the reference point cloud data and the reference sub-maps, thereby obtaining the optimized poses of all historical sub-maps and of all point cloud data of the sub-area and returning them to the single robot accordingly, until the single robot completes mapping of the sub-area and generates a sub-area map; the sub-area map is a grid map converted from a plurality of optimized sub-maps; the cloud/server identifies the two-dimensional code/color block labels from the received images, calculates the pose of each label relative to the camera, calculates from that pose the coordinates of each label on the sub-area map and marks the corresponding virtual wall grids, and connects the marked virtual wall grids sequentially on the sub-area map to form a complete virtual wall over the common area;
the cloud/server is further configured to splice the reference map and the sub-area map to obtain a complete map of the area to be mapped, and to modify all data IDs in the complete map to the same ID.
In yet another aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory, where the memory is configured to store a computer program, the computer program including program instructions, and the processor is configured to invoke the program instructions to perform the robot autonomous mapping method according to any of the embodiments above.
In yet another aspect, an embodiment of the present invention provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, the computer program including program instructions, which when executed by a processor, cause the processor to perform the robot autonomous mapping method according to any of the embodiments above.
Compared with the prior art, the method provided by the embodiments of the invention has the following technical effects: (1) Mapping the area to be mapped region by region with the robot effectively improves the efficiency of environment mapping in large scenes, enables time-shared/staged mapping, and reduces the risk that an interruption during large-scene mapping forces the whole map to be rebuilt. (2) The robot autonomously maps the reference area to obtain a reference map as an optimization reference; the real-time point cloud data acquired while mapping the other sub-areas, the created real-time sub-maps, the reference point cloud data, and the reference sub-maps are jointly loop-closure optimized, so that the maps of the other areas are effectively associated with the reference area, the accumulated error of the map data between the other areas and the reference area is effectively eliminated, the final map fusion is facilitated, and the accuracy of the assembled complete map is improved.
(3) A virtual wall is constructed over the common area between each sub-area to be mapped and the reference area: a plurality of two-dimensional code/color block labels are arranged on the common area, the length and width of each label are known values, and the spacing between two adjacent labels is smaller than the width of the robot body. During mapping of the reference area and the sub-areas, the pose of each label relative to the camera is acquired and calculated via the camera arranged on the robot; from that pose, the coordinates of each label on the reference map/sub-area map are calculated and the corresponding virtual wall grids are marked; the marked virtual wall grids are connected sequentially on the reference map/sub-area map to form a complete virtual wall over the common area. This effectively solves the inconvenience of placing physical barriers to bound each area and the large number of barriers that physical demarcation requires.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a robot autonomous mapping method according to an embodiment of the present invention.
Fig. 2 is a block diagram of a robot autonomous mapping apparatus according to an embodiment of the present invention.
Fig. 3 is a flowchart of a specific embodiment of a robot autonomous mapping method provided by the present invention.
Fig. 4a is a schematic diagram of a map constructed by using the robot autonomous map construction method according to the preferred embodiment of the present invention.
Fig. 4b is a schematic diagram of a robot autonomous mapping method according to a preferred embodiment of the present invention for setting virtual walls in a region to be mapped.
Fig. 4c is a schematic diagram of the grid map of the virtual wall shown in Fig. 4b.
Fig. 4d is a schematic diagram of another embodiment of a robot autonomous mapping method according to the present invention for setting virtual walls in a region to be mapped.
Fig. 4e is a schematic diagram of the grid map of the virtual wall shown in Fig. 4d.
Fig. 5 is a schematic diagram of a map constructed by using the robot autonomous map construction method according to another preferred embodiment of the present invention.
Fig. 6 is a schematic diagram of a map constructed by using the robot autonomous map construction method according to another preferred embodiment of the present invention.
Fig. 7 is a schematic diagram of a map constructed by using the robot autonomous map construction method according to another preferred embodiment of the present invention.
Fig. 8 is a block diagram of a robot autonomous mapping system according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to Fig. 1, an embodiment of the present invention provides a robot autonomous mapping method, suitable for autonomous mapping of an area to be mapped by a robot, wherein the area to be mapped includes a reference area and at least one sub-area to be mapped, and the sub-area to be mapped and the reference area have a common area; a plurality of two-dimensional code/color block labels are arranged on the common area, the length and width of each label are known values, and the spacing between two adjacent labels is smaller than the width of the robot body; the robot autonomous mapping method comprises the following steps:
S1. Autonomously map the reference area to obtain a reference map; the reference map is a grid map converted from a plurality of reference sub-maps, each reference sub-map comprising reference point cloud data of a preset number of frames. During autonomous mapping, the pose of each two-dimensional code/color block label relative to the camera is acquired and calculated via a camera arranged on the robot; from that pose, the coordinates of each label on the reference map are calculated and the corresponding virtual wall grids are marked; the marked virtual wall grids are connected sequentially on the reference map to form a complete virtual wall over the common area.
S2. Map the sub-area to be mapped in real time: perform loop closure optimization on the real-time point cloud data acquired in the sub-area, the real-time sub-map created from a preset number of frames of that point cloud data, the reference point cloud data, and the reference sub-maps, thereby obtaining the optimized poses of all historical sub-maps and of all point cloud data of the sub-area, until mapping of the sub-area is complete and a sub-area map is generated. The sub-area map is a grid map converted from a plurality of optimized sub-maps. When mapping the sub-area in real time, the pose of each two-dimensional code/color block label relative to the camera is acquired and calculated via the camera arranged on the robot; from that pose, the coordinates of each label on the sub-area map are calculated and the corresponding virtual wall grids are marked; the marked virtual wall grids are connected sequentially on the sub-area map to form a complete virtual wall over the common area.
S3. Splice the reference map and the sub-area map to obtain a complete map of the area to be mapped.
In this embodiment, the robot is a cleaning robot, and the pose includes the XY-axis coordinates and an orientation θ in two-dimensional space.
It can be appreciated that in this embodiment a virtual wall is constructed over the common area between each sub-area to be mapped and the reference area. Specifically, a plurality of two-dimensional code/color block labels are arranged on the common area, the length and width of each label are known values, and the spacing between two adjacent labels is smaller than the width of the robot body. During mapping of the reference area and the sub-areas, the pose of each label relative to the camera is acquired and calculated via the camera arranged on the robot, and from that pose the coordinates of each label on the reference map/sub-area map and the corresponding virtual wall grids to be marked are calculated; the marked virtual wall grids are connected sequentially on the reference map/sub-area map to form a complete virtual wall over the common area. This effectively solves the inconvenience of placing physical barriers to bound each area and the large number of barriers that physical demarcation requires.
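Connecting the marked virtual wall grids "sequentially" amounts to rasterising a straight segment of grid cells between each pair of adjacent label cells. A minimal Bresenham-style sketch follows; the cell coordinates are illustrative:

```python
def connect_cells(a, b):
    """All grid cells on the straight segment from cell a to cell b (Bresenham)."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    cells = []
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return cells

def build_virtual_wall(label_cells):
    """Join consecutive label cells into one continuous wall of grid cells."""
    wall = []
    for a, b in zip(label_cells, label_cells[1:]):
        wall.extend(connect_cells(a, b)[:-1])  # drop endpoint to avoid duplicates
    wall.append(label_cells[-1])
    return wall

# Three hypothetical label cells forming an L-shaped wall.
wall = build_virtual_wall([(0, 0), (3, 0), (3, 2)])
```

Every cell in the returned list would then be marked as occupied in the grid map, producing the continuous black line of Figs. 4c and 4e.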
In one embodiment, as shown in fig. 4b, one row (one group) of two-dimensional code/color block labels is arranged on the common area, and the spacing between two adjacent labels is smaller than a preset spacing value h_m, where h_m is smaller than the body width of the robot. In this way, the virtual wall grids marked on the reference map/sub-area map whose spacing is less than the preset value h_m are sequentially connected, and the complete virtual wall in the finally generated grid map is a single black line, as shown in fig. 4c.
In another embodiment, as shown in fig. 4d, at least two rows (two groups) of two-dimensional code/color block labels are arranged on the common area; fig. 4d, for example, shows three rows (three groups). The spacing between two adjacent labels within the same row (same group) is smaller than the preset spacing value h_m, while the spacing between two adjacent labels of different rows (different groups) is larger than h_m and smaller than the body width of the robot; that is, the spacing between adjacent rows of labels is larger than h_m and smaller than the body width of the robot, which prevents the robot from passing from one area to another through the common area. In this way, the virtual wall grids marked on the reference map/sub-area map whose spacing is less than the preset value h_m are sequentially connected, and the complete virtual wall in the finally generated grid map consists of a plurality of black lines, as shown in fig. 4e.
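The marking rule described above — mark a grid cell for each label and join neighbouring marks whose spacing is below h_m — can be sketched as follows. The grid resolution, cell coordinates, and set-of-cells wall representation are illustrative assumptions, not structures defined in this disclosure.

```python
import numpy as np

def bresenham(c0, c1):
    """All grid cells on the straight segment between two cells."""
    (x0, y0), (x1, y1) = c0, c1
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    cells = []
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return cells

def connect_virtual_wall(tag_cells, h_m, resolution):
    """Sequentially connect marked virtual-wall grids whose spacing (in
    metres) is below the preset value h_m, yielding a continuous wall."""
    wall = set(tag_cells)
    for a, b in zip(tag_cells, tag_cells[1:]):
        spacing = resolution * np.hypot(b[0] - a[0], b[1] - a[1])
        if spacing < h_m:
            wall.update(bresenham(a, b))
    return wall
```

With a 5 cm grid and h_m = 0.3 m, for instance, label cells at (0, 0), (4, 0) and (8, 0) lie 0.2 m apart and are therefore joined into a single black line.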
It can be understood that when the sub-area to be built includes a plurality of sub-areas, a plurality of two-dimensional code/color block labels can be arranged between the junctions of the sub-areas to be built by the above construction method of the complete virtual wall.
Specifically, when the reference area or the sub-area to be mapped is autonomously mapped, the pose of each two-dimensional code/color block label relative to the camera is calculated through the following steps:
after the camera captures and identifies a two-dimensional code/color block label, the rotation matrix and the translation matrix are calculated from the four corner points of the label by calling the solvePnP function of OpenCV;
let p_o be the coordinate (0, 0, 0) of the two-dimensional code/color block label center point in the label coordinate system, and let p_c be the coordinate of the camera center point in the label coordinate system; the relative relationship between p_o and p_c is given by Equation 1:

p_o = R_1 * p_c + t_1 (Equation 1)

where R_1 is the rotation matrix and t_1 is the translation matrix;

let the transformation matrix be T_1 = [[R_1, t_1], [0, 1]]; then p_o = T_1 * p_c (Equation 1');

let p'_c be the coordinate of the camera center point in the camera coordinate system, and let p'_o be the coordinate of the label center point in the camera coordinate system; from Equation 1' it follows that:

p'_o = T_1^-1 * p'_c (Equation 2)

and the pose of each two-dimensional code/color block label relative to the camera is obtained according to Equation 2.
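Equations 1-2 can be checked numerically as follows; this is a sketch in which R_1 and t_1 are hypothetical stand-ins for the rotation and translation that cv2.solvePnP would return from the label's four corner points.

```python
import numpy as np

def make_T(R, t):
    """Assemble the homogeneous transform T = [[R, t], [0, 1]]."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t).ravel()
    return T

# Hypothetical solvePnP result: label 0.5 m in front of the camera,
# rotated 30 degrees about the Y axis.
theta = np.deg2rad(30.0)
R1 = np.array([[np.cos(theta), 0.0, np.sin(theta)],
               [0.0,            1.0, 0.0],
               [-np.sin(theta), 0.0, np.cos(theta)]])
t1 = np.array([0.0, 0.0, 0.5])
T1 = make_T(R1, t1)

# Equation 2: label centre in the camera frame, p'_o = T1^-1 * p'_c,
# where p'_c = (0, 0, 0) is the camera centre in its own frame.
p_c_prime = np.array([0.0, 0.0, 0.0, 1.0])
p_o_prime = np.linalg.inv(T1) @ p_c_prime
```

Inverting T1 yields the familiar closed form [R1^T, -R1^T t1], which the sketch reproduces.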
After the pose of each two-dimensional code/color block label relative to the camera is obtained, the coordinates of each label on the reference map/sub-area map are further calculated, and the corresponding virtual wall grids are marked, through the following steps:
let T_2 be the transformation matrix from the camera center point p'_c to the robot center point p_R in the robot base coordinate system; then p'_c = T_2 * p_R (Equation 3). From Equation 3 and Equation 2, the coordinate of the label center point in the robot base coordinate system is:

T_2^-1 * T_1^-1 * p'_c (Equation 4)

Let T_3 be the transformation matrix from the robot center point p_R in the robot base coordinate system to the robot center point p_W in the map coordinate system during mapping; then p_R = T_3 * p_W (Equation 5). From Equation 4 and Equation 5, the coordinate of the label center point in the map coordinate system is:

T_3^-1 * T_2^-1 * T_1^-1 * p'_c (Equation 6)

The coordinates of the center point of each two-dimensional code/color block label on the reference map/sub-area map are obtained according to Equation 6, and the corresponding virtual wall grid is obtained from those coordinates together with the known length and width of each label.
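The chain of Equations 3-6 composes three inverse transforms to move the label centre from the camera frame into the map frame, after which the virtual-wall grid cell follows from the grid resolution. A sketch with identity rotations and hypothetical translations (in practice T_1 comes from solvePnP, T_2 from camera-to-base calibration, and T_3 from the robot's mapping pose):

```python
import numpy as np

def make_T(R, t):
    """Homogeneous transform [[R, t], [0, 1]]."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

I3 = np.eye(3)
T1 = make_T(I3, np.array([0.0, 0.0, 1.0]))    # label/camera chain (Equation 1')
T2 = make_T(I3, np.array([-0.2, 0.0, 0.0]))   # camera w.r.t. robot base (Equation 3)
T3 = make_T(I3, np.array([-3.0, -2.0, 0.0]))  # robot base w.r.t. map (Equation 5)

# Equation 6: label centre in the map coordinate system.
p_c_prime = np.array([0.0, 0.0, 0.0, 1.0])    # camera centre in its own frame
p_map = np.linalg.inv(T3) @ np.linalg.inv(T2) @ np.linalg.inv(T1) @ p_c_prime

# Mark the corresponding virtual-wall grid cell at a 5 cm grid resolution
# (the resolution is an illustrative assumption).
RESOLUTION = 0.05
cell = (int(round(p_map[0] / RESOLUTION)), int(round(p_map[1] / RESOLUTION)))
```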
In this embodiment, the length and width of each two-dimensional code/color block label are known values (for example, 80mm x 80mm, or may be rectangular).
Specifically, a two-dimensional code/color block label is identified as follows: each label image arranged on the common area is stored in advance, and the image captured in real time by the camera is matched by image features against each pre-stored label image so as to identify the label. The label images arranged on the common area may be all the same, partly the same, or all different; it can be understood that the more distinct the label images arranged on the common area are, the higher the identification accuracy.
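Image-feature matching against the pre-stored label images can take many forms; the sketch below uses plain normalised cross-correlation over equally sized grey crops as a stand-in for whatever feature matcher is deployed, with the 0.8 acceptance threshold as an illustrative assumption.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized grey images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def identify_tag(crop, stored_tags, threshold=0.8):
    """Return the ID of the best-matching pre-stored label image, or None
    when no stored image clears the acceptance threshold."""
    best_id, best_score = None, threshold
    for tag_id, image in stored_tags.items():
        score = ncc(crop, image)
        if score > best_score:
            best_id, best_score = tag_id, score
    return best_id
```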
It will be appreciated that the process of calculating the rotation matrix and the translation matrix by calling the OpenCV solvePnP function is well known to those skilled in the art and will not be described in detail herein.
In addition, in this embodiment, the two-dimensional code/color block label coordinate system takes the label center point as its origin and is a right-handed coordinate system with the Z axis pointing inward, perpendicular to the label plane. The map coordinate system is the world coordinate system of the robot's base map, constructed with the robot chassis as the reference plane; the laser device used for mapping in this embodiment is mounted on the robot chassis. The pose of the camera relative to the robot center point is known (that is, the transformation matrix T_2 is known), so the pose of the label relative to the robot can be obtained; the pose of the robot in the map is also known during mapping (that is, the transformation matrix T_3 is known), so the coordinates of the label in the map can be obtained.
Optionally, in step S2, the loop optimization may be performed in a first manner comprising the following steps:
s21, matching real-time point cloud data acquired by the map to be built subarea with the reference subarea to perform loop detection;
s22, when the matching degree is larger than a preset threshold, judging that loop-back exists and executing S23; otherwise, returning to the step S21;
S23, taking the reference point cloud data and the reference sub-maps as references, performing nonlinear least squares optimization on all the historical sub-maps and all the point cloud data of the sub-area to be mapped, so as to optimize the pose of all the historical sub-maps and of all the point cloud data of the sub-area.
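Steps S21-S22 above reduce to scoring incoming scans against the reference data and declaring a loop once the matching degree exceeds the preset threshold. A minimal occupancy-hit sketch (the cell-set representation and the 0.6 threshold are assumptions; the nonlinear least squares of S23 is not shown):

```python
def matching_degree(scan_cells, occupied):
    """Fraction of scan cells that land on occupied cells of a reference sub-map."""
    if not scan_cells:
        return 0.0
    hits = sum(1 for c in scan_cells if c in occupied)
    return hits / len(scan_cells)

def detect_loop(scan_cells, reference_submaps, threshold=0.6):
    """S21/S22: index of the first reference sub-map whose matching degree
    exceeds the threshold (loop found, proceed to S23), or None (keep
    matching newly acquired scans)."""
    for i, occupied in enumerate(reference_submaps):
        if matching_degree(scan_cells, occupied) > threshold:
            return i
    return None
```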
Optionally, in step S2, the loop optimization may also be performed in a second manner comprising the following steps:
S21, matching the real-time sub-map of the sub-area to be mapped with the reference point cloud data to perform loop detection;
s22, when the matching degree is larger than a preset threshold, judging that loop-back exists and executing S23; otherwise, returning to the step S21;
S23, taking the reference point cloud data and the reference sub-maps as references, performing nonlinear least squares optimization on all the historical sub-maps and all the point cloud data of the sub-area to be mapped, so as to optimize the pose of all the historical sub-maps and of all the point cloud data of the sub-area.
As an improvement of the above scheme, the to-be-constructed map area is a first to-be-constructed map area, and the robot autonomous map construction method is further adapted to autonomously construct an nth to-be-constructed map area, where the nth to-be-constructed map area includes an nth-1 to-be-constructed map area and at least one nth to-be-constructed map sub-area, where the nth to-be-constructed map sub-area and the nth-1 to-be-constructed map area have a common area; wherein N is an integer greater than or equal to 2; the autonomous robot mapping method further comprises the following steps:
taking the obtained complete map of the N-1 to-be-built map area as a new reference map, wherein the N-1 to-be-built map area forms a new reference area;
performing real-time mapping on the Nth sub-area to be mapped, and performing loop optimization on the real-time point cloud data acquired in the Nth sub-area, the real-time sub-map created from a preset number of frames of that point cloud data, and the reference point cloud data and reference sub-maps in the new reference map, so as to obtain the optimized pose of all historical sub-maps and of all point cloud data of the Nth sub-area, until the mapping of the Nth sub-area is completed and an Nth sub-area map is generated;
And splicing the new reference map and the Nth sub-region map to obtain a complete map of the Nth region to be built.
As an improvement of the above solution, in step S2, the loop optimization is a first loop optimization, and before the mapping of the sub-area to be mapped is completed and the sub-area map is generated, the method further includes:
S201, performing a secondary loop optimization on all the historical sub-maps and all the point cloud data of the sub-area to be mapped together with the reference point cloud data and the reference sub-maps, so as to obtain the pose of all the historical sub-maps and of all the point cloud data after the secondary optimization of the sub-area;
s202, generating the sub-area map based on the pose of all the historical sub-maps and the pose of all the point cloud data after the sub-area of the map to be built is secondarily optimized.
Optionally, in step S201, the secondary loop optimization may be performed in a first manner comprising the following steps:
s2011, matching all point cloud data of the sub-area of the map to be built with the reference sub-map to perform loop detection;
s2012, when the matching degree is greater than a preset threshold, judging that loop-back exists and executing S2013; otherwise, returning to the step S2011;
S2013, taking the reference point cloud data and the reference sub-maps as references, performing nonlinear least squares optimization on all the historical sub-maps and all the point cloud data of the sub-area to be mapped, so as to optimize the pose of all the historical sub-maps and of all the point cloud data of the sub-area.
Optionally, in step S201, the secondary loop optimization may also be performed in a second manner comprising the following steps:
S2011, matching all the historical sub-maps of the sub-area to be mapped with the reference point cloud data to perform loop detection;
s2012, when the matching degree is greater than a preset threshold, judging that loop-back exists and executing S2013; otherwise, returning to the step S2011;
S2013, taking the reference point cloud data and the reference sub-maps as references, performing nonlinear least squares optimization on all the historical sub-maps and all the point cloud data of the sub-area to be mapped, so as to optimize the pose of all the historical sub-maps and of all the point cloud data of the sub-area.
It can be understood that in this embodiment the state of a sub-map is either under creation or completed: the number of frames of point cloud data in a sub-map under creation is less than the preset frame count, while the number of frames in a completed sub-map equals the preset frame count; a historical sub-map is a sub-map whose creation is completed. In addition, in implementation, virtual walls are arranged in the area to be mapped so as to divide it into the reference area and at least one sub-area to be mapped, such that each sub-area to be mapped shares as large a common area with the reference area as possible.
Referring to fig. 2, an embodiment of the present invention provides a robot autonomous mapping device, which is adapted to perform autonomous mapping on a to-be-mapped area by a robot, where the to-be-mapped area includes a reference area and at least one to-be-mapped sub-area, and the to-be-mapped sub-area and the reference area have a common area; a plurality of two-dimensional codes or color block labels are arranged on the public area, the length and the width of each two-dimensional code or color block label are known values, and the distance between two adjacent two-dimensional codes or color block labels is smaller than the width of the robot body; the robot autonomous mapping device comprises:
the reference map construction module 901 is configured to autonomously map the reference area to obtain a reference map; the reference map is a grid map converted from a plurality of reference sub-maps, each of which comprises a preset number of frames of reference point cloud data; during the autonomous mapping, the pose of each two-dimensional code/color block label relative to the camera is captured and calculated by a camera arranged on the robot, the coordinates of each label on the reference map and the corresponding virtual wall grids to be marked are calculated from that pose, and the marked virtual wall grids are sequentially connected on the reference map to form a complete virtual wall over the common area;
the sub-area map construction module 902 is configured to perform real-time mapping on the sub-area to be mapped, and to perform loop optimization on the real-time point cloud data acquired in the sub-area, the real-time sub-map created from a preset number of frames of that point cloud data, the reference point cloud data, and the reference sub-maps, so as to obtain the optimized pose of all historical sub-maps and of all point cloud data of the sub-area, until the mapping of the sub-area is completed and a sub-area map is generated; the sub-area map is a grid map converted from a plurality of optimized sub-maps; during the real-time mapping of the sub-area, the pose of each label relative to the camera is captured and calculated by the camera arranged on the robot, the coordinates of each label on the sub-area map and the corresponding virtual wall grids to be marked are calculated from that pose, and the marked virtual wall grids are sequentially connected on the sub-area map to form a complete virtual wall over the common area;
And the complete map generating module 903 is configured to splice the reference map and the sub-area map to obtain a complete map of the area to be mapped.
It can be understood that the robot autonomous mapping device provided by the embodiment of the present invention corresponds to the robot autonomous mapping method based on the mobile robot disclosed in the foregoing embodiment, and the specific working process may refer to the corresponding content of the robot autonomous mapping method described in the foregoing, so that redundant description is omitted herein.
The embodiment of the invention also provides electronic equipment, which comprises a processor and a memory, wherein the memory is used for storing a computer program, the computer program comprises program instructions, and the processor is configured to call the program instructions to execute the robot autonomous mapping method according to any embodiment.
An embodiment of the present invention further provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, the computer program including program instructions, which when executed by a processor, cause the processor to perform the robot autonomous mapping method according to any of the embodiments above.
In the following, detailed implementation procedures of the robot autonomous mapping method will be further described through specific embodiments.
DETAILED DESCRIPTION OF EMBODIMENT (S) OF INVENTION
Referring to fig. 3, an embodiment of the present invention provides a robot autonomous mapping method suitable for autonomous mapping of an area to be mapped by a single robot, where the area to be mapped comprises a reference area and at least one sub-area to be mapped, and the sub-area to be mapped and the reference area have a common area; a plurality of two-dimensional code/color block labels are arranged on the common area, the length and width of each label are known values, and the spacing between two adjacent labels is smaller than the width of the robot body. The single robot autonomously maps the reference area and the sub-area to be mapped in different periods, and the robot autonomous mapping method comprises the following steps S1-S3:
s1, autonomously mapping the reference area in a first period by the single robot to obtain a reference map; the reference map is a grid map formed by converting a plurality of reference sub-maps, and each reference sub-map comprises reference point cloud data of a preset frame; in the autonomous building, acquiring and calculating the pose of each two-dimensional code/color block label relative to the camera through a camera arranged on a robot, calculating the coordinates of each two-dimensional code/color block label on the reference map and a virtual wall grid corresponding to the mark according to the pose of each two-dimensional code/color block label relative to the camera, and sequentially connecting the marked virtual wall grids on the reference map to form a complete virtual wall on the public area;
S2, performing real-time mapping on the sub-area to be mapped by the single robot in other periods outside the first period, and performing loop optimization on the real-time point cloud data acquired in the sub-area, the real-time sub-map created from a preset number of frames of that point cloud data, the reference point cloud data, and the reference sub-maps, so as to obtain the optimized pose of all historical sub-maps and of all point cloud data of the sub-area, until the mapping of the sub-area is completed and a sub-area map is generated; the sub-area map is a grid map converted from a plurality of optimized sub-maps; during the real-time mapping of the sub-area, the pose of each two-dimensional code/color block label relative to the camera is captured and calculated by a camera arranged on the robot, the coordinates of each label on the sub-area map and the corresponding virtual wall grids to be marked are calculated from that pose, and the marked virtual wall grids are sequentially connected on the sub-area map to form a complete virtual wall over the common area;
And S3, splicing the reference map and the sub-region map to obtain a complete map of the region to be mapped.
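Step S3's splicing amounts to merging the two grid maps in their shared map frame. A minimal sketch, assuming each map is a dict of grid cells to occupancy values with 1 = occupied (including the virtual wall) and an occupied-wins rule on overlapping cells — both assumptions for illustration:

```python
def stitch_grids(reference, sub_region):
    """Merge two grid maps expressed in the same map coordinate system."""
    complete = dict(reference)
    for cell, value in sub_region.items():
        # On overlapping cells, keep the occupied value so the virtual wall
        # and obstacles survive the merge.
        complete[cell] = max(complete.get(cell, 0), value)
    return complete
```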
In a preferred embodiment, the data processing involved in the autonomous mapping method of the robot shown in fig. 3 is performed in a local computing unit of the mobile single robot;
the step S1 shown in fig. 3 specifically includes:
s11, receiving real-time scanning data of a reference area obtained by scanning laser radar equipment of a single robot in a first period;
S12, converting the received real-time scanning data of the reference area into real-time point cloud data and inserting it into the corresponding position of the sub-map under creation, and obtaining the created real-time sub-map once the preset number of frames of point cloud data is reached;
s13, carrying out loop optimization on the basis of the real-time point cloud data and the real-time sub-map of the reference area, all the historical sub-maps and all the point cloud data of the reference area, and obtaining the pose of all the historical sub-maps and the pose of all the point cloud data after the optimization of the reference area until the construction of the reference area is completed, and obtaining the reference map based on all the optimized historical sub-maps;
the step S2 shown in fig. 3 specifically includes:
s21, receiving real-time scanning data of a sub-region to be constructed, which is obtained by scanning laser radar equipment of a single robot, in other different periods outside the first period;
S22, converting the received real-time scanning data of the sub-area to be mapped into real-time point cloud data and inserting it into the corresponding position of the sub-map under creation, and obtaining the created real-time sub-map once the preset number of frames of point cloud data is reached;
S23, performing loop optimization on the real-time point cloud data and the real-time sub-map of the sub-area to be mapped together with the reference point cloud data and the reference sub-maps, so as to obtain the optimized pose of all the historical sub-maps and of all the point cloud data of the sub-area, and generating the sub-area map based on all the optimized historical sub-maps after the mapping of the sub-area is completed.
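The sub-map life cycle used in steps S12 and S22 — insert per-frame point clouds into the sub-map under creation and finalise it once the preset frame count is reached — can be sketched as below; only this bookkeeping is shown (the 3-frame count and list-of-frames representation are illustrative assumptions, and scan matching is omitted):

```python
class SubMapBuilder:
    """Minimal sub-map bookkeeping: a sub-map is 'under creation' until it
    holds the preset number of point cloud frames, then becomes historical."""

    def __init__(self, preset_frames=50):
        self.preset_frames = preset_frames
        self.current = []    # frames of the sub-map under creation
        self.finished = []   # completed (historical) sub-maps

    def insert(self, point_cloud):
        # In the real pipeline the frame would first be matched to its
        # optimal position in the sub-map; here it is simply appended.
        self.current.append(point_cloud)
        if len(self.current) == self.preset_frames:
            self.finished.append(self.current)
            self.current = []

builder = SubMapBuilder(preset_frames=3)
for frame in range(7):
    builder.insert([(float(frame), 0.0)])  # stand-in for one frame of points
```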
In another preferred embodiment, the data processing related to the autonomous mapping method of the robot is completed together with a cloud or a server side in a local computing unit of the mobile single robot;
the step S1 shown in fig. 3 specifically includes:
s11, receiving real-time scanning data of a reference area obtained by scanning laser radar equipment of a single robot in a first period;
S12, converting the received real-time scanning data of the reference area into real-time point cloud data and inserting it into the corresponding position of the sub-map under creation, and obtaining the created real-time sub-map once the preset number of frames of point cloud data is reached;
S13, real-time point cloud data and real-time sub-maps of the reference area are sent to a cloud or a server in real time, so that the cloud or the server performs loop optimization on the basis of the real-time point cloud data and the real-time sub-maps of the reference area and all the historical sub-maps and all the point cloud data of the reference area to obtain the pose of all the historical sub-maps and the pose of all the point cloud data after the optimization of the reference area until the construction of the reference area is completed, and then the reference map is obtained on the basis of all the optimized historical sub-maps;
the step S2 shown in fig. 3 specifically includes:
s21, receiving real-time scanning data of a sub-region to be constructed, which is obtained by scanning laser radar equipment of a single robot, in other different periods outside the first period;
S22, converting the received real-time scanning data of the sub-area to be mapped into real-time point cloud data and inserting it into the corresponding position of the sub-map under creation, and obtaining the created real-time sub-map once the preset number of frames of point cloud data is reached;
S23, sending the real-time point cloud data and real-time sub-maps of the sub-area to be mapped in real time to the cloud or server side, so that the cloud or server side performs loop optimization on the real-time point cloud data and real-time sub-maps of the sub-area, the reference point cloud data, and the reference sub-maps, thereby obtaining the optimized pose of all the historical sub-maps and of all the point cloud data of the sub-area, and generating the sub-area map based on all the optimized historical sub-maps after the mapping of the sub-area is completed;
Step S3 shown in fig. 3 is completed by the cloud end or the server end.
In the foregoing embodiment, optionally, the sub-area to be mapped comprises one sub-area, and in step S2 shown in fig. 3 the sub-area is mapped in real time in a second period; the data acquired by the single robot for the reference area and for the sub-area in different periods carry different mark IDs; alternatively, the mark IDs of the data acquired in different periods are the same, and after the reference map is generated, all of its data IDs are changed to other, different mark IDs.
The robot autonomous mapping method provided in the above embodiments is described in further detail below with reference to fig. 4 a.
For convenience of description, the case where the area to be mapped 100 comprises the reference area 10 and one sub-area to be mapped 20 is described. That is, the area to be mapped 100 (the scene) is divided into the reference area 10 and the sub-area to be mapped 20, and the two share a common area 30 (i.e., an overlapping area). Map block 1 shown in fig. 4a corresponds to the reference map generated after the single robot 40 maps the reference area 10 in period one, and map block 2 shown in fig. 4a corresponds to the sub-area map generated after the single robot 40 maps the sub-area to be mapped 20 in period two. It will be appreciated that both the reference map and the sub-area map are grid maps in which the complete virtual wall built on the common area 30 is black (representing a region the robot must not enter or pass through) and the white regions are regions where the robot may move or pass; this distinction is not shown in fig. 4a for clarity of description of the sub-map construction.
It can be appreciated that, in implementation, the reference area 10 and the sub-area 20 to be created may be divided by setting a virtual wall, and the virtual wall (such as a color block label, a two-dimensional code, etc.) is constructed on the boundary line between the reference area 10 and the sub-area 20 to be created to define the area where the single robot 40 operates at different periods, as shown in fig. 4 b. Please refer to the above related description for the construction process of the virtual wall, and the detailed description is omitted herein.
In the embodiment in which the data processing involved in the robot autonomous mapping method is executed by the local computing unit of the mobile single robot 40 (hereinafter the local computing scheme), the single robot 40 is a SLAM robot with sufficient computing capability to execute the whole SLAM process, including the processing of point cloud data, the creation and optimization of sub-maps, the generation of the reference map, and the fusion of maps. In the embodiment in which the processing is completed by the local computing unit of the mobile single robot together with a cloud or server side (hereinafter the cloud computing scheme), the single robot 40 only needs to be a SLAM robot with general computing capability that can process point cloud data and create sub-maps; operations such as the optimization of point cloud data and sub-maps, the generation of the reference map, and the fusion of maps are performed by the cloud or server side, and the single robot 40 is responsible only for collecting the point cloud data and creating the real-time sub-maps of the corresponding area, and for uploading the real-time point cloud data and real-time sub-maps to the cloud or server side for optimization.
Specifically, in steps S11 and S12 (including the local computing scheme and the cloud computing scheme), the reference area is first autonomously mapped in the first period by the single robot 40 to obtain the reference map, during the mapping process of moving the single robot 40, the real-time scan data obtained by scanning the single robot 40 by the laser radar device is sent to the local computing unit (set inside the single robot 40), the real-time scan data of the reference area is converted into real-time point cloud data by the local computing unit to be inserted (matched) into the corresponding (optimal) position of the sub-map under creation, and when the point cloud data of the preset frame number (for example, 50 frames) is reached, the created real-time sub-map (such as sub-map 1-1, sub-map 1-2, sub-map 1-3 in map block 1 shown in fig. 4 a) is obtained.
Next, in step S13 (including a local computing scheme and a cloud computing scheme), the real-time point cloud data and the real-time sub-map of the reference area 10 and all the historical sub-maps and all the point cloud data of the reference area 10 are subjected to loop optimization by the local computing unit of the single robot 40 or the cloud/server side, so as to obtain the pose of all the historical sub-maps and the pose of all the point cloud data after the optimization of the reference area. It can be appreciated that, since the map obtained by autonomous mapping of the reference area 10 by the single robot 40 is used as a reference map (reference), in order to ensure the accuracy of the data of the reference map itself, loop optimization is required to be performed on all the historical sub-maps and all the point cloud data of the area to eliminate the accumulated error.
Specifically, the loop optimization in step S13 may be performed in two ways. Mode one: performing loop detection on the currently added real-time point cloud data and all the historical sub-maps of the reference area 10;
mode two: and carrying out loop detection on the real-time sub-map which is currently created and all point cloud data of the reference area 10.
In a specific implementation, taking mode one as an example, the current real-time point cloud data is matched against all the historical sub-maps of the reference area 10. When the matching degree between the current real-time point cloud data and any one of the historical sub-maps is higher than a set threshold (which can be set in advance), a loop is determined to be formed (for example, see the overlapping portions between the sub-maps in map block 1 shown in fig. 4a). After the loop is determined, an optimization process (for example, nonlinear least squares optimization) is performed on all the historical sub-maps and all the point cloud data of the reference area 10 to correct their poses, that is, to correct the X-axis and Y-axis coordinates and the orientation θ of all the historical sub-maps and of all the point cloud data of the reference area 10 in the map coordinate system.
Further, in this step S13, after the mapping of the reference area 10 is completed (for example, when the reference area is detected to be closed, i.e., complete pose data has been obtained), the reference map is obtained by the local computing unit of the single robot 40 or the cloud/server by fusing all the optimized historical sub-maps (specifically, by fusion means such as stitching).
It can be appreciated that in steps S21 to S22 (covering both the local-computing and cloud-computing schemes), the process by which the single robot 40 maps the sub-area 20 to be mapped in real time, during other periods outside the first period, to obtain real-time point cloud data and real-time sub-maps is substantially identical to the process by which it maps the reference area during the first period, and is not repeated herein.
Unlike the real-time mapping of the reference area 10 in the first period, when the single robot 40 maps the sub-area 20 to be mapped in the other periods to obtain real-time point cloud data and real-time sub-maps, loop optimization is performed between that data and the reference point cloud data and reference sub-maps of the already constructed reference map, rather than self-loop optimization against all the historical sub-maps and all the point cloud data of the sub-area 20 to be mapped itself.
Specifically, in step S23 (including the local computing scheme and the cloud computing scheme), the loop optimization may be performed in the following two manners:
mode one: carrying out loop detection on the currently received real-time point cloud data of the sub-area of the map to be built and all reference sub-maps of the reference map;
mode two: and carrying out loop detection on the currently received real-time sub-map of the map sub-region to be built and all datum point cloud data of the datum map.
The specific implementation process of the two manners of loop optimization in this step may refer to the foregoing related description, and the detailed description is omitted here.
It will be appreciated that, in order to distinguish the data collected by the single robot 40 in different areas during different periods, each such data set is assigned an independent tag ID; for example, all the data collected by the single robot 40 for the reference area 10 in the first period is tagged ID1, and all the data collected for the sub-area 20 to be mapped in the second period is tagged ID2.
Further, in step S23, after the mapping of the sub-area to be mapped is completed (for example, when the sub-area is detected to be closed, i.e., complete pose data has been obtained), the sub-area map is obtained by fusing all the optimized historical sub-maps of the sub-area 20 to be mapped (specifically, by fusion means such as stitching).
Then, in step S3 shown in fig. 3, the reference map and the sub-area map are stitched by the local computing unit of the single robot 40 or the cloud/server to obtain the complete map of the area to be mapped, and all data IDs in the complete map are modified to the same ID, for example ID3.
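The ID bookkeeping in step S3 can be sketched as follows. The record layout and helper name are hypothetical; only the behavior taken from the text is assumed, namely merging both sessions and rewriting every tag ID to one unified ID such as ID3.

```python
# Illustrative sketch of step S3's bookkeeping: records collected under
# different session IDs are merged into one container and every record
# is re-tagged with a single unified ID.

def merge_maps(reference_records, subarea_records, unified_id):
    """Concatenate records from both sessions and unify their tag IDs."""
    merged = []
    for rec in reference_records + subarea_records:
        merged.append(dict(rec, id=unified_id))  # copy, overwrite session ID
    return merged

ref = [{"id": "ID1", "submap": "1-10"}, {"id": "ID1", "submap": "1-11"}]
sub = [{"id": "ID2", "submap": "2-10"}]
complete = merge_maps(ref, sub, "ID3")
print({r["id"] for r in complete})  # {'ID3'}
```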
It can be seen that, in this embodiment, self-loop optimization (using the area's own historical sub-maps and point cloud data) is adopted only while constructing the reference area 10. While constructing the sub-area 20 to be mapped, the reference map constructed in the reference area (its reference sub-maps and reference point cloud data) is instead used as the reference for loop optimization of all the historical sub-maps and all the point cloud data of the sub-area 20. The maps of the sub-area 20 to be mapped and of the reference area 10 are thereby effectively associated, and the accumulated error of the map data between them is effectively reduced or eliminated, which facilitates the final map fusion and improves the accuracy of the fused complete map.
Specifically, with continued reference to fig. 4a, the sub-maps 1-10, 1-11 and 1-12 contained in map block 1 (corresponding to the reference map of the reference area 10) and the sub-maps 2-10, 2-11 and 2-12 contained in map block 2 (corresponding to the sub-area map of the sub-area 20 to be mapped) overlap on the common area. These overlapping portions serve as association points between the sub-area 20 to be mapped and the reference area 10 for optimizing the sub-area data, thereby reducing or eliminating the accumulated error of the map data between them.
It can be understood that the larger the common area 30 between the sub-area 20 to be mapped and the reference area 10, the higher the accuracy of the loop optimization performed against the reference map on all the historical sub-maps and all the point cloud data of the sub-area 20 to be mapped, and the better the reduction of accumulated error between them. Therefore, when virtual walls are arranged in the area to be mapped to divide it into the reference area and at least one sub-area to be mapped, the common area between each sub-area to be mapped and the reference area should be made as large as possible.
For example, as shown in fig. 5, as an improvement of the scheme shown in fig. 4a, the area to be mapped 100 is divided as follows: the reference area 10 (map block 1) is located at the center of the area to be mapped 100, and the sub-area 20 to be mapped (map block 2) surrounds the periphery of the reference area 10, with a common area 30 (the hatched portion in fig. 5) between them. With this arrangement, a common (overlapping) area exists around the entire periphery of the reference area 10, which effectively improves the accuracy of the loop optimization performed against the reference map on all the historical sub-maps and all the point cloud data of the sub-area 20 to be mapped.
It should be understood that fig. 4a and fig. 5 show the area to be mapped divided into a reference area 10 and one sub-area 20 to be mapped, with the single robot 40 correspondingly mapping the reference area 10 and the sub-area 20 autonomously in different periods; however, the invention is not limited to this division, and at least one sub-area to be mapped may be included, fig. 4a and fig. 5 showing only the single-sub-area case.
As shown in fig. 5, after the single robot 40 autonomously maps the reference area 10 in period one to obtain the reference map, it autonomously maps the sub-area 20 to be mapped in period two to obtain the sub-area map. In this case, in order to distinguish the data collected by the single robot 40 in the reference area 10 from that collected in the sub-area 20 to be mapped, either of the following means may be adopted:
Means one: the data collected by the single robot 40 in the reference area 10 and in the sub-area 20 to be mapped carry different tag IDs; for example, the data collected in the reference area 10 is tagged ID1 and the data collected in the sub-area 20 to be mapped is tagged ID2 (in a specific implementation, this may be changed by an internal setting of the single robot 40).
Means two: the single robot 40 uses the same tag ID for the data collected in the reference area 10 and in the sub-area 20 to be mapped, and all data IDs in the reference map are changed once the reference map is obtained. For example, if the data collected for the reference area 10 in period one is tagged ID1, then after the reference map is finally obtained by the local computing unit of the single robot 40 or the cloud/server, all data IDs in the reference map may be changed from ID1 to ID2; the single robot 40 can then continue to use ID1 for the data collected in the sub-area 20 to be mapped in period two, so that the data IDs remain distinguishable.
In the foregoing embodiment, optionally, the sub-area to be mapped includes at least two sub-areas; in step S2 shown in fig. 3, the single robot maps the different sub-areas to be mapped in real time in different periods outside the first period, and the tag IDs of the data acquired for the reference area and for each sub-area in the different periods are all different.
Specifically, referring to fig. 6, the area to be mapped 100 is divided, for example, into a reference area 10 and three sub-areas to be mapped (a first sub-area 21, a second sub-area 22 and a third sub-area 23 to be mapped), wherein a first common area 31 exists between the first sub-area 21 and the reference area 10, a second common area 32 exists between the second sub-area 22 and the reference area 10, and a third common area 33 exists between the third sub-area 23 and the reference area 10. The single robot 40 maps the reference area 10 in period one, the first sub-area 21 in period two, the second sub-area 22 in period three, and the third sub-area 23 in period four. Map block 1 shown in fig. 6 corresponds to the reference map generated after the single robot 40 maps the reference area 10 in period one, map block 2 corresponds to the first sub-area map generated after it maps the first sub-area 21 in period two, map block 3 corresponds to the second sub-area map generated after it maps the second sub-area 22 in period three, and map block 4 corresponds to the third sub-area map generated after it maps the third sub-area 23 in period four.
In this way, after the single robot 40 maps the reference area 10 in period one to generate the reference map, it can map the first sub-area 21, the second sub-area 22 and the third sub-area 23 to be mapped in real time in periods two, three and four respectively, and send the acquired real-time point cloud data and real-time sub-maps to the local computing unit or the cloud/server for loop optimization.
In addition, in order to distinguish the data acquired by the single robot 40 for the reference area 10 and the three sub-areas to be mapped (the first sub-area 21, the second sub-area 22 and the third sub-area 23) in the different periods, each data set is assigned an independent tag ID: all the data acquired for the reference area 10 in period one is tagged ID1, all the data acquired for the first sub-area 21 in period two is tagged ID2, all the data acquired for the second sub-area 22 in period three is tagged ID3, and all the data acquired for the third sub-area 23 in period four is tagged ID4. In this way, the local computing unit or the cloud/server can perform loop optimization on the data sent by the single robot in the different periods based on the different tag IDs.
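The routing just described, where the local computing unit or cloud/server sorts incoming frames into per-session optimization batches by tag ID, can be sketched as follows; the frame layout and field names are illustrative assumptions.

```python
# Hypothetical sketch: group incoming frames by their session tag ID
# (ID1..ID4 for the reference area and the three sub-areas), so each
# batch can be fed to the matching loop-optimization session.
from collections import defaultdict

def group_by_session(frames):
    sessions = defaultdict(list)
    for frame in frames:
        sessions[frame["id"]].append(frame["data"])
    return dict(sessions)

frames = [
    {"id": "ID1", "data": "ref-scan-0"},
    {"id": "ID2", "data": "sub1-scan-0"},
    {"id": "ID1", "data": "ref-scan-1"},
]
print(sorted(group_by_session(frames)))  # ['ID1', 'ID2']
```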
In another preferred embodiment, the area to be mapped is a first area to be mapped, and the robot autonomous mapping method is further adapted to autonomously map an Nth area to be mapped, where the Nth area to be mapped includes the (N-1)th area to be mapped and at least one Nth sub-area to be mapped, the Nth sub-area to be mapped and the (N-1)th area to be mapped having a common area, and N being an integer greater than or equal to 2; the robot autonomous mapping method further comprises the following steps:
S4, taking the obtained complete map of the (N-1)th area to be mapped as a new reference map, the (N-1)th area to be mapped forming a new reference area;
S5, mapping the Nth sub-area to be mapped in real time in a further period, and performing loop optimization between the real-time point cloud data acquired in the Nth sub-area to be mapped and the real-time sub-maps created from the point cloud data of every preset number of frames, on the one hand, and the reference point cloud data and reference sub-maps in the new reference map, on the other hand, so as to obtain the optimized poses of all the historical sub-maps and of all the point cloud data of the Nth sub-area to be mapped, until the single robot completes the mapping of the Nth sub-area to be mapped and generates the Nth sub-area map;
S6, stitching the new reference map and the Nth sub-area map to obtain the complete map of the Nth area to be mapped, and modifying all data IDs in this complete map to the same ID.
The above-described preferred embodiment will be described in detail with reference to fig. 7.
For convenience of description, taking the embodiment shown in fig. 5 as an example, the above iterative scheme is applied on the basis of fig. 5 to obtain the map shown in fig. 7 (N=2).
Specifically, the complete map constructed for the area to be mapped 100 shown in fig. 5 (referred to in this embodiment as the first area to be mapped 100) is used as the new reference map (corresponding to map block 1 in fig. 7), and correspondingly the first area to be mapped 100 becomes the new reference area 10'. With the new reference area 10' as the reference, autonomous mapping and optimization can be performed on a new area to be mapped. To distinguish it from the previous embodiment, the new area to be mapped is referred to as the second area to be mapped 200 (the case N=2); the second area to be mapped 200 includes the new reference area 10' and a second sub-area 20' to be mapped, and a common area 30' exists between the new reference area 10' and the second sub-area 20'. The single robot 40 is placed in the second sub-area 20' in a further period to map it in real time; the local computing unit of the single robot 40 (or, under the cloud computing scheme, the cloud/server to which the data is sent) performs loop optimization between the real-time point cloud data acquired in the second sub-area 20' and the real-time sub-maps created from the point cloud data of every preset number of frames (for example, 50 frames), on the one hand, and the reference point cloud data and reference sub-maps in the new reference map, on the other hand, thereby obtaining the optimized poses of all the historical sub-maps and of all the point cloud data of the second sub-area 20', until the single robot 40 completes the mapping of the second sub-area 20' to generate the second sub-area map (corresponding to map block 2 in fig. 7). Finally, the local computing unit of the single robot 40 or the cloud/server stitches the new reference map and the second sub-area map to obtain the complete map of the second area to be mapped 200, and modifies all data IDs in it to the same ID.
It can be understood that, after the complete map of the second area to be mapped 200 shown in fig. 7 is obtained, it may in turn be used as a new reference map, the second area to be mapped 200 becoming the new reference area, so that a new area to be mapped (the third area to be mapped, N=3) can be autonomously mapped and optimized against this reference to obtain its complete map. The complete map of the third area to be mapped can then serve as the next reference map, and so on for the fourth area to be mapped (N=4). Iterating in this way yields maps of ever more areas.
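The iteration described above reduces to a simple loop: each completed map becomes the reference for the next round. The sketch below is purely structural; `build_subarea_map` is a named placeholder (not part of the patent) standing in for the real-time mapping plus loop optimization against the current reference, and the string results merely trace the data flow of steps S4–S6.

```python
# Structural sketch of the N-th-area iteration: map a sub-area against
# the current reference, stitch, and promote the result to new reference.

def build_subarea_map(reference_map, subarea):
    # placeholder: a real implementation would run SLAM with loop
    # optimization against `reference_map` here
    return f"map({subarea})"

def iterate_mapping(initial_reference, subareas):
    reference = initial_reference
    for subarea in subareas:
        sub_map = build_subarea_map(reference, subarea)
        # stitch, then the result becomes the new reference (step S4)
        reference = f"stitch({reference}, {sub_map})"
    return reference

print(iterate_mapping("map(area1)", ["area2", "area3"]))
```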
It can be seen that, unlike the embodiment shown in fig. 6, the above iterative scheme (fig. 7, for example) maps the area to be mapped period by period and, after each period's mapping is completed, uses the resulting complete map as the new reference map for the next period. The range of the reference area is thus continuously enlarged, which effectively strengthens the association between the other areas and the reference area (the common area can be enlarged, for example by increasing the length/width of each two-dimensional code/color block label) and effectively eliminates the accumulated error of the map data between them, facilitating the final map fusion and improving the accuracy of the fused complete map.
In addition, with the above iterative scheme (fig. 7, for example), the working environment of the system can be freely expanded at any time according to actual task requirements (for example, cleaning requirements) and a corresponding map constructed, thereby meeting the needs of larger-scene working environments.
Further, as an improvement of the above solution, in step S3 shown in fig. 3, the loop optimization is a first loop optimization; before the mapping of the sub-area to be mapped is completed and the sub-area map is generated, a second loop optimization is additionally performed. The second loop optimization may likewise be carried out in either of two modes, whose specific implementation may refer to the foregoing related description and is not repeated here.
Referring to fig. 8, an embodiment of the present invention provides a robot autonomous mapping system suitable for autonomously mapping an area to be mapped, where the area to be mapped includes a reference area and at least one sub-area to be mapped, the sub-area to be mapped and the reference area having a common area; a plurality of two-dimensional code/color block labels are arranged on the common area, the length and width of each two-dimensional code/color block label are known values, and the distance between two adjacent two-dimensional code/color block labels is smaller than the body width of the robot. The laser mapping system comprises a single robot 40 and a cloud/server 60, and the data collected by the single robot 40 in different areas carry independent tag IDs; wherein:
The single robot 40 is configured to autonomously map the reference area in a first period, and to send the real-time point cloud data acquired in the reference area and the real-time sub-maps created from the point cloud data of every preset number of frames to the cloud/server 60, together with the images acquired in real time by the camera;
the cloud/server 60 performs autonomous mapping and optimization based on the real-time point cloud data and real-time sub-maps acquired in the reference area to obtain a reference map; the reference map is a grid map converted from a plurality of reference sub-maps, each reference sub-map containing the reference point cloud data of a preset number of frames; the cloud/server 60 also detects and recognizes the two-dimensional code/color block labels in the received images, calculates the pose of each two-dimensional code/color block label relative to the camera, calculates from that pose the coordinates of each label on the reference map and marks the corresponding virtual wall grids, and sequentially connects the marked virtual wall grids on the reference map to form a complete virtual wall on the common area;
The single robot 40 is further configured to map the sub-area to be mapped in real time in other periods outside the first period, and to send the real-time point cloud data acquired in the sub-area to be mapped and the real-time sub-maps created from the point cloud data of every preset number of frames to the cloud/server 60, together with the images acquired in real time by the camera;
the cloud/server 60 is configured to perform loop optimization between the received real-time point cloud data and real-time sub-maps acquired in the sub-area to be mapped and the reference point cloud data and reference sub-maps, so as to obtain the optimized poses of all the historical sub-maps and of all the point cloud data of the sub-area to be mapped, and to return them to the single robot 40 accordingly, until the single robot completes the mapping of the sub-area to be mapped to generate a sub-area map; the sub-area map is a grid map converted from the plurality of optimized sub-maps; the cloud/server 60 also detects and recognizes the two-dimensional code/color block labels in the received images, calculates the pose of each two-dimensional code/color block label relative to the camera, calculates from that pose the coordinates of each label on the sub-area map and marks the corresponding virtual wall grids, and sequentially connects the marked virtual wall grids on the sub-area map to form a complete virtual wall on the common area;
The cloud/server 60 is further configured to splice the reference map and the sub-area map to obtain a complete map of the area to be mapped, and modify all data IDs in the complete map to the same ID.
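The virtual-wall step the system performs can be sketched as follows: each label center, already expressed in map coordinates, is converted to a grid cell, and successive marked cells are sequentially connected to form one continuous wall. This is a hedged illustration, not the patent's implementation: the grid resolution, the Bresenham connection and all helper names are assumptions.

```python
# Hypothetical sketch: convert tag centers (map coordinates, meters) to
# grid cells and connect consecutive cells with a Bresenham line to form
# a continuous virtual wall on the occupancy grid.

def to_cell(x, y, resolution=0.05):
    return (round(x / resolution), round(y / resolution))

def line_cells(c0, c1):
    """Bresenham line between two grid cells, endpoints inclusive."""
    (x0, y0), (x1, y1) = c0, c1
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err, cells = dx + dy, []
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            return cells
        e2 = 2 * err
        if e2 >= dy:
            err += dy; x0 += sx
        if e2 <= dx:
            err += dx; y0 += sy

def virtual_wall(tag_centers_m):
    """Connect successive tag cells into one continuous wall."""
    cells = [to_cell(x, y) for x, y in tag_centers_m]
    wall = set()
    for a, b in zip(cells, cells[1:]):
        wall.update(line_cells(a, b))
    return wall

wall = virtual_wall([(0.0, 0.0), (0.2, 0.0), (0.2, 0.1)])
print(len(wall))  # 7 distinct wall cells
```

In practice each label's known length and width would thicken these cells into a wall segment rather than a one-cell-wide line.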
It can be understood that, in the specific working process of the laser mapping system provided by the embodiment of the present invention, reference may be made to the corresponding content of the robot autonomous mapping method described above, and redundant description is omitted herein.
The foregoing disclosure is only illustrative of the preferred embodiments of the present invention and is not to be construed as limiting the scope of the invention, as it is understood by those skilled in the art that all or part of the above-described embodiments may be practiced without resorting to the equivalent thereof, which is intended to fall within the scope of the invention as defined by the appended claims.

Claims (10)

1. The autonomous drawing construction method of the robot is characterized by being suitable for autonomous drawing construction of a drawing area to be constructed through the robot, wherein the drawing area to be constructed comprises a reference area and at least one drawing subarea to be constructed, the drawing subarea to be constructed and the reference area have a public area, a plurality of two-dimensional code/color block labels are arranged on the public area, the length and the width of each two-dimensional code/color block label are known values, and the distance between two adjacent two-dimensional code/color block labels is smaller than the width of a machine body of the robot; the autonomous robot mapping method comprises the following steps:
S1, autonomously mapping the reference area to obtain a reference map; the reference map is a grid map formed by converting a plurality of reference sub-maps, and each reference sub-map comprises reference point cloud data of a preset frame; in the autonomous building, acquiring and calculating the pose of each two-dimensional code/color block label relative to the camera through a camera arranged on a robot, calculating the coordinates of each two-dimensional code/color block label on the reference map and a virtual wall grid corresponding to the mark according to the pose of each two-dimensional code/color block label relative to the camera, and sequentially connecting the marked virtual wall grids on the reference map to form a complete virtual wall on the public area;
s2, carrying out real-time mapping on the sub-region of the map to be mapped, carrying out loop optimization on real-time sub-map which is created by real-time point cloud data acquired by the sub-region of the map to be mapped and point cloud data based on acquisition of preset frames, and the reference point cloud data and the reference sub-map, so as to obtain the pose of all the historical sub-maps and the pose of all the point cloud data after the sub-region of the map to be mapped is optimized until the mapping of the sub-region of the map to be mapped is completed to generate a sub-region map; the sub-area map is a grid map formed by converting a plurality of optimized sub-maps, when the sub-area to be built is subjected to real-time image building, the pose of each two-dimensional code/color block label relative to the camera is obtained and calculated through a camera arranged on the robot, the coordinate of each two-dimensional code/color block label on the sub-area map and a corresponding virtual wall grid are calculated according to the pose of each two-dimensional code/color block label relative to the camera, and the marked virtual wall grids are sequentially connected on the sub-area map to form a complete virtual wall on the public area;
And S3, splicing the reference map and the sub-region map to obtain a complete map of the region to be mapped.
2. The autonomous mapping method of the robot according to claim 1, wherein when the autonomous mapping is performed on the reference area/sub-area to be mapped, the pose of each two-dimensional code/color block tag with respect to the camera is calculated by:
after the two-dimensional code/color block label is shot and identified by a camera, calculating a rotation matrix and a translation matrix by four corner points of the two-dimensional code/color block label and calling a solvePnP function of OpenCV;
Let p_o be the coordinates (0, 0, 0) of the two-dimensional code/color block label center point in the two-dimensional code/color block label coordinate system, and let p_c be the coordinates of the same center point in the camera coordinate system; the relative relation between p_o and p_c is given by Equation 1:
p_o = R_1 * p_c + t_1 (Equation 1)
where R_1 is the rotation matrix and t_1 is the translation matrix;
writing the transformation matrix as T_1 = [R_1, t_1; 0, 1], Equation 1 becomes:
p_o = T_1 * p_c (Equation 1')
Let p'_c be the coordinates (0, 0, 0) of the camera center point in the camera coordinate system, and let p'_o be the coordinates of the two-dimensional code/color block label center point in the camera coordinate system; from Equation 1' it follows that:
p'_o = T_1^(-1) * p'_c (Equation 2)
and the pose of each two-dimensional code/color block label relative to the camera is obtained according to Equation 2.
3. The autonomous mapping method of the robot according to claim 2, wherein in the real-time mapping of the reference area/the sub-area to be mapped, the coordinates and the marks of each two-dimensional code/color block label on the reference map/sub-area map are calculated by the following steps:
Let T_2 be the transformation matrix from the robot center point p_R in the robot base coordinate system to the camera p'_c; then:
p'_c = T_2 * p_R (Equation 3)
From Equation 3 and Equation 2, the coordinates of the two-dimensional code/color block label center point in the robot base coordinate system are:
p''_o = T_2^(-1) * T_1^(-1) * p'_c (Equation 4)
Let T_3 be the transformation matrix from the robot center point P_W in the map coordinate system during mapping to the robot base coordinate system p_R; then:
p_R = T_3 * P_W (Equation 5)
From Equation 4 and Equation 5:
p'''_o = T_3^(-1) * T_2^(-1) * T_1^(-1) * p'_c (Equation 6)
The coordinates of each two-dimensional code/color block label center point on the reference map/sub-area map are obtained according to Equation 6, and the corresponding virtual wall grids are obtained from those coordinates together with the length and width of each two-dimensional code/color block label.
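The transform chain of Equations 1 to 6 can be followed end to end with plain 4x4 homogeneous matrices. In the sketch below, R_1 and t_1 would come from OpenCV's solvePnP in practice; here hand-built pure-translation T_1, T_2, T_3 stand in, and all numeric values are illustrative assumptions, so the chain p_map = T_3^(-1) * T_2^(-1) * T_1^(-1) * p'_c can be checked by inspection.

```python
# Worked sketch of Equations 2, 4 and 6 with illustrative transforms.

def apply(m, p):
    """Apply 4x4 homogeneous matrix m to point p = (x, y, z)."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))

def inverse_rigid(m):
    """Invert [R t; 0 1] as [R^T, -R^T t; 0 1] (R orthonormal)."""
    r_t = [[m[j][i] for j in range(3)] for i in range(3)]
    t = [m[i][3] for i in range(3)]
    mt = [-sum(r_t[i][k] * t[k] for k in range(3)) for i in range(3)]
    return [r_t[0] + [mt[0]], r_t[1] + [mt[1]], r_t[2] + [mt[2]],
            [0.0, 0.0, 0.0, 1.0]]

def translation(x, y, z):  # identity rotation, pure translation
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

# Illustrative T_1 (camera->tag), T_2 (base->camera), T_3 (map->base):
T1 = translation(0.0, 0.0, -1.0)   # tag 1 m in front of the camera
T2 = translation(0.0, 0.0, -0.3)   # camera mounted 0.3 m above base
T3 = translation(-2.0, -1.0, 0.0)  # robot standing at (2, 1) on the map

p_cam_origin = (0.0, 0.0, 0.0)     # p'_c, camera center in camera frame
p_tag_in_cam = apply(inverse_rigid(T1), p_cam_origin)   # Equation 2
p_tag_in_base = apply(inverse_rigid(T2), p_tag_in_cam)  # Equation 4
p_tag_on_map = apply(inverse_rigid(T3), p_tag_in_base)  # Equation 6
print(p_tag_on_map)
```

The final point is the label center in map coordinates, from which the virtual wall grid cells would be derived using the label's known length and width.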
4. The robot autonomous mapping method of claim 2, wherein the two-dimensional code/color block label is identified by: pre-storing each two-dimensional code/color block label image arranged on the common area, and performing image feature matching between the image shot in real time by the camera and each pre-stored two-dimensional code/color block label image so as to identify the two-dimensional code/color block label; wherein the two-dimensional code/color block label images arranged on the common area are identical, partially identical, or different from one another.
5. The autonomous robot mapping method according to claim 1, wherein in the step S2, the loop optimization specifically includes a first mode or a second mode;
wherein, the first mode includes:
S21, matching the real-time point cloud data acquired in the sub-area to be mapped with the reference sub-maps for loop detection;
s22, when the matching degree is larger than a preset threshold, judging that loop-back exists and executing S23; otherwise, returning to the step S21;
s23, taking the datum point cloud data and the datum sub-map as references, and executing nonlinear least square optimization processing on all the historical sub-maps and all the point cloud data of the sub-region to be built so as to optimize the pose of all the historical sub-maps and all the point cloud data of the sub-region to be built;
the second mode comprises:
s21, matching the real-time sub-map of the map sub-region to be built with the datum point cloud data to perform loop detection;
s22, when the matching degree is larger than a preset threshold, judging that loop-back exists and executing S23; otherwise, returning to the step S21;
s23, taking the datum point cloud data and the datum sub-map as references, and executing nonlinear least square optimization processing on all the historical sub-maps and all the point cloud data of the sub-region to be built so as to optimize the pose of all the historical sub-maps and all the point cloud data of the sub-region to be built.
6. The robot autonomous mapping method according to claim 1, wherein the robot is a single robot that maps the reference area and the sub-area to be mapped in different time periods.
7. A robot autonomous mapping device, characterized in that it is adapted for a robot to autonomously map an area to be mapped, wherein the area to be mapped comprises a reference area and at least one sub-area to be mapped, and the sub-area to be mapped and the reference area share a common area; a plurality of two-dimensional code/color block labels are arranged in the common area, the length and width of each two-dimensional code/color block label are known values, and the distance between two adjacent two-dimensional code/color block labels is smaller than the width of the robot body; the robot autonomous mapping device comprises:
a reference map construction module, configured to autonomously map the reference area to obtain a reference map; the reference map is a grid map converted from a plurality of reference sub-maps, each reference sub-map comprising a preset number of frames of reference point cloud data; during the autonomous mapping, the pose of each two-dimensional code/color block label relative to a camera arranged on the robot is acquired and calculated through the camera, the coordinates of each two-dimensional code/color block label on the reference map are calculated according to the pose of each two-dimensional code/color block label relative to the camera and the corresponding virtual wall grids are marked, and the marked virtual wall grids are sequentially connected on the reference map to form a complete virtual wall over the common area;
a sub-area map construction module, configured to map the sub-area to be mapped in real time, and to perform loop optimization on the real-time point cloud data acquired in the sub-area to be mapped and the real-time sub-map created from a preset number of frames of acquired point cloud data, together with the reference point cloud data and the reference sub-map, so as to obtain the optimized poses of all the historical sub-maps and all the point cloud data of the sub-area to be mapped, until the mapping of the sub-area to be mapped is completed and a sub-area map is generated; the sub-area map is a grid map converted from the plurality of optimized sub-maps; when the sub-area to be mapped is mapped in real time, the pose of each two-dimensional code/color block label relative to the camera arranged on the robot is acquired and calculated through the camera, the coordinates of each two-dimensional code/color block label on the sub-area map are calculated according to the pose of each two-dimensional code/color block label relative to the camera and the corresponding virtual wall grids are marked, and the marked virtual wall grids are sequentially connected on the sub-area map to form a complete virtual wall over the common area;
and a complete map generation module, configured to stitch the reference map and the sub-area map to obtain a complete map of the area to be mapped.
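The virtual-wall construction performed by both modules — projecting each label into map coordinates, marking its grid cell, and sequentially connecting consecutive cells — could look roughly like the sketch below. Frame conventions, grid resolution, and all function names are assumptions; the connection step uses a standard Bresenham line, which the patent does not mandate:

```python
import math

def tag_to_grid(camera_pose, tag_in_cam, resolution=0.05, origin=(0.0, 0.0)):
    """Project a label position measured in the camera frame into a map
    grid cell, given the camera's (x, y, theta) pose in the map frame."""
    x, y, th = camera_pose
    tx, ty = tag_in_cam
    # rigid transform camera frame -> map frame
    mx = x + math.cos(th) * tx - math.sin(th) * ty
    my = y + math.sin(th) * tx + math.cos(th) * ty
    return (math.floor((mx - origin[0]) / resolution),
            math.floor((my - origin[1]) / resolution))

def connect_cells(a, b):
    """Bresenham line between two grid cells: the virtual-wall cells
    linking two adjacent labels."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err, cells = dx + dy, []
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            return cells
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
```

A complete virtual wall would then be the union of `connect_cells` output over each pair of consecutive label cells along the common area.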
8. A robot autonomous mapping system, characterized in that it is adapted for autonomous mapping of an area to be mapped, wherein the area to be mapped comprises a reference area and at least one sub-area to be mapped, and the sub-area to be mapped and the reference area share a common area; a plurality of two-dimensional code/color block labels are arranged in the common area, the length and width of each two-dimensional code/color block label are known values, and the distance between two adjacent two-dimensional code/color block labels is smaller than the width of the robot body; the robot autonomous mapping system comprises a single robot and a cloud/server, and the single robot assigns independent IDs to the data acquired in different areas; wherein:
the single robot is configured to autonomously map the reference area in a first period, send the real-time point cloud data acquired in the reference area and the real-time sub-map created from a preset number of frames of acquired point cloud data to the cloud/server, and send the images acquired in real time by the camera to the cloud/server;
the cloud/server performs autonomous mapping and optimization based on the real-time point cloud data and real-time sub-maps acquired in the reference area to obtain a reference map; the reference map is a grid map converted from a plurality of reference sub-maps, each reference sub-map comprising a preset number of frames of reference point cloud data; the cloud/server identifies the two-dimensional code/color block labels from the received images, calculates the pose of each two-dimensional code/color block label relative to the camera, calculates the coordinates of each two-dimensional code/color block label on the reference map according to that pose and marks the corresponding virtual wall grids, and sequentially connects the marked virtual wall grids on the reference map to form a complete virtual wall over the common area;
the single robot is further configured to map the sub-area to be mapped in real time in other periods outside the first period, send the real-time point cloud data acquired in the sub-area to be mapped and the real-time sub-map created from a preset number of frames of acquired point cloud data to the cloud/server, and send the images acquired in real time by the camera to the cloud/server;
the cloud/server is configured to perform loop optimization on the received real-time point cloud data and real-time sub-maps acquired in the sub-area to be mapped, together with the reference point cloud data and the reference sub-maps, so as to obtain the optimized poses of all the historical sub-maps and all the point cloud data of the sub-area to be mapped and return them to the single robot, until the single robot completes the mapping of the sub-area to be mapped and generates a sub-area map; the sub-area map is a grid map converted from the plurality of optimized sub-maps; the cloud/server identifies the two-dimensional code/color block labels from the received images, calculates the pose of each two-dimensional code/color block label relative to the camera, calculates the coordinates of each two-dimensional code/color block label on the sub-area map according to that pose and marks the corresponding virtual wall grids, and sequentially connects the marked virtual wall grids on the sub-area map to form a complete virtual wall over the common area;
and the cloud/server is further configured to stitch the reference map and the sub-area map to obtain a complete map of the area to be mapped, and to modify all data IDs in the complete map to the same ID.
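The final stitching step — merging the reference map with the sub-area maps and rewriting all data IDs to one shared ID — might be sketched as follows. The map representation (a dict of occupied grid cells) and the later-map-wins policy for overlapping cells are assumptions, since the claim specifies neither:

```python
def stitch_maps(reference_map, sub_maps, final_id="map_0"):
    """Merge per-area grid maps into one complete map and rewrite every
    data ID to the same shared ID.

    Each map is a dict: {"id": str, "cells": {(gx, gy): occupancy}}.
    Later maps overwrite overlapping cells of earlier ones (an assumed
    conflict policy; the patent does not specify one).
    """
    merged = {}
    for m in [reference_map] + list(sub_maps):
        merged.update(m["cells"])  # drop the per-area ID, keep the cells
    return {"id": final_id, "cells": merged}
```

Since all sub-area maps were loop-optimized against the same reference, their grid coordinates are assumed here to already share one frame, so stitching reduces to a cell union plus the ID rewrite.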
9. An electronic device, comprising a processor and a memory, wherein the memory is configured to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the robot autonomous mapping method of any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the robot autonomous mapping method of any one of claims 1-7.
CN202311421414.7A 2023-10-26 2023-10-26 Autonomous robot mapping method, device and system, electronic equipment and readable storage medium Pending CN117422774A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311421414.7A CN117422774A (en) 2023-10-26 2023-10-26 Autonomous robot mapping method, device and system, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN117422774A true CN117422774A (en) 2024-01-19

Family

ID=89528020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311421414.7A Pending CN117422774A (en) 2023-10-26 2023-10-26 Autonomous robot mapping method, device and system, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN117422774A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination