CN111881239B - Construction method, construction device, intelligent robot and readable storage medium - Google Patents


Info

Publication number
CN111881239B
CN111881239B (application CN202010690962.XA)
Authority
CN
China
Prior art keywords
dimensional
intelligent robot
point cloud
point clouds
grids
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010690962.XA
Other languages
Chinese (zh)
Other versions
CN111881239A (en)
Inventor
宋乐
郭阳全
陈侃
霍峰
秦宝星
程昊天
Current Assignee
Shanghai Gaussian Automation Technology Development Co Ltd
Original Assignee
Shanghai Gaussian Automation Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Gaussian Automation Technology Development Co Ltd filed Critical Shanghai Gaussian Automation Technology Development Co Ltd
Priority claimed from CN202010690962.XA
Publication of CN111881239A
Application granted
Publication of CN111881239B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/20Drawing from basic elements, e.g. lines or circles
    • G06T11/206Drawing of charts or graphs
    • G06T3/06
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Abstract

The present application discloses a grid map construction method applied to an intelligent robot that includes a laser sensor. The construction method comprises: acquiring a three-dimensional point cloud detected by the laser sensor; dividing the three-dimensional point cloud into a plurality of three-dimensional grids at a preset angular resolution, with the intelligent robot as the origin; calculating, for each three-dimensional grid, a corresponding two-dimensional feature point cloud from the coordinate values of the three-dimensional points within that grid; and constructing a two-dimensional grid map from the two-dimensional feature point clouds. The application also discloses a grid map construction apparatus, an intelligent robot, and a computer-readable storage medium. The construction method of the embodiments of the present application preserves more three-dimensional feature information while reducing the amount of computation, thereby providing rich map information at a low hardware cost.

Description

Construction method, construction device, intelligent robot and readable storage medium
Technical Field
The present invention relates to the field of robot mapping technology, and more particularly, to a method and apparatus for constructing a grid map, an intelligent robot, and a non-volatile computer readable storage medium.
Background
A two-dimensional lidar emits relatively few point-cloud points, so its processing algorithms do not require a particularly powerful processor, and the sensor itself is inexpensive, which makes it suitable for low-priced robots. In highly structured environments with distinctive features, a two-dimensional lidar is sufficient to describe the surroundings. In environments with repetitive structure, however, the two-dimensional point cloud is too sparse: during subsequent localization the robot may be mispositioned because similar scenes are indistinguishable, and a three-dimensional lidar is needed to acquire more environmental information. Yet directly constructing a three-dimensional grid map from the three-dimensional point cloud brings the drawbacks of information redundancy and an excessive computational load.
Disclosure of Invention
In view of this, the present invention aims to solve, at least to some extent, one of the problems in the related art. To this end, embodiments of the present application provide a grid map construction method, a construction apparatus, an intelligent robot, and a non-volatile computer-readable storage medium.
The grid map construction method is applied to an intelligent robot, wherein the intelligent robot comprises a laser sensor, and the construction method comprises the steps of obtaining a three-dimensional point cloud detected by the laser sensor; dividing the three-dimensional point cloud into a plurality of three-dimensional grids according to a preset angle resolution by taking the intelligent robot as an origin; respectively calculating two-dimensional characteristic point clouds corresponding to the three-dimensional grids according to coordinate values of the three-dimensional point clouds in the three-dimensional grids; and constructing a two-dimensional grid map according to the two-dimensional characteristic point cloud.
In the construction method of the embodiments of the present application, the acquired three-dimensional point cloud is divided into a plurality of three-dimensional grids at a preset angular resolution with the intelligent robot as the origin; two-dimensional feature point clouds corresponding to the three-dimensional grids are then calculated from the coordinate values of the three-dimensional points within each grid; and finally a two-dimensional grid map is constructed from the two-dimensional feature point clouds. The method thus preserves more three-dimensional feature information while reducing the amount of computation, providing rich information at a low hardware cost. Compared with directly constructing a two-dimensional grid map from an acquired two-dimensional point cloud, the map constructed by this method carries richer information and makes localization more robust. Moreover, the constructed two-dimensional grid map can be used both for localizing the intelligent robot and for its path planning.
In some embodiments, the dividing the three-dimensional point cloud into a plurality of three-dimensional grids with the intelligent robot as an origin according to a preset angular resolution includes: converting the three-dimensional point cloud from the laser sensor coordinate system into the body coordinate system of the intelligent robot; and dividing the three-dimensional point cloud into a plurality of three-dimensional grids according to a preset angular resolution by taking the intelligent robot as an origin and taking a plane formed by an x axis and a y axis as a reference in the machine body coordinate system.
In this embodiment, the three-dimensional point cloud is first converted into the body coordinate system, and then the three-dimensional point cloud is divided into a plurality of three-dimensional grids according to a preset angular resolution with respect to a plane formed by the x axis and the y axis. Therefore, statistics and analysis of the three-dimensional point clouds in each three-dimensional grid are facilitated, and three-dimensional features of the three-dimensional point clouds in each three-dimensional grid can be extracted better.
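As a rough illustration of this division step, the sketch below converts points into the body frame and bins them into angular sectors about the origin, measured in the x-y plane. The mounting offset is hypothetical, and the transform is translation-only; a real sensor-to-body extrinsic calibration would also include a rotation.

```python
import math

# Hypothetical sensor-to-body extrinsics (assumed: lidar mounted 0.2 m
# forward of and 0.5 m above the body origin, with no rotation).
SENSOR_OFFSET = (0.2, 0.0, 0.5)

def to_body_frame(points, offset=SENSOR_OFFSET):
    """Translate points from the lidar frame into the robot body frame."""
    ox, oy, oz = offset
    return [(x + ox, y + oy, z + oz) for x, y, z in points]

def split_into_angular_grids(points, angular_resolution_deg=1.0):
    """Bin points into 'three-dimensional grids' (angular sectors) around
    the robot, with the angle measured in the body-frame x-y plane."""
    n_bins = int(round(360.0 / angular_resolution_deg))
    grids = {}
    for x, y, z in points:
        angle = math.degrees(math.atan2(y, x)) % 360.0
        # min() guards against a float edge case where the modulo can
        # round up to exactly 360.0 for tiny negative angles.
        idx = min(int(angle / angular_resolution_deg), n_bins - 1)
        grids.setdefault(idx, []).append((x, y, z))
    return grids

cloud = to_body_frame([(1.0, 0.0, 0.1), (0.0, 1.0, 0.3), (1.0, 0.05, 1.2)])
sectors = split_into_angular_grids(cloud, angular_resolution_deg=5.0)
```

With a 5° resolution, the first and third points land in sector 0 and the second in a sector near 80°, so the statistics of each sector can then be computed independently.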
In some embodiments, calculating the two-dimensional feature point clouds corresponding to the three-dimensional grids from the coordinate values of the three-dimensional point clouds within each grid includes: acquiring the x coordinates and y coordinates of the three-dimensional point clouds within each three-dimensional grid; classifying into one group of approximate point clouds those three-dimensional points in each three-dimensional grid whose x-coordinate differences and y-coordinate differences sum to within a difference threshold; and obtaining a representative point cloud for each group of approximate point clouds from the x and y coordinates within that group, all the representative point clouds in a three-dimensional grid forming the two-dimensional feature point cloud corresponding to that grid.
In this embodiment, the three-dimensional points in each three-dimensional grid whose x- and y-coordinate differences sum to within the difference threshold are classified into a group of approximate point clouds, a representative point cloud is then obtained from the x and y coordinates of each group, and the representative point clouds form the two-dimensional feature point cloud corresponding to the grid. Because the two-dimensional feature point cloud is computed after classifying many three-dimensional points, it better retains the feature information of the three-dimensional point cloud, so the constructed two-dimensional grid map carries richer information.
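One way to read the grouping criterion — the x-coordinate difference and the y-coordinate difference summing to within a difference threshold — is a Manhattan distance in the x-y plane. The sketch below implements that reading with a greedy single pass and uses the group mean as the representative point; both the greedy order and the mean are assumptions, since the patent fixes neither the grouping procedure nor the statistic used.

```python
def group_approximate_points(points, threshold=0.05):
    """Greedily group 3D points whose |dx| + |dy| to the first member of an
    existing group is within `threshold` (Manhattan distance in the x-y
    plane). Greedy single-pass grouping is an assumed concretization."""
    groups = []
    for p in points:
        for g in groups:
            ref = g[0]
            if abs(p[0] - ref[0]) + abs(p[1] - ref[1]) <= threshold:
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

def representative_point(group):
    """Collapse a group to a single 2D point; taking the mean of x and y
    is an assumption -- any representative statistic would fit."""
    xs = [p[0] for p in group]
    ys = [p[1] for p in group]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

points = [(1.00, 2.00, 0.1), (1.02, 2.01, 0.8), (3.00, 2.00, 0.2)]
groups = group_approximate_points(points)
feature_cloud = [representative_point(g) for g in groups]
```

Here the first two points (which may come from the same vertical surface at different heights) collapse to one 2D feature point, while the third stays separate — this is how vertical structure survives the reduction to 2D.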
In some embodiments, the constructing a two-dimensional grid map according to the two-dimensional feature point cloud includes: in the body coordinate system of the intelligent robot, a first map is constructed according to the largest x coordinate, the largest y coordinate, the smallest x coordinate and the smallest y coordinate in the two-dimensional characteristic point cloud; dividing the first map into a plurality of grids according to a preset map resolution; and marking whether a plurality of the grids are obstacles to form the two-dimensional grid map.
In this embodiment, the first map is constructed from the largest x coordinate, the largest y coordinate, the smallest x coordinate, and the smallest y coordinate in the two-dimensional feature point cloud, so that the first map covers the full extent of the point cloud. The first map is then divided into a plurality of grids at the preset map resolution, and each grid is marked as an obstacle or not to form the two-dimensional grid map, making the map accurate and usable for navigation and localization.
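The first-map construction can be sketched as follows: the bounding box of the 2D feature points fixes the map extent, the preset map resolution fixes the cell size, and cells containing a feature point are marked occupied. Marking only point-bearing cells is a simplification — the patent determines obstacle status via traversal from the robot, which this sketch omits.

```python
def build_grid_map(feature_cloud, resolution=0.1):
    """Build an occupancy grid covering the bounding box of the 2D feature
    point cloud. Cell values: 1 = obstacle, 0 = not marked. Returns the
    grid and its origin (the minimum x and y) in the body frame."""
    xs = [p[0] for p in feature_cloud]
    ys = [p[1] for p in feature_cloud]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    width = int((max_x - min_x) / resolution) + 1
    height = int((max_y - min_y) / resolution) + 1
    grid = [[0] * width for _ in range(height)]
    for x, y in feature_cloud:
        col = int((x - min_x) / resolution)
        row = int((y - min_y) / resolution)
        grid[row][col] = 1
    return grid, (min_x, min_y)

grid, origin = build_grid_map([(0.0, 0.0), (0.4, 0.2)], resolution=0.1)
```

Storing the origin alongside the grid lets later updates convert between body-frame coordinates and cell indices.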
In some embodiments, the marking whether a plurality of the grids are obstacles to form the two-dimensional grid map includes: traversing each representative point cloud by taking the intelligent robot as an origin; and determining whether each grid is an obstacle according to the traversing result, and marking in the first map to form the two-dimensional grid map.
In this embodiment, each representative point cloud is traversed with the intelligent robot as the origin, and whether each grid is an obstacle is determined from the traversal result, which makes the obstacle marking in the two-dimensional grid map more accurate. Moreover, because the representative point clouds are derived from the three-dimensional point cloud, the obstacle information in the map is richer, and the map is more accurate when used for navigation and localization.
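The patent does not spell out how the traversal from the robot's origin to each representative point is performed; a common concrete choice is an integer Bresenham line over the grid, where every cell crossed before the endpoint can be marked free and the endpoint itself is the obstacle. The sketch below assumes that choice.

```python
def trace_free_cells(origin_cell, target_cell):
    """Cells crossed by a straight line from the robot's cell to an
    obstacle cell, computed with an integer Bresenham line. The endpoint
    is excluded because it is the obstacle itself; every returned cell is
    a candidate to be marked free."""
    (x0, y0), (x1, y1) = origin_cell, target_cell
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    cells = []
    x, y = x0, y0
    while (x, y) != (x1, y1):
        cells.append((x, y))
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return cells

free = trace_free_cells((0, 0), (3, 0))
```

For a target three cells straight ahead, the cells at distances 0, 1, and 2 come back as free while cell 3 stays an obstacle.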
In some embodiments, the constructing method further includes calculating an initial pose of the intelligent robot at a current time based on the pose of the intelligent robot at a previous time and a travel distance of the intelligent robot; acquiring a two-dimensional characteristic point cloud of the intelligent robot at the current moment; and calculating the real pose of the intelligent robot based on the initial pose, the two-dimensional characteristic point cloud at the current moment and the two-dimensional grid map constructed at the last moment.
In the embodiment, the initial pose of the intelligent robot is calculated first, then the two-dimensional characteristic point cloud calculated by the intelligent robot at the current moment is obtained, and finally the real pose of the intelligent robot is calculated based on the initial pose, the two-dimensional characteristic point cloud at the current moment and the two-dimensional grid map constructed at the last moment, so that the intelligent robot is positioned more accurately.
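A minimal sketch of this localization step, under two explicit assumptions the patent does not fix: the initial pose comes from a straight-line motion model applied to the previous pose, and a simple hit-fraction score stands in for real scan matching (a real implementation would search around the initial pose to maximize such a score).

```python
import math

def predict_pose(prev_pose, travel_distance, heading_change=0.0):
    """Odometry-style initial pose: advance the previous pose (x, y, theta)
    along its heading by the travelled distance. The straight-line motion
    model with an optional heading change is an assumption."""
    x, y, theta = prev_pose
    theta += heading_change
    return (x + travel_distance * math.cos(theta),
            y + travel_distance * math.sin(theta),
            theta)

def match_score(pose, feature_cloud, occupied_cells, resolution=0.1):
    """Toy scan-to-map score: the fraction of 2D feature points that land
    in an occupied map cell once placed at `pose`. Evaluates one candidate
    pose only; refining the pose is left out of the sketch."""
    x0, y0, theta = pose
    hits = 0
    for px, py in feature_cloud:
        wx = x0 + px * math.cos(theta) - py * math.sin(theta)
        wy = y0 + px * math.sin(theta) + py * math.cos(theta)
        hits += (int(wx / resolution), int(wy / resolution)) in occupied_cells
    return hits / len(feature_cloud)

init = predict_pose((0.0, 0.0, 0.0), 1.0)
score = match_score(init, [(0.0, 0.0)], {(1, 0)}, resolution=1.0)
```

A pose whose transformed feature points all fall on previously mapped obstacles scores 1.0, which is the sense in which the real pose "agrees" with the map built at the previous moment.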
In some embodiments, the building method further includes expanding the two-dimensional grid map based on the real pose and a two-dimensional feature point cloud at the current time.
In this embodiment, based on the real pose and the two-dimensional feature point cloud of the intelligent robot at the current moment, the two-dimensional grid map is expanded, so that the two-dimensional grid map can be continuously updated, and the two-dimensional grid map can be continuously expanded in the running process of the intelligent robot, so that the intelligent robot can accurately navigate in real time.
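The expansion step can be sketched by transforming the current two-dimensional feature point cloud into the map frame with the real pose and adding the newly observed cells. Representing the map as a growable set of occupied cells is a simplification of reallocating a fixed-size grid when points fall outside its current extent.

```python
import math

def expand_map(occupied, pose, feature_cloud, resolution=1.0):
    """Add the cells of the current 2D feature points, placed into the map
    frame via the robot's real pose (x, y, theta), to the occupied set."""
    x0, y0, theta = pose
    c, s = math.cos(theta), math.sin(theta)
    for px, py in feature_cloud:
        wx = x0 + c * px - s * py
        wy = y0 + s * px + c * py
        occupied.add((math.floor(wx / resolution), math.floor(wy / resolution)))
    return occupied

world = {(0, 0)}
expand_map(world, (5.0, 0.0, 0.0), [(1.0, 2.0)], resolution=1.0)
```

A point observed 1 m ahead and 2 m to the left of a robot standing at x = 5 m lands in a new cell, so the map grows as the robot moves.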
The grid map construction device is applied to an intelligent robot, the intelligent robot comprises a laser sensor, the construction device comprises a first acquisition module, a division module, a first calculation module and a construction module, and the first acquisition module is used for acquiring three-dimensional point clouds detected by the laser sensor; the dividing module is used for dividing the three-dimensional point cloud into a plurality of three-dimensional grids according to a preset angular resolution by taking the intelligent robot as an origin; the first calculation module is used for calculating two-dimensional characteristic point clouds corresponding to the three-dimensional grids according to coordinate values of the three-dimensional point clouds in the three-dimensional grids respectively; the construction module is used for constructing a two-dimensional grid map according to the two-dimensional characteristic point cloud.
In the construction apparatus of the embodiments of the present application, the acquired three-dimensional point cloud is divided into a plurality of three-dimensional grids at a preset angular resolution with the intelligent robot as the origin; two-dimensional feature point clouds corresponding to the three-dimensional grids are then calculated from the coordinate values of the three-dimensional points within each grid; and a two-dimensional grid map is constructed from the two-dimensional feature point clouds. The apparatus thus preserves more three-dimensional feature information while reducing the amount of computation, providing rich information at a low hardware cost. Compared with directly constructing a two-dimensional grid map from an acquired two-dimensional point cloud, the map constructed in this way carries richer information and makes localization more robust. Moreover, the constructed two-dimensional grid map can be used both for localizing the intelligent robot and for its path planning.
In some embodiments, the partitioning module is further configured to convert the three-dimensional point cloud from the laser sensor coordinate system into a body coordinate system of the intelligent robot; and dividing the three-dimensional point cloud into a plurality of three-dimensional grids according to a preset angular resolution by taking the intelligent robot as an origin and taking a plane formed by an x axis and a y axis as a reference in the machine body coordinate system.
In this embodiment, the three-dimensional point cloud is first converted into the body coordinate system, and then the three-dimensional point cloud is divided into a plurality of three-dimensional grids according to a preset angular resolution with respect to a plane formed by the x axis and the y axis. Therefore, statistics and analysis of the three-dimensional point clouds in each three-dimensional grid are facilitated, and three-dimensional features of the three-dimensional point clouds in each three-dimensional grid can be better extracted.
In some embodiments, the first computing module is further configured to acquire the x coordinates and y coordinates of the three-dimensional point clouds within each three-dimensional grid; to classify into one group of approximate point clouds those three-dimensional points in each three-dimensional grid whose x-coordinate differences and y-coordinate differences sum to within a difference threshold; and to obtain a representative point cloud for each group of approximate point clouds from the x and y coordinates within that group, all the representative point clouds in a three-dimensional grid forming the two-dimensional feature point cloud corresponding to that grid.
In this embodiment, the three-dimensional points in each three-dimensional grid whose x- and y-coordinate differences sum to within the difference threshold are classified into a group of approximate point clouds, a representative point cloud is then obtained from the x and y coordinates of each group, and the representative point clouds form the two-dimensional feature point cloud corresponding to the grid. Because the two-dimensional feature point cloud is computed after classifying many three-dimensional points, it better retains the feature information of the three-dimensional point cloud, so the constructed two-dimensional grid map carries richer information.
In some embodiments, the building module is further configured to build a first map according to a maximum x coordinate, a maximum y coordinate, a minimum x coordinate, and a minimum y coordinate in the two-dimensional feature point cloud in a body coordinate system of the intelligent robot; dividing the first map into a plurality of grids according to a preset map resolution; and marking whether a plurality of the grids are obstacles to form the two-dimensional grid map.
In this embodiment, the first map is constructed from the largest x coordinate, the largest y coordinate, the smallest x coordinate, and the smallest y coordinate in the two-dimensional feature point cloud, so that the first map covers the full extent of the point cloud. The first map is then divided into a plurality of grids at the preset map resolution, and each grid is marked as an obstacle or not to form the two-dimensional grid map, making the map accurate and usable for navigation and localization.
In some embodiments, the building module is further configured to traverse each of the representative point clouds with the intelligent robot as an origin; and determining whether each grid is an obstacle according to the traversing result, and marking in the first map to form the two-dimensional grid map.
In this embodiment, each representative point cloud is traversed with the intelligent robot as the origin, and whether each grid is an obstacle is determined from the traversal result, which makes the obstacle marking in the two-dimensional grid map more accurate. Moreover, because the representative point clouds are derived from the three-dimensional point cloud, the obstacle information in the map is richer, and the map is more accurate when used for navigation and localization.
In some embodiments, the construction device further includes a second calculation module, a second acquisition module, and a third calculation module. The second calculation module is configured to calculate an initial pose of the intelligent robot at the current time based on the pose of the intelligent robot at the previous time and the travel distance of the intelligent robot; the second acquisition module is configured to acquire the two-dimensional feature point cloud of the intelligent robot at the current time; and the third calculation module is configured to calculate the real pose of the intelligent robot based on the initial pose, the two-dimensional feature point cloud at the current time, and the two-dimensional grid map constructed at the previous time.
In this embodiment, the initial pose of the intelligent robot is calculated first, the two-dimensional feature point cloud calculated by the intelligent robot at the current time is then obtained, and finally the real pose of the intelligent robot is calculated based on the initial pose, the two-dimensional feature point cloud at the current time, and the two-dimensional grid map constructed at the previous time, so that the intelligent robot is localized more accurately.
In some embodiments, the construction device further includes an extension module, where the extension module is configured to extend the two-dimensional grid map based on the real pose and the two-dimensional feature point cloud at the current time.
In this embodiment, based on the real pose and the two-dimensional feature point cloud of the intelligent robot at the current moment, the two-dimensional grid map is expanded, so that the two-dimensional grid map can be continuously updated, and the two-dimensional grid map can be continuously expanded in the running process of the intelligent robot, so that the intelligent robot can accurately navigate in real time.
The intelligent robot of an embodiment of the present application comprises one or more processors, a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs comprising instructions for performing the construction method of any of the embodiments described above.
In the intelligent robot of the embodiments of the present application, the acquired three-dimensional point cloud is divided into a plurality of three-dimensional grids at a preset angular resolution with the intelligent robot as the origin; two-dimensional feature point clouds corresponding to the three-dimensional grids are then calculated from the coordinate values of the three-dimensional points within each grid; and a two-dimensional grid map is constructed from the two-dimensional feature point clouds. The robot thus preserves more three-dimensional feature information while reducing the amount of computation, providing rich information at a low hardware cost. Compared with directly constructing a two-dimensional grid map from an acquired two-dimensional point cloud, the map constructed in this way carries richer information and makes localization more robust. Moreover, the constructed two-dimensional grid map can be used both for localizing the intelligent robot and for its path planning.
A non-transitory computer-readable storage medium according to an embodiment of the present application contains computer-executable instructions that, when executed by one or more processors, cause the processors to perform the construction method of any of the above embodiments.
In the non-volatile computer-readable storage medium containing computer-executable instructions, the acquired three-dimensional point cloud is divided into a plurality of three-dimensional grids at a preset angular resolution with the intelligent robot as the origin; two-dimensional feature point clouds corresponding to the three-dimensional grids are then calculated from the coordinate values of the three-dimensional points within each grid; and a two-dimensional grid map is constructed from the two-dimensional feature point clouds. The method thus preserves more three-dimensional feature information while reducing the amount of computation, providing rich information at a low hardware cost. Compared with directly constructing a two-dimensional grid map from an acquired two-dimensional point cloud, the map constructed in this way carries richer information and makes localization more robust. Moreover, the constructed two-dimensional grid map can be used both for localizing the intelligent robot and for its path planning.
Additional aspects and advantages of embodiments of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flow chart of a construction method according to an embodiment of the present application;
FIG. 2 is a block schematic diagram of an intelligent robot according to an embodiment of the present application;
FIG. 3 is a block schematic diagram of a build apparatus according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a laser sensor according to an embodiment of the present application;
FIG. 5 is a schematic structural view of the intelligent robot according to an embodiment of the present application;
FIG. 6 is a flow diagram of a method of construction of an embodiment of the present application;
FIG. 7 is a schematic diagram of a construction method according to an embodiment of the present application;
FIG. 8 is a flow diagram of a method of construction of an embodiment of the present application;
FIG. 9 is a flow diagram of a method of construction of an embodiment of the present application;
FIG. 10 is a schematic diagram of a construction method of an embodiment of the present application;
FIG. 11 is a flow diagram of a method of construction of an embodiment of the present application;
FIG. 12 is a flow diagram of a method of construction of an embodiment of the present application;
FIG. 13 is a block schematic diagram of a build apparatus according to an embodiment of the present application;
FIG. 14 is a flow diagram of a method of construction of an embodiment of the present application;
FIG. 15 is a block schematic diagram of a build apparatus according to an embodiment of the present application;
FIG. 16 is a schematic diagram of a connection relationship between a computer-readable storage medium and a processor according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are further described below with reference to the accompanying drawings. The same or similar reference numbers in the drawings refer to the same or similar elements or elements having the same or similar functions throughout.
In addition, the embodiments of the present application described below in conjunction with the drawings are exemplary only and are not to be construed as limiting the present application.
In this application, unless expressly stated or limited otherwise, a first feature "up" or "down" a second feature may be the first and second features in direct contact, or the first and second features in indirect contact via an intervening medium. Moreover, a first feature "above," "over" and "on" a second feature may be a first feature directly above or obliquely above the second feature, or simply indicate that the first feature is higher in level than the second feature. The first feature being "under", "below" and "beneath" the second feature may be the first feature being directly under or obliquely below the second feature, or simply indicating that the first feature is less level than the second feature.
In task-oriented applications, map construction is the key to an intelligent robot navigating autonomously and completing complex tasks in an unknown environment, and it is a concentrated reflection of the robot's perception capability and level of intelligence. Map construction records the current environment from the robot's sensor readings and serves as the basis for the robot's subsequent localization and planning tasks. Mapping is therefore critical for intelligent robots.
Referring to fig. 1 to 3, the method for constructing a grid map according to the embodiment of the present application is applied to an intelligent robot 100, the intelligent robot 100 includes a laser sensor 10, and the method for constructing the grid map includes the following steps:
011: acquiring a three-dimensional point cloud detected by the laser sensor 10;
012: dividing the three-dimensional point cloud into a plurality of three-dimensional grids according to a preset angular resolution by taking the intelligent robot 100 as an origin;
013: respectively calculating two-dimensional characteristic point clouds corresponding to the three-dimensional grids according to coordinate values of the three-dimensional point clouds in the three-dimensional grids; and
014: and constructing a two-dimensional grid map according to the two-dimensional characteristic point cloud.
The grid map constructing apparatus 200 of the embodiment of the present application is applied to the intelligent robot 100, and the constructing apparatus 200 includes a first obtaining module 210, a dividing module 220, a first calculating module 230, and a constructing module 240, where the first obtaining module 210, the dividing module 220, the first calculating module 230, and the constructing module 240 may be used to implement step 011, step 012, step 013, and step 014, respectively. That is, the first acquisition module 210 is configured to acquire the three-dimensional point cloud detected by the laser sensor 10; the dividing module 220 is configured to divide the three-dimensional point cloud into a plurality of three-dimensional grids according to a preset angular resolution with the intelligent robot 100 as an origin; the first calculating module 230 is configured to calculate two-dimensional feature point clouds corresponding to each three-dimensional grid according to coordinate values of the three-dimensional point clouds in each three-dimensional grid; the construction module 240 is configured to construct a two-dimensional grid map according to the two-dimensional feature point cloud.
The intelligent robot 100 of the present embodiment includes one or more processors 20, a memory 30, and one or more programs, wherein the one or more programs are stored in the memory 30 and executed by the one or more processors 20, the programs including a construction method for performing the present embodiment. When the processor 20 executes the program, the processor 20 may be configured to implement step 011, step 012, step 013, and step 014. That is, the processor 20 may be configured to: acquiring a three-dimensional point cloud detected by the laser sensor 10; dividing the three-dimensional point cloud into a plurality of three-dimensional grids according to a preset angular resolution by taking the intelligent robot 100 as an origin; respectively calculating two-dimensional characteristic point clouds corresponding to the three-dimensional grids according to coordinate values of the three-dimensional point clouds in the three-dimensional grids; and constructing a two-dimensional grid map according to the two-dimensional characteristic point cloud.
In the method for constructing the grid map, the construction device 200, and the intelligent robot 100 according to the embodiments of the present application, the obtained three-dimensional point cloud is divided into a plurality of three-dimensional grids according to a preset angular resolution with the intelligent robot 100 as the origin, two-dimensional feature point clouds corresponding to each three-dimensional grid are then calculated according to the coordinate values of the three-dimensional point clouds in each three-dimensional grid, and finally a two-dimensional grid map is constructed from the two-dimensional feature point clouds. Because the two-dimensional feature point cloud is calculated from the coordinate values of the three-dimensional point cloud, the two-dimensional grid map constructed from it preserves more three-dimensional information while reducing the amount of calculation, thus providing rich map information at a lower hardware cost. Compared with directly constructing a two-dimensional grid map from an acquired two-dimensional point cloud, the two-dimensional grid map constructed by the construction method of the embodiments of the present application carries richer information and is more robust during positioning. Moreover, the two-dimensional grid map constructed by this method can be used not only for positioning the intelligent robot 100 but also for path planning of the intelligent robot 100.
The intelligent robot 100 may be, in particular, an industrial robot, an agricultural robot, a home robot, a service robot, a cleaning robot, etc., without limitation. Further, the cleaning robot may be an intelligent robot 100 such as a sweeper, a scrubber, a cleaner, etc. The intelligent robot 100 may also include elements such as a communication interface 40, a cleaning implement, and the like. The intelligent robot 100 may be used to clean floors, tiles, pavement, or surfaces such as cement.
Specifically, the intelligent robot 100 includes a laser sensor 10, and the laser sensor 10 can detect a three-dimensional point cloud around the intelligent robot 100. The laser sensor 10 may specifically be a mechanical multidimensional laser radar, which includes a plurality of vertically arranged laser emitters, with a certain included angle between adjacent emitters. The plurality of laser emitters are driven to rotate about a fixed rotating shaft, so that a plurality of three-dimensional point clouds can be obtained. Referring to fig. 4, each laser emitter emits one point at a time; after one rotation, these points form a dotted line, which becomes one point cloud beam of the laser. Within a point cloud beam, the included angle α between two adjacent laser points, as seen from the emission origin, is called the angular resolution of the laser radar; it is determined by the rotation speed of the rotating shaft and the emission frequency of the laser emitter.
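As an illustration of the angular resolution α described above, a minimal Python sketch of the relationship between spin rate, emission frequency, and angular resolution follows. The function name and the example rates are hypothetical and are not part of the disclosed embodiment:

```python
def angular_resolution_deg(spin_rate_hz: float, fire_rate_hz: float) -> float:
    """Degrees swept between two consecutive firings of one laser emitter.

    One full rotation (360 degrees) happens spin_rate_hz times per second,
    while each emitter fires fire_rate_hz times per second.
    """
    return 360.0 * spin_rate_hz / fire_rate_hz

# e.g. a 10 Hz spin with an 18 kHz firing rate gives 0.2 deg per point
alpha = angular_resolution_deg(10.0, 18000.0)
```

A faster spin or a slower firing rate coarsens α, which is why the preset resolution β used later for gridding need not match the sensor's own α.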
In the embodiment shown in fig. 4, the multi-dimensional laser radar includes four laser transmitters arranged vertically, each of which can form a laser beam, each of which includes a plurality of three-dimensional point clouds, and the four laser transmitters are formed with four laser beams, such as the laser beam L1, the laser beam L2, the laser beam L3 and the laser beam L4 in fig. 4. The laser emission frequency of each laser emitter may be the same or different, and is not limited herein.
In step 011, the three-dimensional point cloud detected by the laser sensor 10 is acquired, and the three-dimensional point cloud can more comprehensively and more truly reflect the environmental information around the intelligent robot 100.
In step 012, the three-dimensional point cloud is divided into a plurality of three-dimensional grids according to a preset angular resolution using the intelligent robot 100 as an origin. The preset angular resolution may be preset, or may be user-defined. The predetermined angular resolution may be the same as or different from the angular resolution of the lidar. Specifically, a three-dimensional figure in a cylindrical or prismatic shape can be built according to the obtained multiple three-dimensional point clouds, and then the built three-dimensional figure is divided into multiple three-dimensional grids according to a preset angular resolution, so that the three-dimensional point clouds can be divided into the multiple three-dimensional grids. By dividing the three-dimensional point cloud into a plurality of three-dimensional grids, one or more three-dimensional point clouds can be included in each three-dimensional grid, so that the three-dimensional point clouds can be classified, and characteristic information of the three-dimensional point clouds in each three-dimensional grid can be better extracted.
In step 013, two-dimensional feature point clouds corresponding to the three-dimensional grids are calculated according to the coordinate values of the three-dimensional point clouds in the three-dimensional grids. The two-dimensional feature point cloud corresponding to each three-dimensional grid may be calculated from the coordinate values of the three-dimensional point clouds in that grid, for example from the x and y coordinates, from the x and z coordinates, from the y and z coordinates, or further from all of the x, y, and z coordinates. Because the two-dimensional feature point cloud is calculated from the coordinate values of the three-dimensional point clouds in each three-dimensional grid, it can preserve more of the three-dimensional feature information of the point clouds in that grid.
In step 014, a two-dimensional grid map is constructed from the two-dimensional feature point cloud. And after the two-dimensional characteristic point clouds corresponding to the three-dimensional grids are obtained through calculation, constructing a two-dimensional grid map according to the two-dimensional characteristic point clouds. Because the two-dimensional grid map is two-dimensional, the calculation amount can be reduced, and the two-dimensional grid map can be realized by adopting simpler hardware, so that the hardware cost is reduced. Meanwhile, as the two-dimensional grid map is built according to the two-dimensional characteristic point cloud, compared with the two-dimensional grid map built by directly using the two-dimensional laser point cloud, the information content in the two-dimensional grid map is more abundant, and the two-dimensional grid map is more robust when being used for positioning. Meanwhile, compared with a point cloud map established by directly using a three-dimensional laser point cloud, the two-dimensional grid map not only can be used for positioning the intelligent robot 100, but also can be used for path planning and other purposes of the intelligent robot 100, and the two-dimensional grid map is wider in application range.
Referring to fig. 5, in some embodiments, the intelligent robot 100 has mounted thereon a laser sensor 10, an encoder 51, a global positioning system (Global Positioning System, GPS) sensor 52, and/or a Beidou satellite positioning system (BeiDou Navigation Satellite System, BDS) sensor and the like. Meanwhile, one or more of inertial measurement units (Inertial measurement unit, IMU), ultra Wide Band (UWB) sensors, radio frequency identification (Radio Frequency Identification, RFID) sensors, image sensors (such as cameras, video cameras, etc.) and the like can be optionally installed, so that the intelligent robot 100 can be suitable for different working scenes, and meanwhile, the intelligent robot 100 can be more accurate in drawing, positioning and navigation.
The encoder 51 needs to be installed on the wheel axle of the intelligent robot 100 to enable dead reckoning, so that the encoder 51 can calculate the distance travelled by the intelligent robot 100 more accurately. Meanwhile, when acquiring data from a plurality of sensors, the timestamps of the sensors need to be aligned so that the data acquired from each sensor are synchronized in time, making the acquired data more accurate.
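As a hedged sketch of the timestamp alignment mentioned above (illustrative only — a real implementation would often interpolate between readings rather than pick the nearest one, and all names here are hypothetical):

```python
import bisect

def align_to_timestamps(ref_times, sensor):
    """For each reference timestamp, pick the sensor reading whose
    timestamp is closest. sensor is a time-sorted list of (t, value)."""
    times = [t for t, _ in sensor]
    aligned = []
    for t in ref_times:
        i = bisect.bisect_left(times, t)
        # compare the neighbours on either side of the insertion point
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        j = min(candidates, key=lambda k: abs(times[k] - t))
        aligned.append(sensor[j])
    return aligned
```

Aligning every sensor stream to the laser scan timestamps in this way keeps encoder, IMU, and point cloud data consistent in time before they are fused.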
Referring to fig. 6 and 7, in some embodiments, step 012 includes the steps of:
0121: converting the three-dimensional point cloud from the laser sensor 10 coordinate system into the body coordinate system of the intelligent robot 100; and
0122: in the machine body coordinate system, the intelligent robot 100 is used as an origin, and a plane formed by an x axis and a y axis is used as a reference, so that the three-dimensional point cloud is divided into a plurality of three-dimensional grids according to a preset angle resolution beta.
In some embodiments, the partitioning module 220 may also be used to convert the three-dimensional point cloud from the laser sensor 10 coordinate system into the body coordinate system of the intelligent robot 100; and dividing the three-dimensional point cloud into a plurality of three-dimensional grids according to a preset angle resolution beta by taking the intelligent robot 100 as an origin and taking a plane formed by an x axis and a y axis as a reference in a machine body coordinate system. That is, the partitioning module 220 may also be used to implement steps 0121 and 0122.
In some embodiments, the processor 20 may also be configured to convert the three-dimensional point cloud from the laser sensor 10 coordinate system into the body coordinate system of the intelligent robot 100; and dividing the three-dimensional point cloud into a plurality of three-dimensional grids according to a preset angle resolution beta by taking the intelligent robot 100 as an origin and taking a plane formed by an x axis and a y axis as a reference in a machine body coordinate system. That is, the processor 20 may also be used to implement steps 0121 and 0122.
Specifically, after the three-dimensional laser point cloud is obtained, the three-dimensional point cloud is converted from the coordinate system of the laser sensor 10 to the body coordinate system of the intelligent robot 100, so that the coordinates of the three-dimensional point cloud can be unified, the intelligent robot 100 can conveniently recognize, and the data can be more conveniently used (for example, drawing, navigation and the like) later. The specific process of converting the three-dimensional point cloud from the laser sensor 10 coordinate system into the body coordinate system of the intelligent robot 100 is not described in detail herein. In the body coordinate system, x is a direction pointing to the front of the intelligent robot 100, y is a direction pointing to the left of the intelligent robot 100, and z is a direction pointing to the top of the intelligent robot 100.
Referring to fig. 7, in the body coordinate system, the three-dimensional point cloud is divided into a plurality of three-dimensional grids according to a preset angular resolution β with respect to a plane formed by the x-axis and the y-axis with the intelligent robot 100 as an origin. That is, the three-dimensional point cloud is divided into a plurality of three-dimensional grids according to the angular resolution β with the xoy plane as a reference and the center of the intelligent robot 100 as the origin, and each three-dimensional grid includes all three-dimensional point clouds within the angular range corresponding to the three-dimensional grid. In one example, the angular resolution β is 5 °, and the three-dimensional point cloud is divided into 72 three-dimensional grids; in yet another example, the angular resolution β is 10 °, and the three-dimensional point cloud is divided into 36 three-dimensional grids; in yet another example, the angular resolution is 24 °, and the three-dimensional point cloud is divided into 15 three-dimensional grids.
The three-dimensional point cloud is divided into a plurality of three-dimensional grids, so that the three-dimensional point cloud in each three-dimensional grid can be calculated, counted and analyzed, and the three-dimensional characteristics of the three-dimensional point cloud in each three-dimensional grid can be extracted better. The angular resolution β may be the same as or different from the angular resolution α of the lidar, and is not limited herein.
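The division of step 0122 can be sketched as follows, assuming the points are already expressed in the body coordinate system. The function name and tuple layout are illustrative assumptions, not the patent's implementation:

```python
import math

def divide_into_grids(points, beta_deg):
    """Bin 3-D points (x, y, z) in the body frame into angular sectors of
    width beta_deg, measured in the xoy plane around the robot origin."""
    n_bins = int(round(360.0 / beta_deg))
    grids = [[] for _ in range(n_bins)]
    for x, y, z in points:
        # azimuth of the point in the xoy plane, normalized to [0, 360)
        theta = math.degrees(math.atan2(y, x)) % 360.0
        grids[int(theta // beta_deg) % n_bins].append((x, y, z))
    return grids
```

With β = 10° this produces the 36 sectors of the example above; each sector collects every three-dimensional point whose azimuth falls in its angular range, regardless of height z.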
Referring to fig. 7 and 8, in some embodiments, step 013 comprises the steps of:
0131: acquiring x coordinates and y coordinates of three-dimensional point clouds in each three-dimensional grid;
0132: classifying three-dimensional point clouds in each three-dimensional grid, wherein the sum of differences between x coordinates and differences between y coordinates is within a difference threshold, into a group of approximate point clouds; and
0133: and acquiring the representative point clouds of each group of the approximate point clouds according to the x coordinate and the y coordinate in each group of the approximate point clouds, wherein all the representative point clouds in the three-dimensional grid form two-dimensional characteristic point clouds corresponding to the three-dimensional grid.
In some embodiments, the first computing module 230 is further configured to obtain x-coordinates and y-coordinates of a three-dimensional point cloud within each three-dimensional grid; classifying three-dimensional point clouds in each three-dimensional grid, wherein the sum of differences between x coordinates and differences between y coordinates is within a difference threshold, into a group of approximate point clouds; and obtaining the representative point clouds of each group of the approximate point clouds according to the x coordinate and the y coordinate in each group of the approximate point clouds, wherein all the representative point clouds in the three-dimensional grid form two-dimensional characteristic point clouds corresponding to the three-dimensional grid. That is, the first computing module 230 is also used to implement step 0131, step 0132, and step 0133.
In some embodiments, the processor 20 is further configured to obtain x-coordinates and y-coordinates of a three-dimensional point cloud within each three-dimensional grid; classifying three-dimensional point clouds in each three-dimensional grid, wherein the sum of differences between x coordinates and differences between y coordinates is within a difference threshold, into a group of approximate point clouds; and obtaining the representative point clouds of each group of the approximate point clouds according to the x coordinate and the y coordinate in each group of the approximate point clouds, wherein all the representative point clouds in the three-dimensional grid form two-dimensional characteristic point clouds corresponding to the three-dimensional grid. That is, the processor 20 is also configured to implement step 0131, step 0132, and step 0133.
Specifically, coordinates of three-dimensional point clouds in each three-dimensional grid include x coordinates, y coordinates and z coordinates, the x coordinates and the y coordinates of the three-dimensional point clouds in each three-dimensional grid are obtained, then differences between the x coordinates and differences between the y coordinates of each three-dimensional point cloud are calculated respectively, and three-dimensional point clouds with differences between the x coordinates and differences between the y coordinates within a difference threshold are classified as a group of approximate point clouds. It is understood that the multiple three-dimensional point clouds in each set of approximate point clouds are on or near the same vertical line segment. The difference threshold may be a debugging experience value obtained by multiple times of debugging by the user, may be preset, or may be user-defined, for example, the difference threshold may be a value of 0.05, 0.06, 0.04, 0.08, etc., which is not limited herein.
It may be appreciated that each three-dimensional grid may include one or more groups of approximate point clouds, for example two, three, four, five, or more groups, and each group of approximate point clouds includes at least one three-dimensional point cloud. The representative point cloud of each group is obtained from the x and y coordinates of the three-dimensional point clouds in that group. For example, the averages of the x coordinates and of the y coordinates of the point clouds in the group may be taken as the coordinates of the representative point cloud. Alternatively, after calculating these averages, the difference between each point cloud's x coordinate and the average x coordinate, and the difference between its y coordinate and the average y coordinate, may be calculated, and the x and y coordinates of the point cloud whose total difference from the averages is smallest may be taken as the coordinates of the representative point cloud.
Further, the representative point cloud is calculated according to the x coordinates and the y coordinates of a plurality of three-dimensional point clouds in each set of approximate point clouds, so that the representative point cloud can more carry three-dimensional features contained in the three-dimensional point clouds in the set of approximate point clouds. Each three-dimensional grid can be internally provided with one or more representative point clouds, and all the representative point clouds in each three-dimensional grid are combined to form a two-dimensional characteristic point cloud corresponding to the three-dimensional grid. It can be understood that the two-dimensional characteristic point cloud is not an independent point cloud, but is a collection of a plurality of representative point clouds, so that the two-dimensional characteristic point cloud can be provided with more three-dimensional characteristics contained in the three-dimensional point cloud in the corresponding three-dimensional grid, the information content in the two-dimensional grid map established based on the two-dimensional characteristic point cloud is richer, and the two-dimensional grid map is more accurate when being used for navigation and positioning.
More specifically, referring to fig. 4 and 7, in one embodiment, for each three-dimensional grid, starting from the second three-dimensional point cloud in the order of the laser beams, the difference between the x coordinates and the difference between the y coordinates of each three-dimensional point cloud and the previous one are calculated. When the sum of the x-coordinate difference and the y-coordinate difference is less than or equal to the difference threshold, the three-dimensional point cloud and the previous one are considered to lie on the same vertical object and are treated as approximate point clouds. Similarly, the differences between the third and second point clouds, between the fourth and third point clouds, and so on up to the differences between the nth and the (n-1)th point clouds are calculated in turn; whenever the sum of the coordinate differences between a three-dimensional point cloud and the previous one is less than or equal to the difference threshold, the two are considered approximate point clouds. If the sum of the coordinate differences between a three-dimensional point cloud and the previous one is greater than the difference threshold, the two are not considered approximate point clouds, and a new group of approximate point clouds is started from that three-dimensional point cloud.
That is, within each three-dimensional grid, three-dimensional point clouds belonging to different laser beams are traversed from top to bottom, all three-dimensional point clouds within a difference threshold of the sum of the x-coordinate difference and the y-coordinate difference are found, and the set of three-dimensional point clouds are classified as approximate point clouds. It can be understood that the three-dimensional point clouds in each group of the approximate point clouds are all on the same vertical object, and the three-dimensional point clouds in each group of the approximate point clouds are all continuous, so that the three-dimensional characteristics of the three-dimensional point clouds in the approximate point clouds are rich, the three-dimensional point clouds are stable, the representative point clouds obtained through calculation of the three-dimensional point clouds can have more three-dimensional characteristics, and the representative point clouds are more accurate.
Further, in one embodiment, a set of three-dimensional point clouds whose sums of x-coordinate and y-coordinate differences are within the difference threshold is classified as a group of approximate point clouds only when their number is greater than or equal to a number threshold. This avoids a group consisting of a single three-dimensional point cloud or of too few point clouds, so that the representative point cloud is calculated from the x and y coordinates of a plurality of three-dimensional point clouds and carries richer three-dimensional features. The number threshold may be a fixed value or an empirical value obtained through repeated debugging, for example 2, 3, 4, 5, 6, 7, or more.
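The grouping and representative-point computation of steps 0131 to 0133 might look like the following sketch. It assumes the points of one three-dimensional grid arrive already ordered by beam (top to bottom) and uses the mean of x and y as the representative coordinates; the function name, threshold defaults, and tuple layout are hypothetical:

```python
def representative_points(beam_points, diff_thresh=0.05, min_count=3):
    """Group consecutive (x, y, z) points whose x/y differences from the
    previous point sum to at most diff_thresh, then reduce each group
    with at least min_count members to its mean (x, y)."""
    groups, current = [], []
    for p in beam_points:
        if current and (abs(p[0] - current[-1][0]) +
                        abs(p[1] - current[-1][1])) > diff_thresh:
            # the chain of approximate points is broken: start a new group
            groups.append(current)
            current = []
        current.append(p)
    if current:
        groups.append(current)
    reps = []
    for g in groups:
        if len(g) >= min_count:  # drop groups with too few points
            reps.append((sum(p[0] for p in g) / len(g),
                         sum(p[1] for p in g) / len(g)))
    return reps
```

All representative points produced for one three-dimensional grid together form that grid's two-dimensional feature point cloud.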
Referring to fig. 9 and 10, in some embodiments, step 014 includes the steps of:
0141: in the body coordinate system of the intelligent robot 100, a first map is constructed according to the largest x coordinate, the largest y coordinate, the smallest x coordinate and the smallest y coordinate in the two-dimensional feature point cloud;
0142: dividing the first map into a plurality of grids according to a preset map resolution; and
0143: whether the plurality of grids are obstacles is marked to form a two-dimensional grid map.
In some embodiments, the building module 240 may be further configured to build the first map according to a maximum x coordinate, a maximum y coordinate, a minimum x coordinate, and a minimum y coordinate in the two-dimensional feature point cloud in the body coordinate system of the intelligent robot 100; dividing the first map into a plurality of grids according to a preset map resolution; and marking whether the plurality of grids are obstacles to form a two-dimensional grid map. That is, build module 240 may also be used to implement step 0141, step 0142, and step 0143.
In some embodiments, the processor 20 may be further configured to construct the first map according to a maximum x-coordinate, a maximum y-coordinate, a minimum x-coordinate, and a minimum y-coordinate in the two-dimensional feature point cloud in the body coordinate system of the intelligent robot 100; dividing the first map into a plurality of grids according to a preset map resolution; and marking whether the plurality of grids are obstacles to form a two-dimensional grid map. That is, processor 20 may also be used to implement step 0141, step 0142 and step 0143.
Specifically, each three-dimensional grid corresponds to a two-dimensional characteristic point cloud, each two-dimensional characteristic point cloud comprises one or more representative point clouds, and a first map is constructed according to the largest x coordinate, the largest y coordinate, the smallest x coordinate and the smallest y coordinate in all the representative point clouds. For example, a straight line perpendicular to the x axis is drawn through the largest x coordinate, a straight line perpendicular to the x axis is drawn through the smallest x coordinate, a straight line perpendicular to the y axis is drawn through the largest y coordinate, a straight line perpendicular to the y axis is drawn through the smallest y coordinate, and a closed area formed by the four straight lines is the first map. Therefore, the range of the constructed first map is larger, and all the representative point clouds can be covered by the first map.
Further, the first map is divided into a plurality of grids according to a preset map resolution. The preset map resolution may be a preset fixed value, or may be set according to the number of the representative point clouds, for example, the more the representative point clouds are, the higher the map resolution is. Of course, the higher the map resolution, the more grids, and the more accurate the two-dimensional grid map. Then, whether each grid is an obstacle is identified, and the grids that are the obstacles are marked, and finally a two-dimensional grid map is formed. For example, the grid with the obstacle is marked black or the grid with the obstacle is marked with a character symbol so that the obstacle grid can be distinguished from other grids.
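Steps 0141 and 0142 — bounding all representative point clouds and dividing the resulting first map into grids — can be sketched as follows. The origin convention and the cell encoding (0 for unknown before marking) are assumptions for illustration:

```python
import math

def build_first_map(reps, resolution):
    """Bounding box of the representative (x, y) points, divided into
    square grid cells of the given resolution (metres per cell)."""
    xs = [p[0] for p in reps]
    ys = [p[1] for p in reps]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    width = max(1, math.ceil((max_x - min_x) / resolution))
    height = max(1, math.ceil((max_y - min_y) / resolution))
    # all cells start unmarked; obstacle/free marking happens in step 0143
    cells = [[0] * width for _ in range(height)]
    return (min_x, min_y), cells
```

The four bounding lines of the text correspond to min_x, max_x, min_y, and max_y here; every representative point is guaranteed to fall inside the returned cell array.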
Referring to fig. 10 and 11, in some embodiments, step 0143 includes the steps of:
1431: traversing each representative point cloud by taking the intelligent robot 100 as an origin; and
1432: and determining whether each grid is an obstacle according to the traversing result, and marking in the first map to form a two-dimensional grid map.
In some embodiments, the building module 240 may also be configured to traverse each representative point cloud using the intelligent robot 100 as an origin; and determining whether each grid is an obstacle according to the traversing result, and marking in the first map to form a two-dimensional grid map. That is, the build module 240 may also be used to implement step 1431 and step 1432.
In some embodiments, the processor 20 may also be configured to traverse each representative point cloud using the intelligent robot 100 as an origin; and determining whether each grid is an obstacle according to the traversing result, and marking in the first map to form a two-dimensional grid map. That is, the processor 20 may also be used to implement step 1431 and step 1432.
Specifically, referring to fig. 10, with the intelligent robot 100 as the origin, each representative point cloud is traversed to determine whether each grid in the two-dimensional grid map is an obstacle. Note that taking the intelligent robot 100 as the origin means taking the center of the intelligent robot 100 as the origin. When the two-dimensional grid map is initially built, the center of the intelligent robot 100 coincides with the origin of the body coordinate system.
It will be appreciated that light projection (raytrace) is performed with the origin of the body coordinate system as the starting point and each representative point cloud as the end point; that is, a line is formed between the origin of the body coordinate system and the representative point cloud. The grid where the representative point cloud is located is given a probability of a hit, and such a grid is regarded as an obstacle grid and marked as an obstacle region, shown in black in fig. 10. The grids through which the lines pass are each given a probability of being unobstructed; such grids are considered free of obstacles and are marked as unobstructed regions, shown in white in fig. 10. For grids crossed by no line, it cannot be determined whether an obstacle exists, and these grids may be marked as unknown regions. In this way, every grid in the two-dimensional grid map is clearly marked, so the map is more accurate when used for positioning, navigation, and task planning.
Giving the grid in which the representative point cloud lies a hit specifically means that rays are cast from the origin toward that grid many times, and the grid is regarded as an obstacle only when the probability of it being hit exceeds a probability threshold. The probability threshold may be a fixed value set in advance or an empirical value obtained through repeated debugging, for example 0.5, 0.6, 0.7, 0.8 or 0.9.
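The ray-casting bookkeeping described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the Bresenham-style line tracer, the `raytrace_update`/`classify` helper names and the -1/0/1 encoding for unknown/free/obstacle cells are choices made for the example; the `p_thresh` parameter plays the role of the probability threshold mentioned above.

```python
import numpy as np

def bresenham(x0, y0, x1, y1):
    """Integer grid cells crossed by the ray from (x0, y0) to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    x, y = x0, y0
    while (x, y) != (x1, y1):
        cells.append((x, y))
        e2 = 2 * err
        if e2 > -dy:
            err -= dy; x += sx
        if e2 < dx:
            err += dx; y += sy
    cells.append((x1, y1))
    return cells

def raytrace_update(hits, misses, origin_cell, endpoint_cells):
    """Accumulate hit/miss counts for one scan (origin = robot center cell)."""
    for ex, ey in endpoint_cells:
        path = bresenham(origin_cell[0], origin_cell[1], ex, ey)
        for cx, cy in path[:-1]:          # cells the ray passes through: misses
            misses[cx, cy] += 1
        hits[ex, ey] += 1                  # cell the ray ends in: a hit

def classify(hits, misses, p_thresh=0.5):
    """-1 unknown (grey), 0 free (white), 1 obstacle (black)."""
    total = hits + misses
    grid = np.full(hits.shape, -1, dtype=int)          # never observed: unknown
    seen = total > 0
    prob = np.divide(hits, total, out=np.zeros_like(hits, float), where=seen)
    grid[seen & (prob > p_thresh)] = 1                 # hit often enough: obstacle
    grid[seen & (prob <= p_thresh)] = 0                # mostly passed through: free
    return grid
```

A cell accumulates one hit per ray ending in it and one miss per ray crossing it, so after several scans the hit probability separates obstacle grids from free grids, with untouched grids left unknown.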
Referring to fig. 12 and 13, in some embodiments, the construction method further comprises the following steps:
015: calculating an initial pose of the intelligent robot 100 at the current moment based on the pose of the intelligent robot 100 at the previous moment and the driving distance of the intelligent robot 100;
016: acquiring a two-dimensional characteristic point cloud of the intelligent robot 100 at the current moment; and
017: and calculating the real pose of the intelligent robot 100 based on the initial pose, the two-dimensional characteristic point cloud at the current moment and the two-dimensional grid map constructed at the last moment.
In certain embodiments, the construction apparatus 200 further includes a second calculation module 250, a second acquisition module 260 and a third calculation module 270. The second calculation module 250 may be configured to calculate the initial pose of the intelligent robot 100 at the current moment based on the pose of the intelligent robot 100 at the previous moment and the driving distance of the intelligent robot 100; the second acquisition module 260 may be configured to acquire the two-dimensional characteristic point cloud of the intelligent robot 100 at the current moment; and the third calculation module 270 may be configured to calculate the real pose of the intelligent robot 100 based on the initial pose, the two-dimensional characteristic point cloud at the current moment and the two-dimensional grid map constructed at the previous moment. That is, the second calculation module 250 may be used to implement step 015, the second acquisition module 260 may be used to implement step 016, and the third calculation module 270 may be used to implement step 017.
In some embodiments, the processor 20 may also be configured to calculate the initial pose of the intelligent robot 100 at the current moment based on the pose of the intelligent robot 100 at the previous moment and the driving distance of the intelligent robot 100; acquire the two-dimensional characteristic point cloud of the intelligent robot 100 at the current moment; and calculate the real pose of the intelligent robot 100 based on the initial pose, the two-dimensional characteristic point cloud at the current moment and the two-dimensional grid map constructed at the previous moment.
Specifically, the pose P_t1 of the intelligent robot 100 at the previous moment is acquired, where P_t1 is the real pose of the intelligent robot 100 at the previous moment, together with the driving distance δ_t1t2 of the intelligent robot 100 from the previous moment to the current moment. For example, the driving distance δ_t1t2 from the previous moment to the current moment may be measured by a sensor such as an odometer, GPS or UWB. The initial pose of the intelligent robot 100 at the current moment is then P′_t2 = P_t1 + δ_t1t2. At this point, steps 011, 012 and 013 are performed to acquire the two-dimensional characteristic point cloud of the intelligent robot 100 at the current moment.
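The prediction P′_t2 = P_t1 + δ_t1t2 can be sketched as a planar pose composition. This assumes (an assumption for illustration, not stated in the patent) that a pose is an (x, y, θ) tuple in the map frame and that the odometry displacement δ is measured in the robot's own frame; `predict_pose` is a hypothetical helper name.

```python
import math

def predict_pose(pose_t1, delta):
    """Compose the previous real pose with the body-frame displacement delta.

    pose_t1 = (x, y, theta) in the map frame; delta = (dx, dy, dtheta)
    measured in the robot's own frame (e.g. by an odometer)."""
    x, y, th = pose_t1
    dx, dy, dth = delta
    # Rotate the body-frame displacement into the map frame, then add it.
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)
```

Note that the "+" in P′_t2 = P_t1 + δ_t1t2 is this composition, not a plain vector sum: driving 1 m forward while facing 90° moves the robot along the map's y axis.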
Further, according to the initial pose P′_t2, the two-dimensional grid map constructed at the previous moment and the two-dimensional characteristic point cloud at the current moment, the initial pose P′_t2 is adjusted by a Correlative Scan Matcher (CSM) algorithm to finally obtain the real pose P_t2 of the intelligent robot 100. Specifically, within a preset range near the initial pose in the two-dimensional grid map, all poses in that range are traversed according to a preset coordinate resolution and a preset angular resolution, and the score of the acquired two-dimensional characteristic point cloud in the two-dimensional grid map is calculated at each pose; the pose with the highest score is the real pose of the intelligent robot 100 at the current moment. The preset range, the preset coordinate resolution and the preset angular resolution may be user-defined or fixed values.
More specifically, the two-dimensional grid map is first Gaussian-blurred to obtain scores for the grids in the vicinity of each obstacle grid, where the closer a grid is to an obstacle grid, the higher its score. The two-dimensional characteristic point cloud at the current moment is placed into the two-dimensional grid map at the initial pose; each two-dimensional characteristic point obtains the score of the grid it falls in, and the total score at the initial pose is calculated by summing the scores of all the two-dimensional characteristic points. The pose is then adjusted continuously within the preset range near the initial pose according to the preset coordinate resolution and the preset angular resolution, the positions of the corresponding two-dimensional characteristic points in the two-dimensional grid map are adjusted accordingly, and the total score of the two-dimensional characteristic point cloud at each pose is calculated. When all poses within the preset range have been traversed, the pose corresponding to the highest total score is the real pose of the intelligent robot 100.
Referring to fig. 14 and 15, in some embodiments, the construction method further comprises the following steps:
018: and expanding a two-dimensional grid map based on the real pose and the two-dimensional characteristic point cloud at the current moment.
In some embodiments, the construction apparatus 200 further includes an expansion module 280, where the expansion module 280 is configured to expand the two-dimensional grid map based on the real pose and the two-dimensional feature point cloud of the current time. That is, the extension module 280 is configured to implement step 018.
In some embodiments, the processor 20 is further configured to expand the two-dimensional grid map based on the real pose and the two-dimensional feature point cloud at the current time. That is, the processor 20 is configured to implement step 018.
Specifically, after the real pose of the intelligent robot 100 is obtained, step 0141 may be executed to expand the range of the two-dimensional grid map of the previous moment according to the two-dimensional characteristic point cloud at the current moment. The expanded region is then divided into grids according to the preset map resolution, and ray casting is performed with the origin of the body coordinate system as the starting point and each two-dimensional characteristic point at the current moment as the end point, so as to determine whether the grids in the expanded region are obstacles and mark them, forming a two-dimensional grid map with a larger range.
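The range-expansion step can be sketched as growing the occupancy array until every new feature point falls inside it, while preserving the already-marked cells. The function name `expand_grid`, the `UNKNOWN` sentinel and the (row, column) ↔ (y, x) convention are assumptions for illustration, not the patented data layout.

```python
import numpy as np

UNKNOWN = -1  # sentinel for never-observed cells (assumed encoding)

def expand_grid(grid, origin, resolution, new_points):
    """Grow the occupancy array so every new world-frame point has a cell.

    grid       : 2-D int array (rows x cols), UNKNOWN where never observed
    origin     : (ox, oy) world coordinate of cell (0, 0)
    new_points : world-frame feature points from the latest scan
    Returns (new_grid, new_origin); existing cell values are preserved."""
    ox, oy = origin
    rows, cols = grid.shape
    # Cell indices of the new points (row from y, column from x).
    min_r = min(0, *(int((y - oy) // resolution) for x, y in new_points))
    min_c = min(0, *(int((x - ox) // resolution) for x, y in new_points))
    max_r = max(rows - 1, *(int((y - oy) // resolution) for x, y in new_points))
    max_c = max(cols - 1, *(int((x - ox) // resolution) for x, y in new_points))
    new_grid = np.full((max_r - min_r + 1, max_c - min_c + 1),
                       UNKNOWN, dtype=grid.dtype)
    # Copy the old map into its shifted position inside the larger array.
    new_grid[-min_r:-min_r + rows, -min_c:-min_c + cols] = grid
    new_origin = (ox + min_c * resolution, oy + min_r * resolution)
    return new_grid, new_origin
```

After expansion, the new cells start as unknown; the ray casting described above then classifies them, and re-tracing through the old region is what updates the previous map at the same time.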
Meanwhile, when determining whether the grids in the expanded region are obstacles, the two-dimensional grid map of the previous moment can also be updated, making the two-dimensional grid map more accurate and the navigation and positioning of the intelligent robot 100 more precise.
Referring to fig. 1 and 2 again, the memory 30 is used for storing a computer program that can be run on the processor 20, and the processor 20 implements the method for constructing the grid map according to any of the above embodiments when executing the program.
The memory 30 may comprise a high-speed RAM and may further comprise a non-volatile memory, such as at least one disk storage. Further, the intelligent robot 100 may also include a communication interface 40, the communication interface 40 being used for communication between the memory 30 and the processor 20.
If the memory 30, the processor 20 and the communication interface 40 are implemented independently, the communication interface 40, the memory 30 and the processor 20 may be connected to each other and communicate with each other through a bus. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in fig. 2, but this does not mean that there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the memory 30, the processor 20, and the communication interface 40 are integrated on a chip, the memory 30, the processor 20, and the communication interface 40 may communicate with each other through internal interfaces.
The processor 20 may be a central processing unit (Central Processing Unit, abbreviated as CPU) or an application specific integrated circuit (Application Specific Integrated Circuit, abbreviated as ASIC) or one or more integrated circuits configured to implement embodiments of the present application.
Referring to fig. 16, a non-transitory computer-readable storage medium 300 of an embodiment of the present application includes computer-executable instructions 301, which when executed by one or more processors 400, cause the processors 400 to perform the method of constructing a grid map of any of the embodiments of the present application.
For example, referring to fig. 1, when the computer-executable instructions 301 are executed by the processor 400, the processor 400 is configured to perform the following steps:
011: acquiring a three-dimensional point cloud detected by the laser sensor 10;
012: dividing the three-dimensional point cloud into a plurality of three-dimensional grids according to a preset angular resolution by taking the intelligent robot 100 as an origin;
013: respectively calculating two-dimensional characteristic point clouds corresponding to the three-dimensional grids according to coordinate values of the three-dimensional point clouds in the three-dimensional grids; and
014: and constructing a two-dimensional grid map according to the two-dimensional characteristic point cloud.
For another example, referring to fig. 12, when the computer-executable instructions 301 are executed by the processor 400, the processor 400 is configured to perform the following steps:
015: calculating an initial pose of the intelligent robot 100 at the current moment based on the pose of the intelligent robot 100 at the previous moment and the driving distance of the intelligent robot 100;
016: acquiring a two-dimensional characteristic point cloud of the intelligent robot 100 at the current moment; and
017: and calculating the real pose of the intelligent robot 100 based on the initial pose, the two-dimensional characteristic point cloud at the current moment and the two-dimensional grid map constructed at the last moment.
Any process or method description in a flow chart, or otherwise described herein, may be understood as representing a module, segment or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also includes additional implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
The logic and/or steps represented in the flow charts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus or device, such as a computer-based system, a system including the processor 20, or another system that can fetch instructions from the instruction execution system, apparatus or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate or transport the program for use by or in connection with the instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, for instance by optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, reference to the terms "certain embodiments," "one embodiment," "some embodiments," "an exemplary embodiment," "an example," "a particular example," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" is at least two, for example two, three, unless explicitly defined otherwise.
Although embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application, which is defined by the claims and their equivalents.

Claims (9)

1. A method for constructing a grid map applied to an intelligent robot, the intelligent robot comprising a laser sensor, the method comprising:
acquiring a three-dimensional point cloud detected by the laser sensor;
dividing the three-dimensional point cloud into a plurality of three-dimensional grids according to a preset angle resolution by taking the intelligent robot as an origin;
according to the coordinate values of the three-dimensional point clouds in the three-dimensional grids, respectively calculating two-dimensional characteristic point clouds corresponding to the three-dimensional grids; a kind of electronic device with high-pressure air-conditioning system
Constructing a two-dimensional grid map according to the two-dimensional characteristic point cloud;
wherein the respectively calculating, according to the coordinate values of the three-dimensional point clouds in the three-dimensional grids, the two-dimensional characteristic point clouds corresponding to the three-dimensional grids comprises:
acquiring x coordinates and y coordinates of three-dimensional point clouds in each three-dimensional grid;
classifying the three-dimensional point clouds in each three-dimensional grid whose sum of x-coordinate differences and y-coordinate differences is within a difference threshold into a group of approximate point clouds; and
obtaining a representative point cloud of each group of the approximate point clouds according to the x coordinates and y coordinates in each group of the approximate point clouds, wherein all the representative point clouds in the three-dimensional grids form the two-dimensional characteristic point clouds corresponding to the three-dimensional grids.
2. The method according to claim 1, wherein the dividing the three-dimensional point cloud into a plurality of three-dimensional grids with the intelligent robot as an origin according to a preset angular resolution includes:
converting the three-dimensional point cloud from the laser sensor coordinate system into the body coordinate system of the intelligent robot; and
In the machine body coordinate system, the intelligent robot is used as an origin, a plane formed by an x axis and a y axis is used as a reference, and the three-dimensional point cloud is divided into a plurality of three-dimensional grids according to a preset angle resolution.
3. The construction method according to claim 1, wherein the constructing a two-dimensional grid map according to the two-dimensional characteristic point cloud comprises:
in the body coordinate system of the intelligent robot, a first map is constructed according to the largest x coordinate, the largest y coordinate, the smallest x coordinate and the smallest y coordinate in the two-dimensional characteristic point cloud;
dividing the first map into a plurality of grids according to a preset map resolution; and
marking whether each of the plurality of grids is an obstacle to form the two-dimensional grid map.
4. The construction method according to claim 3, wherein the marking whether each of the plurality of grids is an obstacle to form the two-dimensional grid map comprises:
traversing each representative point cloud with the intelligent robot as the origin; and
And determining whether each grid is an obstacle according to the traversing result, and marking in the first map to form the two-dimensional grid map.
5. The construction method according to claim 1, characterized in that the construction method further comprises:
calculating the initial pose of the intelligent robot at the current moment based on the pose of the intelligent robot at the previous moment and the running distance of the intelligent robot;
acquiring the two-dimensional characteristic point cloud of the intelligent robot at the current moment; and
And calculating the real pose of the intelligent robot based on the initial pose, the two-dimensional characteristic point cloud at the current moment and the two-dimensional grid map constructed at the last moment.
6. The construction method according to claim 5, characterized in that the construction method further comprises:
expanding the two-dimensional grid map based on the real pose and the two-dimensional characteristic point cloud at the current moment.
7. A grid map construction device applied to an intelligent robot, wherein the intelligent robot comprises a laser sensor, and the construction device is used for realizing the grid map construction method as set forth in claim 1, and the construction device comprises:
the first acquisition module is used for acquiring the three-dimensional point cloud detected by the laser sensor;
the dividing module is used for dividing the three-dimensional point cloud into a plurality of three-dimensional grids according to a preset angle resolution by taking the intelligent robot as an origin;
the first calculation module is used for calculating the two-dimensional characteristic point clouds corresponding to the three-dimensional grids according to the coordinate values of the three-dimensional point clouds in the three-dimensional grids; and
And the construction module is used for constructing a two-dimensional grid map according to the two-dimensional characteristic point cloud.
8. An intelligent robot, characterized in that the intelligent robot comprises:
one or more processors, memory; and
one or more programs, wherein the one or more programs are stored in the memory and executed by the one or more processors, the programs comprising instructions for performing the build method of any of claims 1-6.
9. A non-transitory computer-readable storage medium containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the method of construction of any of claims 1 to 6.
CN202010690962.XA 2020-07-17 2020-07-17 Construction method, construction device, intelligent robot and readable storage medium Active CN111881239B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010690962.XA CN111881239B (en) 2020-07-17 2020-07-17 Construction method, construction device, intelligent robot and readable storage medium

Publications (2)

Publication Number Publication Date
CN111881239A CN111881239A (en) 2020-11-03
CN111881239B true CN111881239B (en) 2023-07-28

Family

ID=73156035

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010690962.XA Active CN111881239B (en) 2020-07-17 2020-07-17 Construction method, construction device, intelligent robot and readable storage medium

Country Status (1)

Country Link
CN (1) CN111881239B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113761090B (en) * 2020-11-17 2024-04-05 北京京东乾石科技有限公司 Positioning method and device based on point cloud map
CN112729320B (en) * 2020-12-22 2022-05-17 中国第一汽车股份有限公司 Method, device and equipment for constructing obstacle map and storage medium
CN113313765B (en) * 2021-05-28 2023-12-01 上海高仙自动化科技发展有限公司 Positioning method, positioning device, electronic equipment and storage medium
CN113284240B (en) * 2021-06-18 2022-05-31 深圳市商汤科技有限公司 Map construction method and device, electronic equipment and storage medium
CN113358132A (en) * 2021-06-30 2021-09-07 山东新一代信息产业技术研究院有限公司 Robot navigation map optimization method and system
CN113808142B (en) * 2021-08-19 2024-04-26 高德软件有限公司 Ground identification recognition method and device and electronic equipment
CN113758481A (en) * 2021-09-03 2021-12-07 Oppo广东移动通信有限公司 Grid map generation method, device, system, storage medium and electronic equipment
CN114415661B (en) * 2021-12-15 2023-09-22 中国农业大学 Planar laser SLAM and navigation method based on compressed three-dimensional space point cloud
CN114577216A (en) * 2022-03-31 2022-06-03 美智纵横科技有限责任公司 Navigation map construction method and device, robot and storage medium
CN114897895B (en) * 2022-07-12 2022-11-15 深圳市信润富联数字科技有限公司 Point cloud leveling method and device, electronic equipment and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN106997049A (en) * 2017-03-14 2017-08-01 奇瑞汽车股份有限公司 A kind of method and apparatus of the detection barrier based on laser point cloud data
CN109857123A (en) * 2019-03-21 2019-06-07 郑州大学 A kind of fusion method of view-based access control model perception and the indoor SLAM map of laser acquisition
CN110095786A (en) * 2019-04-30 2019-08-06 北京云迹科技有限公司 Three-dimensional point cloud based on a line laser radar ground drawing generating method and system
CN110134120A (en) * 2019-04-11 2019-08-16 广东工业大学 Paths planning method based on multi-line laser radar under a kind of non-structural road surface
CN110221603A (en) * 2019-05-13 2019-09-10 浙江大学 A kind of long-distance barrier object detecting method based on the fusion of laser radar multiframe point cloud
CN111381245A (en) * 2020-02-29 2020-07-07 天津大学 Laser SLAM self-adaptive resolution grid map construction method

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US10788836B2 (en) * 2016-02-29 2020-09-29 AI Incorporated Obstacle recognition method for autonomous robots
CN108319655B (en) * 2017-12-29 2021-05-07 百度在线网络技术(北京)有限公司 Method and device for generating grid map


Non-Patent Citations (2)

Title
Grid map construction method based on binocular stereo vision; Wang Xuan et al.; Software; 2012-11-15 (No. 11); full text *
Modular grid map construction algorithm for complex environments; Qin Yuxin et al.; Control Engineering; 2016-10-20 (No. 10); pp. 1627-1633 *

Also Published As

Publication number Publication date
CN111881239A (en) 2020-11-03

Similar Documents

Publication Publication Date Title
CN111881239B (en) Construction method, construction device, intelligent robot and readable storage medium
CN110673115B (en) Combined calibration method, device, equipment and medium for radar and integrated navigation system
EP3506212A1 (en) Method and apparatus for generating raster map
US9298992B2 (en) Geographic feature-based localization with feature weighting
US20190005667A1 (en) Ground Surface Estimation
CN112526993B (en) Grid map updating method, device, robot and storage medium
CN109506652B (en) Optical flow data fusion method based on carpet migration and cleaning robot
JP6649743B2 (en) Matching evaluation device and matching evaluation method
US20230251097A1 (en) Efficient map matching method for autonomous driving and apparatus thereof
CN111694358A (en) Method and device for controlling transfer robot, and storage medium
CN113768419B (en) Method and device for determining sweeping direction of sweeper and sweeper
CN112686951A (en) Method, device, terminal and storage medium for determining robot position
Qingqing et al. Multi-modal lidar dataset for benchmarking general-purpose localization and mapping algorithms
CN114114367A (en) AGV outdoor positioning switching method, computer device and program product
CN114842106A (en) Method and apparatus for constructing grid map, self-walking apparatus, and storage medium
Li et al. Coarse-to-fine segmentation on lidar point clouds in spherical coordinate and beyond
CN113158779A (en) Walking method and device and computer storage medium
CN112782721A (en) Passable area detection method and device, electronic equipment and storage medium
US20220100201A1 (en) Information processing device and mobile robot
CN116429121A (en) Positioning method and device based on multiple sensors, self-mobile device and storage medium
CN114812539A (en) Map search method, map using method, map searching device, map using device, robot and storage medium
US20220221585A1 (en) Systems and methods for monitoring lidar sensor health
CN114660583A (en) Robot and repositioning method, device and medium thereof
JP7329079B2 (en) How to generate universally usable feature maps
US20220155455A1 (en) Method and system for ground surface projection for autonomous driving

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant