GB2609849A - Map generation method and device, storage medium and processor - Google Patents


Publication number
GB2609849A
GB2609849A
Authority
GB
United Kingdom
Prior art keywords
point cloud
obstacle
traversable
zone
contour
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2216637.5A
Other versions
GB202216637D0 (en)
Inventor
Zeng Xiang
Li Xiang
Liu Mianli
Yuan Qing
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Bozhilin Robot Co Ltd
Original Assignee
Guangdong Bozhilin Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Bozhilin Robot Co Ltd filed Critical Guangdong Bozhilin Robot Co Ltd
Publication of GB202216637D0 publication Critical patent/GB202216637D0/en
Publication of GB2609849A publication Critical patent/GB2609849A/en
Pending legal-status Critical Current

Classifications

    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V20/10 Terrestrial scenes
    • G01S17/42 Simultaneous measurement of distance and other co-ordinates
    • G01S17/89 Lidar systems specially adapted for mapping or imaging
    • G01S17/931 Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S7/4808 Evaluating distance, position or velocity data
    • G06T2210/04 Architectural design, interior design

Abstract

A map generation method and device, a storage medium, and a processor. The method comprises: acquiring a three-dimensional point cloud of a target building (S102); determining an obstacle area and a passable area of the target building according to the three-dimensional point cloud (S104); and generating an indoor map of the target building according to the obstacle area and the passable area (S106). The method addresses the low efficiency in the related art caused by the need to perform mapping before a map can be generated.

Description

MAP GENERATION METHOD AND DEVICE, STORAGE MEDIUM AND
PROCESSOR
The present disclosure claims priority to a Chinese patent application No. 202010318680.7 filed on April 21, 2020 and entitled "map generation method and device, storage medium and processor", the disclosure of which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
The present disclosure relates to the field of map generation and, in particular, a map generation method and device, a storage medium, and a processor.
BACKGROUND
At present, a robot primarily perceives and adapts to its environment by real-time positioning and mapping. That is, the robot acquires a map of the local environment near its current position through a sensor (such as a lidar), marks an occupied zone, an idle zone, and an unexplored zone on the map, then proceeds to the unexplored zone to explore further, and expands the map of the local environment according to the acquired data, thereby completing the cognition and mapping of the whole environment in which the robot is located. However, building construction usually requires multiple robots with different functions to work alternately or cooperatively. In this case, the preceding method of real-time positioning and mapping has a series of problems: (1) Before starting to operate on a scene (such as a floor under construction), each robot needs to traverse the entire floor to acquire the map before it can operate, greatly reducing the operation efficiency. (2) For the same environment (such as the same floor), different robots operating in it may repeat the mapping work many times, resulting in a waste of resources. (3) Limited by factors such as cost, the radar used for mapping generally has centimeter-level or lower precision.
In view of the preceding problems, no effective solution has been proposed yet.
SUMMARY
Embodiments of the present disclosure provide a map generation method and device to at least solve the following technical problem in the related art: mapping is required to be performed before a map is generated, so the efficiency is low.
In an aspect of the embodiments of the present disclosure, a map generation method is provided. The method includes the steps below. A three-dimensional point cloud of a target building is acquired. An obstacle zone and a traversable zone of the target building are determined according to the three-dimensional point cloud. An indoor map of the target building is generated according to the obstacle zone and the traversable zone.
In some embodiments of the present disclosure, the step in which the obstacle zone of the target building is determined according to the three-dimensional point cloud includes the steps below. The three-dimensional point cloud is intercepted by a horizontal plane to acquire point cloud data of an obstacle corresponding to the horizontal plane. The point cloud data of the obstacle is projected on the horizontal plane so as to determine an obstacle contour of the obstacle on the horizontal plane. A corresponding obstacle grid picture is generated according to the obstacle contour to determine the obstacle zone.
In some embodiments of the present disclosure, the step in which the traversable zone of the target building is determined according to the three-dimensional point cloud includes the steps below. Point cloud data of the ground of the target building is determined according to the three-dimensional point cloud. A traversable contour is determined according to the point cloud data of the ground. A corresponding traversable grid picture is generated according to the traversable contour to determine the traversable zone.
In some embodiments of the present disclosure, before the step in which the corresponding obstacle grid picture is generated according to the obstacle contour, or the step in which the corresponding traversable grid picture is generated according to the traversable contour, the method includes the steps below. A new grid picture is established. The grid picture has a resolution of r, and the width iwidth and the height iheight of the grid picture are calculated by the formulas below.
iwidth = ⌈(Xmax − Xmin) / r⌉
iheight = ⌈(Ymax − Ymin) / r⌉
Xmax is the maximum value of the X coordinates in the combined set of the point cloud data of the obstacle contour and the point cloud data of the traversable contour. Xmin is the minimum value of the X coordinates in that combined set. Ymax is the maximum value of the Y coordinates in that combined set. Ymin is the minimum value of the Y coordinates in that combined set.
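As an illustration (not part of the patent text), the grid picture dimensions can be computed as follows; the function name and the interpretation of r as metres per cell are assumptions:

```python
import math

def grid_size(points, r):
    """Width and height (in cells) of a grid picture covering all
    contour points at resolution r. `points` is a list of (x, y)
    tuples drawn from both the obstacle and traversable contours."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    iwidth = math.ceil((max(xs) - min(xs)) / r)
    iheight = math.ceil((max(ys) - min(ys)) / r)
    return iwidth, iheight
```

For example, points spanning 1.0 m by 2.5 m at r = 0.5 m per cell give a 2 × 5 grid.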
In some embodiments of the present disclosure, the step in which the corresponding obstacle grid picture is generated according to the obstacle contour includes the steps below. A quantity of point cloud data of the obstacle contour in each grid of the grid picture is determined according to the obstacle contour. In the case where the quantity in a grid does not exceed a preset quantity, the grid is determined as a first preset gray scale, and in the case where the quantity in a grid exceeds the preset quantity, the grid is determined as a second preset gray scale, so that the obstacle grid picture is generated. Additionally or alternatively, the step in which the corresponding traversable grid picture is generated according to the traversable contour includes the steps below. A quantity of point cloud data of the traversable contour in each grid of the grid picture is determined according to the traversable contour. In the case where the quantity in a grid does not exceed the preset quantity, the grid is determined as the first preset gray scale, and in the case where the quantity in a grid exceeds the preset quantity, the grid is determined as the second preset gray scale, so that the traversable grid picture is generated.
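The counting-and-thresholding step can be sketched as follows (an illustrative sketch only; the gray values 255/0 for the two preset gray scales and the helper's name and parameters are assumptions):

```python
def rasterize(points, r, width, height, origin, preset=0):
    """Count contour points per cell; mark a cell with the second
    gray scale (0 here) when its count exceeds `preset`, and the
    first gray scale (255 here) otherwise."""
    counts = [[0] * width for _ in range(height)]
    ox, oy = origin
    for x, y in points:
        col = min(int((x - ox) / r), width - 1)
        row = min(int((y - oy) / r), height - 1)
        counts[row][col] += 1
    return [[0 if c > preset else 255 for c in row] for row in counts]
```

With a preset quantity of 0, any cell containing at least one contour point is rendered dark, so the contour becomes visible in the grid picture.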
In some embodiments of the present disclosure, the step in which the point cloud data of the ground of the target building is determined according to the three-dimensional point cloud includes the steps below. A Z coordinate in the vertical direction of each point in the three-dimensional point cloud is calculated to determine a distribution of the Z coordinates of the point cloud data in the three-dimensional point cloud. A Z coordinate value of the point cloud data of the ground is determined according to the distribution. A plurality of point cloud data satisfying the Z coordinate value in the three-dimensional point cloud is determined as the point cloud data of the ground.
In some embodiments of the present disclosure, the step in which the Z coordinate value of the point cloud data of the ground is determined according to the distribution includes the steps below. According to the maximum value and the minimum value of the Z coordinate of all point cloud data in the three-dimensional point cloud, and a preset interval length, the number of intervals of a distribution histogram and a set of point cloud data corresponding to each interval are determined to generate the distribution histogram. A lower quantile of the distribution histogram and a subscript of the interval in which the lower quantile is located are determined. A subscript of any interval of the intervals between a first peak value and a second peak value of the distribution histogram is determined according to the subscript of the interval in which the lower quantile is located. A subscript corresponding to the first peak value and a subscript corresponding to the second peak value are determined according to the subscript of that interval. A first Z coordinate value corresponding to the subscript of the first peak value and a second Z coordinate value corresponding to the subscript of the second peak value are compared. The smaller of the first Z coordinate value and the second Z coordinate value is taken as the Z coordinate value of the ground.
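The intuition is that the floor and the ceiling each contribute a dense peak to the Z histogram, and the ground is the lower of the two. A simplified sketch (not the patent's exact procedure: it takes the two most populated bins directly rather than walking outward from the lower quantile; the function name and bin length are assumptions):

```python
import numpy as np

def ground_z(z_values, bin_len=0.1):
    """Estimate the ground height from a histogram of Z coordinates:
    the two dominant bins are treated as the floor/ceiling peaks,
    and the centre of the lower one is returned."""
    z = np.asarray(z_values, dtype=float)
    nbins = max(1, int(np.ceil((z.max() - z.min()) / bin_len)))
    hist, edges = np.histogram(z, bins=nbins)
    top2 = np.argsort(hist)[-2:]   # indices of the two tallest bins
    lower = min(top2)              # the lower-Z peak is the ground
    return 0.5 * (edges[lower] + edges[lower + 1])
```

A cloud dominated by points near z = 0 (floor) and z = 3 (ceiling) yields a ground estimate inside the lowest bin.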
In some embodiments of the present disclosure, the step in which the indoor map of the target building is generated according to the obstacle zone and the traversable zone includes the steps below. A door grid picture of a door contour in the traversable contour of the target building is determined according to the obstacle grid picture of the obstacle contour and the traversable grid picture of the traversable contour. Enclosed contours of the target building are determined according to the obstacle grid picture and the door grid picture. A boundary line of an indoor obstacle of the target building and an indoor and outdoor boundary line of the target building are determined according to the enclosed contours. An enclosed zone of the boundary line of the indoor obstacle, and an enclosed zone of the indoor and outdoor boundary line are marked separately to obtain the indoor map of the target building.
In some embodiments of the present disclosure, the step in which the door grid picture of the door contour of the target building is determined according to the obstacle grid picture of the obstacle contour, and the traversable grid picture of the traversable contour includes the steps below. The obstacle grid picture is processed to connect pixels of grids in the obstacle grid picture, and the traversable grid picture is processed to connect pixels of grids in the traversable grid picture. The obstacle grid picture after pixel connection is subtracted from the traversable grid picture after pixel connection to obtain the door grid picture of the door contour of the target building. The step in which the enclosed contours of the target building are determined according to the obstacle grid picture, the traversable grid picture, and the door grid picture includes the step below. The door grid picture, the obstacle grid picture, and the traversable grid picture are combined to generate the enclosed contours of the target building.
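The connect-then-subtract idea can be sketched with binary arrays (an illustrative sketch; a real pipeline would likely use a morphological operation such as OpenCV's dilate for the pixel-connection step, and the function names here are assumptions):

```python
import numpy as np

def dilate(img, it=1):
    """4-neighbour binary dilation: a crude stand-in for the
    pixel-connection step applied to each grid picture."""
    out = img.copy()
    for _ in range(it):
        p = np.pad(out, 1)
        out = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
               | p[1:-1, :-2] | p[1:-1, 2:]).astype(img.dtype)
    return out

def door_mask(traversable, obstacle, it=1):
    """Pixels on the connected traversable contour that are absent
    from the connected obstacle contour -- candidate door openings."""
    return np.clip(dilate(traversable, it).astype(int)
                   - dilate(obstacle, it).astype(int), 0, 1)
```

Where the obstacle contour has a gap (a doorway) but the traversable contour is continuous, the subtraction leaves nonzero pixels marking the door.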
In some embodiments of the present disclosure, the step in which the boundary line of the indoor obstacle and the indoor and outdoor boundary line of the target building are determined according to the enclosed contours includes the steps below. The enclosed contour of the enclosed zone having the largest area among the enclosed contours of the target building is taken as the indoor and outdoor boundary line. Each enclosed contour of the target building other than the indoor and outdoor boundary line is taken as a boundary line of the indoor obstacle.
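The largest-area selection can be sketched with the shoelace formula (an illustrative sketch; the contour representation as (x, y) vertex lists and the function names are assumptions):

```python
def polygon_area(contour):
    """Absolute area of a closed contour given as (x, y) vertices
    (shoelace formula)."""
    n = len(contour)
    s = 0.0
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def split_boundaries(contours):
    """Largest-area contour -> indoor/outdoor boundary line;
    all remaining contours -> indoor obstacle boundary lines."""
    outer = max(contours, key=polygon_area)
    return outer, [c for c in contours if c is not outer]
```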
In some embodiments of the present disclosure, the step in which the enclosed zone of the boundary line of the indoor obstacle, and the enclosed zone of the indoor and outdoor boundary line are marked separately to obtain the indoor map of the target building includes the steps below. The interior of the enclosed zone of the boundary line of the indoor obstacle is determined as the obstacle zone. The obstacle zone is filled and marked by a first mark. The exterior of the enclosed zone of the indoor and outdoor boundary line is determined as an unexplored zone. The unexplored zone is filled and marked by a second mark. A zone in the interior of the enclosed zone of the indoor and outdoor boundary line except the interior of the enclosed zone of the boundary line of the indoor obstacle is determined as the traversable zone.
The traversable zone is filled and marked by a third mark. The obstacle zone marked by the first mark, the unexplored zone marked by the second mark, and the traversable zone marked by the third mark, and the enclosed contours constitute the indoor map of the target building.
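The three-way marking can be sketched per point with an even-odd point-in-polygon test (an illustrative sketch; the mark values 1/−1/0 for the first, second, and third marks are assumptions, as are the function names):

```python
def point_in_poly(x, y, poly):
    """Even-odd ray-casting point-in-polygon test."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

OBSTACLE, UNEXPLORED, FREE = 1, -1, 0  # illustrative mark values

def mark(x, y, outer, obstacles):
    """Outside the indoor/outdoor boundary -> unexplored; inside an
    obstacle boundary -> obstacle; otherwise -> traversable."""
    if not point_in_poly(x, y, outer):
        return UNEXPLORED
    if any(point_in_poly(x, y, o) for o in obstacles):
        return OBSTACLE
    return FREE
```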
In some embodiments of the present disclosure, before the step in which the obstacle zone and the traversable zone of the target building are determined according to the three-dimensional point cloud, the method includes the steps below. A Z coordinate axis in a vertical direction of the three-dimensional point cloud is calibrated so that the positive direction of the Z coordinate axis of the three-dimensional point cloud is vertically upward. The three-dimensional point cloud is intercepted by a horizontal plane to determine point cloud data of a projection of a wall body of the target building corresponding to the three-dimensional point cloud on the horizontal plane. A straight line corresponding to any wall body is determined according to the point cloud data of the projection. A coordinate of the three-dimensional point cloud is rotated and adjusted according to a rotation angle of the straight line so that a horizontal axis of the target building represented by the three-dimensional point cloud is parallel to or perpendicular to a horizontal coordinate axis of the three-dimensional point cloud. The rotation angle is the included angle between the straight line and the horizontal coordinate axis of the three-dimensional point cloud.
The horizontal coordinate axis is an X coordinate axis or a Y coordinate axis. The X coordinate axis and the Y coordinate axis are perpendicular to each other.
In some embodiments of the present disclosure, the step in which the straight line corresponding to any wall body is determined according to the point cloud data of the projection includes the step below. The straight line and the direction vector of the straight line are determined according to the point cloud data of the projection. The step in which the coordinate of the three-dimensional point cloud is rotated and adjusted according to the rotation angle of the straight line includes the steps below. The included angle between the direction vector and the positive direction of the X coordinate axis or the positive direction of the Y coordinate axis is determined as the rotation angle. The three-dimensional point cloud is rotated and adjusted according to the rotation angle.
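The rotation step can be sketched in two dimensions (an illustrative sketch; the function name and the choice of aligning the wall direction with the X axis are assumptions):

```python
import math

def align_to_axis(points, wall_dir):
    """Rotate XY points so that the wall direction vector `wall_dir`
    (dx, dy) becomes parallel to the X axis, i.e. rotate all points
    by the negative of the wall's angle with the X axis."""
    angle = math.atan2(wall_dir[1], wall_dir[0])
    c, s = math.cos(-angle), math.sin(-angle)
    return [(c * x - s * y, s * x + c * y) for x, y in points]
```

For a wall running along the diagonal (1, 1), a point at (1, 1) lands on the X axis at distance √2 after alignment.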
In another aspect of the embodiments of the present disclosure, a map generation device is also provided. The device includes a first determination module, a second determination module, and a generation module. The first determination module is configured to acquire a three-dimensional point cloud of a target building. The second determination module is configured to determine an obstacle zone and a traversable zone of the target building according to the three-dimensional point cloud. The generation module is configured to generate an indoor map of the target building according to the obstacle zone and the traversable zone.
In another aspect of the embodiments of the present disclosure, a storage medium is also provided. The storage medium is characterized in that: the storage medium includes a program stored in the storage medium. When the program is executed, a device in which the storage medium is located is controlled to perform any one of the methods described above.
In another aspect of the embodiments of the present disclosure, a processor is also provided.
The processor is characterized in that: the processor is configured to execute a program. When the program is executed, the processor performs the method of any one of the embodiments described above.
In the embodiments of the present disclosure, the three-dimensional point cloud of the target building is acquired. The obstacle zone and the traversable zone of the target building are determined according to the three-dimensional point cloud. The map is generated according to the obstacle zone and the traversable zone. In this manner, the obstacle zone and the traversable zone of the target building are directly determined from the three-dimensional point cloud, and the indoor map of the target building is thus generated, so that the robot does not need to map autonomously and can directly use the indoor map generated from the three-dimensional point cloud. With this configuration, the map generation efficiency is improved, and the following technical problem in the related art is solved: mapping is required to be performed by the robot before a map is generated, so the efficiency is low.
BRIEF DESCRIPTION OF DRAWINGS
The drawings described herein are used to provide a further understanding of the present disclosure, and form a part of the present application. The example embodiments and descriptions thereof in the present disclosure are used to explain the present disclosure and do not limit the present disclosure in any improper way. In the drawings:
FIG. 1 is a flowchart of a map generation method according to an embodiment of the present disclosure.
FIG. 2 is a flowchart of a map generation method according to an embodiment of the present disclosure.
FIG. 3 is a flowchart of a method for generating a map according to an obstacle zone and a traversable zone according to an embodiment of the present disclosure.
FIG. 4-1 is a schematic view of an obstacle grid picture according to an embodiment of the present disclosure.
FIG. 4-2 is a schematic view of a traversable grid picture according to an embodiment of the present disclosure.
FIG. 5-1 is a schematic view of an obstacle grid picture after pixel connection according to an embodiment of the present disclosure.
FIG. 5-2 is a schematic view of a traversable grid picture after pixel connection according to an embodiment of the present disclosure.
FIG. 6 is a schematic view of a door grid picture according to an embodiment of the present disclosure.
FIG. 7 is a schematic view of enclosed contours according to an embodiment of the present disclosure.
FIG. 8-1 is a schematic view of an indoor and outdoor boundary line according to an embodiment of the present disclosure.
FIG. 8-2 is a schematic view of a boundary line of an indoor obstacle according to an embodiment of the present disclosure.
FIG. 9 is a schematic view of a map according to an embodiment of the present disclosure.
FIG. 10 is a schematic diagram of a map generation device according to an embodiment of the present disclosure.
DETAILED DESCRIPTION
For a better understanding of the solutions by those skilled in the art, the solutions in embodiments of the present disclosure will be described clearly and completely in conjunction with the drawings in the embodiments of the present disclosure. Apparently, the embodiments below are part, not all, of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative work are within the scope of the present disclosure.
It is to be noted that the terms "first" and "second" in the description, claims, and preceding drawings of the present disclosure are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the data used in this way is interchangeable when appropriate so that embodiments of the present disclosure described herein can also be implemented in a sequence not illustrated or described herein. In addition, the terms "comprising", "including", or any other variations thereof herein are intended to encompass a non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or elements not only includes the expressly listed steps or elements but may also include other steps or elements that are not expressly listed or are inherent to such process, method, system, product, or device.
According to an embodiment of the present disclosure, a map generation method is provided. It is to be noted that the steps illustrated in the flowcharts in the drawings may be performed by a computer system such as a group of computers capable of executing instructions. Moreover, although logical sequences are illustrated in the flowcharts, the illustrated or described steps may be performed in sequences different from those described herein in some cases.
As shown in FIG. 1, FIG. 1 is a flowchart of a map generation method according to an embodiment of the present disclosure. The method includes the steps below.
In step S102, a three-dimensional point cloud of a target building is acquired.
In step S104, an obstacle zone and a traversable zone of the target building are determined according to the three-dimensional point cloud.
In step S106, an indoor map of the target building is generated according to the obstacle zone and the traversable zone.
Through these steps, the three-dimensional point cloud of the target building is acquired. The obstacle zone and the traversable zone of the target building are determined according to the three-dimensional point cloud. The map is generated according to the obstacle zone and the traversable zone.
In this manner, the obstacle zone and the traversable zone of the target building are directly determined from the three-dimensional point cloud, and the indoor map of the target building is thus generated; the robot therefore does not need to map autonomously, and the indoor map directly generated from the three-dimensional point cloud is available for the robot to use. With this configuration, the technical effect of improving the map generation efficiency is achieved, and the following technical problem in the related art is solved: mapping is required to be performed by the robot before a map is generated, so the efficiency is low.
Terrestrial three-dimensional laser scanning technology continues to develop, and the scanning speed and precision have been greatly improved, so a laser scanner can quickly acquire high-precision (millimeter-level) point cloud data of an entire floor or even an entire building. In this manner, the three-dimensional point cloud data is acquired. The three-dimensional point cloud data has various purposes, such as construction progress tracking. If the three-dimensional point cloud of a scene is acquired before a construction robot enters the scene and operates, the three-dimensional point cloud may also be used to produce a high-precision map of the scene.
The three-dimensional coordinate system is taken as a reference: points of the target building are collected, and the three-dimensional coordinates corresponding to the point cloud data in the three-dimensional coordinate system are determined so as to form the three-dimensional point cloud. The three-dimensional coordinate system includes an X coordinate axis and a Y coordinate axis that are located in a horizontal plane and perpendicular to each other, and a Z coordinate axis in the vertical direction.
The obstacle zone and the traversable zone are determined according to the three-dimensional point cloud, and the map is generated according to the obstacle zone and the traversable zone so that the map of the target building is generated according to the three-dimensional point cloud for other devices to use, thereby solving the following problem in the related art: mapping is required to be performed before the map is generated so that the efficiency is low.
The obstacle zone may be an obstacle zone within a certain height range, so that when the map is used, the robot or other device can verify the obstacle zone with a detection apparatus at this height; the map is thus more practical and, for the device using it, more precise.
In some embodiments of the present disclosure, the step in which the obstacle zone of the target building is determined according to the three-dimensional point cloud includes the steps below. The three-dimensional point cloud is intercepted by the horizontal plane to acquire point cloud data of an obstacle corresponding to the horizontal plane. The point cloud data of the obstacle is projected on the horizontal plane to determine an obstacle contour of the obstacle on the horizontal plane. A corresponding obstacle grid picture is generated according to the obstacle contour to determine the obstacle zone.
The horizontal plane may be a horizontal plane of a preset height. The preset height may be the detection height of the detection apparatus of the device using the map, such as the setting height of an infrared detector of the robot.
The horizontal plane may include a certain height tolerance. The three-dimensional point cloud within the range of the height tolerance is acquired by the horizontal plane. The three-dimensional point cloud corresponds to the target building. The three-dimensional point cloud is intercepted by the horizontal plane to obtain an intersection line of a wall body of the target building or another construction obstacle on the horizontal plane. The intersection line is projected on the horizontal plane so that the contour of the obstacle can be determined.
In some embodiments of the present disclosure, the height of the horizontal plane from the ground is s, and the height tolerance is δ, so a point set C_obst at a height of s ± δ/2 from the ground is
C_obst = { c_i | z_floor + s − δ/2 ≤ z_ci ≤ z_floor + s + δ/2, i ∈ [1, N] }
C_obst is projected on the horizontal plane Z = z_floor + s to obtain an intersection line point set C'_obst including the contours of all obstacles of the target building at the height of the horizontal plane. The contour of the obstacle still exists in the form of point cloud data and is not intuitive enough. Therefore, the corresponding obstacle grid picture is generated from the obstacle contour so that the obstacle contour can be displayed on the grid picture, facilitating generating the map according to the grid picture, thereby improving the map generation efficiency.
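The slice-and-project step can be sketched as follows (an illustrative sketch, not part of the patent; the tuple-based point representation and parameter names are assumptions):

```python
def obstacle_slice(points, z_floor, s, delta):
    """Select points whose height above the ground lies within
    s +/- delta/2, then project them to the horizontal plane by
    dropping the Z coordinate."""
    lo = z_floor + s - delta / 2
    hi = z_floor + s + delta / 2
    return [(x, y) for x, y, z in points if lo <= z <= hi]
```

For example, with the ground at z = 0, a slice height of 0.5 m, and a tolerance of 0.2 m, only points with z between 0.4 and 0.6 survive.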
In some embodiments of the present disclosure, the step in which the traversable zone of the target building is determined according to the three-dimensional point cloud includes the steps below. Point cloud data of the ground of the target building is determined according to the three-dimensional point cloud. A traversable contour is determined according to the point cloud data of the ground. A corresponding traversable grid picture is generated according to the traversable contour to determine the traversable zone.
The traversable zone may be a traversable zone inside the target building. The traversable contour may be a contour of a point cloud of the ground of the target building. The traversable zone can be determined according to the traversable contour. The traversable zone may be a traversable zone of the target building on the ground and determined based on the ground. Therefore, a ground point set can be determined first, and the contour of the traversable zone can be determined from the ground point set. Specifically, a point set Cfloor0 having a height of ±δ/2 from the ground is acquired:

Cfloor0 = {ci | zci ∈ [zfloor − δ/2, zfloor + δ/2], i ∈ [1, N]}

According to a random sample consensus (RANSAC) algorithm, the ground point set Cfloor is determined from the point set Cfloor0. The traversable contour of the traversable zone is obtained from Cfloor by a point cloud boundary extraction algorithm.
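For illustration only, the RANSAC ground extraction mentioned above may be sketched in Python as below. This is a minimal toy RANSAC, not the patented implementation; the function name, iteration count, and inlier threshold are illustrative assumptions.

```python
# Illustrative sketch: minimal RANSAC plane fit used to pick ground points
# out of the near-ground slice C_floor0.
import numpy as np

def ransac_ground(points, n_iter=100, thresh=0.02, seed=0):
    """Fit a plane z = a*x + b*y + c by RANSAC; return the inlier points."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        A = np.c_[sample[:, :2], np.ones(3)]
        try:
            a, b, c = np.linalg.solve(A, sample[:, 2])  # plane through sample
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) sample
        resid = np.abs(points[:, 0] * a + points[:, 1] * b + c - points[:, 2])
        inliers = resid < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return points[best]

# Example: a 4 x 5 grid on the ground plane z = 0 plus three off-plane points.
ground = np.array([[x, y, 0.0] for x in range(4) for y in range(5)])
noise = np.array([[0.0, 0.0, 1.0], [1.0, 1.0, 1.0], [2.0, 2.0, 1.0]])
inlier_pts = ransac_ground(np.vstack([ground, noise]))
```

The boundary extraction step that follows (obtaining the traversable contour from the inlier set) would be a separate algorithm and is not shown here.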
The traversable contour still exists in the form of point cloud data and is not intuitive enough. Therefore, the corresponding traversable grid picture is generated by the traversable contour so that the traversable contour can be displayed on the grid picture, facilitating generating the map according to the grid picture, thereby improving the map generation efficiency.
In some of the embodiments of the present disclosure, before the step in which the corresponding obstacle grid picture is generated according to the obstacle contour, or the step in which the corresponding traversable grid picture is generated according to the traversable contour, the method includes the step below. A new grid picture is established. The grid picture has a resolution of r, and the width iwidth and the height iheight of the grid picture are calculated by the formulas below.

iwidth = ⌈(xmax − xmin) / r⌉

iheight = ⌈(ymax − ymin) / r⌉

xmax is the maximum value of X coordinates in an intersection set of the point cloud data of the obstacle contour and the point cloud data of the traversable contour. xmin is the minimum value of the X coordinates in the intersection set of the point cloud data of the obstacle contour and the point cloud data of the traversable contour. ymax is the maximum value of Y coordinates in the intersection set of the point cloud data of the obstacle contour and the point cloud data of the traversable contour. ymin is the minimum value of the Y coordinates in the intersection set of the point cloud data of the obstacle contour and the point cloud data of the traversable contour.
For the step in which the obstacle grid picture is generated according to the obstacle contour, and the step in which the traversable grid picture is generated according to the traversable contour, the new grid picture needs to be established first. The point cloud data of the obstacle contour is mapped onto the grid picture so as to generate the obstacle grid picture. The point cloud data of the traversable contour is mapped onto the grid picture so as to generate the traversable grid picture.
The resolution r corresponds to the grids in the grid picture, and one grid corresponds to one pixel, thereby associating the coordinate values of the point cloud data with the resolution of the grid picture. In this manner, the width of the grid picture can be determined by the ratio of the difference between the maximum value and the minimum value of the X coordinates to the resolution, and the height of the grid picture can be determined by the ratio of the difference between the maximum value and the minimum value of the Y coordinates to the resolution, thereby effectively determining the size of the obstacle grid picture and the size of the traversable grid picture, and ensuring that the established grid picture can completely accommodate the obstacle contour and/or the traversable contour.
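For illustration only, the grid-size calculation above may be sketched as below; the function name and sample coordinates are illustrative assumptions.

```python
# Illustrative sketch: width and height (in pixels) of a grid picture with
# resolution r that fully covers the given XY coordinates (ceil of extent/r).
import math

def grid_size(points_xy, r):
    xs = [p[0] for p in points_xy]
    ys = [p[1] for p in points_xy]
    i_width = math.ceil((max(xs) - min(xs)) / r)
    i_height = math.ceil((max(ys) - min(ys)) / r)
    return i_width, i_height

# Example: a 2.4 m x 1.2 m extent at 0.5 m/pixel resolution.
size = grid_size([(0.0, 0.0), (2.4, 1.2)], 0.5)
```

Rounding up guarantees the grid picture completely accommodates the contour point sets.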
In some embodiments of the present disclosure, the step in which the corresponding obstacle grid picture is generated according to the obstacle contour includes the steps below. A quantity of point cloud data of the obstacle contour in each grid of the grid picture is determined according to the obstacle contour. In the case where the quantity in a grid does not exceed a preset quantity, the grid is determined as a first preset gray scale, and in the case where the quantity in a grid exceeds the preset quantity, the grid is determined as a second preset gray scale, so that the obstacle grid picture is generated. Additionally or alternatively, the step in which the corresponding traversable grid picture is generated according to the traversable contour includes the steps below. A quantity of point cloud data of the traversable contour in each grid of the grid picture is determined according to the traversable contour. In the case where the quantity in a grid does not exceed the preset quantity, the grid is determined as a first preset gray scale, and in the case where the quantity in a grid exceeds the preset quantity, the grid is determined as a second preset gray scale, so that the traversable grid picture is generated.
After the new grid picture is established, the point cloud data of the obstacle contour is mapped onto the grid picture, and grids in which the quantity of point cloud data exceeds the preset quantity are displayed in the second preset gray scale so as to be distinguished from other grids displayed in the first preset gray scale. Generally, grids containing no point cloud data, or grids in which the quantity of point cloud data does not exceed the preset quantity, are displayed in a gray scale value of 0. That is, the first preset gray scale may be 0, that is, black. The second preset gray scale may be any value. In this embodiment, in order to show the difference, a relatively high gray scale of 255 is used, that is, white, which forms a sharp contrast with the black of the other grids and gives the user an intuitive view.
The display color of the traversable contour is similar to that of the obstacle contour described above and will not be repeated herein.
In some embodiments of the present disclosure, the contour grid picture of the obstacle zone is initialized to a gray scale image of iwidth × iheight pixels having a gray scale value of 0. The gray scale value gij of any pixel in an ith row and jth column (i ∈ [1, iwidth], j ∈ [1, iheight]) is calculated by the formula below.

gij = { 0,   mij < nthreshold
      { 255, mij ≥ nthreshold

In the formula, mij is the number of points in the point set C′obst falling within a circular search zone which takes the point cij corresponding to the pixel center point as the center of the circle, and takes the given positive number radius as the search radius. The neighbor point search can be achieved by a common method such as constructing a kdTree. nthreshold is a relatively small positive integer. When the number of points in the search zone is not less than nthreshold, the pixel gray value may be set to 255.
The x coordinate and the y coordinate of the point cij are obtained by the formulas below.

xcij = xmin + (i − 1/2) · r

ycij = ymin + (j − 1/2) · r

For the contour grid picture of the traversable zone, the same process as above is used for obtaining the gray scale picture, but the target point set for searching is C′floor instead of C′obst.
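For illustration only, the rasterization described above may be sketched as below. A brute-force radius count stands in for the kdTree neighbor search (which would be preferable for large clouds); the function name and sample values are illustrative assumptions.

```python
# Illustrative sketch: a pixel becomes 255 when at least n_threshold contour
# points fall inside a circle of the given radius around its centre
# (x_min + (i - 1/2) r, y_min + (j - 1/2) r). A kdTree would replace the
# brute-force search for large point clouds.
import numpy as np

def rasterize_contour(points_xy, x_min, y_min, i_width, i_height, r,
                      radius, n_threshold=1):
    img = np.zeros((i_height, i_width), dtype=np.uint8)
    pts = np.asarray(points_xy)
    for i in range(1, i_width + 1):
        for j in range(1, i_height + 1):
            cx = x_min + (i - 0.5) * r  # pixel-centre x coordinate
            cy = y_min + (j - 0.5) * r  # pixel-centre y coordinate
            m = np.sum((pts[:, 0] - cx) ** 2
                       + (pts[:, 1] - cy) ** 2 <= radius ** 2)
            if m >= n_threshold:
                img[j - 1, i - 1] = 255
    return img

# Example: one contour point at (0.25, 0.25) lights exactly one pixel
# of a 2 x 2 grid with r = 0.5 and search radius 0.1.
img = rasterize_contour([(0.25, 0.25)], x_min=0.0, y_min=0.0,
                        i_width=2, i_height=2, r=0.5, radius=0.1)
```

The same routine would be run once with C′obst and once with C′floor to obtain the two contour grid pictures.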
In some embodiments of the present disclosure, the step in which the point cloud data of the ground of the target building is determined according to the three-dimensional point cloud includes the steps below. A Z coordinate in a vertical direction of each point in the three-dimensional point cloud is calculated to determine a distribution of the Z coordinates of the point cloud data in the three-dimensional point cloud. A Z coordinate value of the point cloud data of the ground is determined according to the distribution. A plurality of point cloud data satisfying the Z coordinate value in the three-dimensional point cloud is determined as the point cloud data of the ground.
The ceiling of the target building can be identified while the ground of the target building is identified in the three-dimensional point cloud. Specifically, the ceiling and the ground of the target building in the three-dimensional point cloud can be determined by the Z coordinate in the three-dimensional point cloud. The Z coordinate in the vertical direction of each point in the three-dimensional point cloud is calculated to determine the ceiling and the ground of the target building in the three-dimensional point cloud. Z coordinate values of points of the ceiling and the ground are determined according to the distribution. Points satisfying the range of the Z coordinates are determined as the points of the ceiling or the ground.
The distribution of the Z coordinate of the point cloud is calculated to estimate the Z coordinate values of the ceiling and the ground. Here, it is assumed that the Z coordinate axis of the input point cloud has been calibrated to the vertical direction, and the positive direction is vertically upward. According to the indoor point cloud characteristics of the building, it is known that when the indoor ground and ceiling are substantially horizontal, the point cloud density has two peak values in the height direction. One peak value is located near the ground and the other peak value is located near the ceiling. According to this assumption, the Z coordinate values of the ceiling and the ground can be obtained by obtaining the peak value of the point cloud density in the height direction, thereby determining the points denoting the ceiling and the ground in the three-dimensional point cloud.
In some embodiments of the present disclosure, the step in which the Z coordinate value of the point cloud data of the ground is determined according to the distribution includes the steps below. According to the maximum value and the minimum value of a Z coordinate of all point cloud data in the three-dimensional point cloud and a preset interval length, the number of intervals of a distribution histogram and a set of point cloud data corresponding to each interval are determined to generate the distribution histogram. A lower quantile of the distribution histogram, and a subscript of an interval in which the lower quantile is located are determined. A subscript of any interval of the intervals between a first peak value and a second peak value of the distribution histogram is determined according to the subscript of the interval in which the lower quantile is located. A subscript corresponding to the first peak value and a subscript corresponding to the second peak value are determined according to the subscript of that interval. A first Z coordinate value corresponding to the subscript of the first peak value and a second Z coordinate value corresponding to the subscript of the second peak value are compared. A smaller Z coordinate value of the first Z coordinate value and the second Z coordinate value is taken as a Z coordinate value of the ground.
In the step in which, according to the maximum value and the minimum value of the Z coordinates of all point cloud data in the three-dimensional point cloud and the preset interval length, the number of intervals of the distribution histogram and the set of point cloud data corresponding to each interval are determined to generate the distribution histogram, the length of each interval of the distribution histogram is d, and the number of intervals of the distribution histogram is

n = ⌈(zmax − zmin) / d⌉

In the formula, zmin denotes the minimum value of the Z coordinates of all the points in the three-dimensional point cloud. zmax denotes the maximum value of the Z coordinates of all the points in the three-dimensional point cloud. The symbol ⌈ ⌉ denotes rounding up. The set of points included in any interval j is

Hzj = {ci | zci ∈ [zmin + (j − 1)d, zmin + jd), i ∈ [1, N]}, j ∈ [1, n]

The number of points included in the interval j is hzj = |Hzj|. In the formula, the symbol | | denotes the number of elements in the set.

The subscript of any interval of the intervals between the first peak value and the second peak value of the distribution histogram is determined according to the subscript of the interval in which the lower quantile is located. The subscript corresponding to the first peak value and the subscript corresponding to the second peak value are determined according to the subscript of that interval. The first Z coordinate value corresponding to the subscript of the first peak value and the second Z coordinate value corresponding to the subscript of the second peak value are compared. The smaller Z coordinate value of the first Z coordinate value and the second Z coordinate value is taken as the Z coordinate value of the ground.
The lower α quantile of the distribution of the Z coordinates of the three-dimensional point cloud is denoted as zα, α ∈ (0, 1). One point ci is randomly selected in the three-dimensional point cloud. ci satisfies the formula below.

P(zci ≤ zα) = α

The subscript of the interval of the distribution histogram in which zα is located is set to be indexα. indexα may be determined by the formula below.

indexα = max{ k | Σ_{j=1}^{k} hzj < αN } + 1

The histogram has two peak values. The two peak values include the first peak value and the second peak value. The subscript of a certain interval between the two peak values of the histogram may be estimated by the formula below.

indexmiddle = ⌊(indexα + n) / 2⌋

Then, the subscripts corresponding to the two peak values of the histogram are:

indexfloor = argmax_{j ∈ [1, indexmiddle)} hzj

indexceiling = argmax_{j ∈ [indexmiddle, n]} hzj

In the formulas, hzj denotes the number of points included in the interval j of the distribution histogram, and n denotes the number of intervals of the distribution histogram. The Z coordinate value Zfloor of the ground, and the Z coordinate value Zceiling of the ceiling are calculated by the formulas below.

Zfloor = zmin + (indexfloor − 1/2) · d

Zceiling = zmin + (indexceiling − 1/2) · d

In the formulas, zmin denotes the minimum value of the Z coordinates of all the points in the three-dimensional point cloud, and d denotes the length of each interval of the distribution histogram.
α may be a relatively small positive number, such as 0.05.
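For illustration only, the histogram-peak estimation of the ground and ceiling heights may be sketched as below; the function name, bin width, and sample data are illustrative assumptions, and the indexing conventions approximate the formulas above.

```python
# Illustrative sketch: estimate floor and ceiling heights as the two density
# peaks of the Z-coordinate histogram (bin width d), split at the midpoint
# between the lower alpha-quantile bin and the last bin (1-based indices).
import numpy as np

def floor_ceiling_z(z, d=0.05, alpha=0.05):
    z = np.asarray(z, dtype=float)
    z_min = z.min()
    n = int(np.ceil((z.max() - z_min) / d))
    h, _ = np.histogram(z, bins=n, range=(z_min, z_min + n * d))
    csum = np.cumsum(h)
    index_alpha = int(np.searchsorted(csum, alpha * len(z))) + 1
    index_middle = (index_alpha + n) // 2
    index_floor = 1 + int(np.argmax(h[:index_middle - 1]))
    index_ceiling = index_middle + int(np.argmax(h[index_middle - 1:]))
    z_floor = z_min + (index_floor - 0.5) * d
    z_ceiling = z_min + (index_ceiling - 0.5) * d
    return z_floor, z_ceiling

# Example: dense clusters near z = 0.01 (floor) and z = 2.79 (ceiling),
# plus a few mid-height points.
zs = [0.01] * 100 + [1.4] * 5 + [2.79] * 100
z_floor, z_ceiling = floor_ceiling_z(zs)
```

The two returned values approximate the true floor and ceiling heights to within one bin width d.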
In some embodiments of the present disclosure, the step in which the indoor map of the target building is generated according to the obstacle zone and the traversable zone includes the steps below. A door grid picture of a door contour in the traversable contour of the target building is determined according to the obstacle grid picture of the obstacle contour and the traversable grid picture of the traversable contour. Enclosed contours of the target building are determined according to the obstacle grid picture, and the door grid picture. A boundary line of an indoor obstacle of the target building, and an indoor and outdoor boundary line of the target building are determined according to the enclosed contours. An enclosed zone of the boundary line of the indoor obstacle and an enclosed zone of the indoor and outdoor boundary line are marked separately to obtain the indoor map of the target building.
The obstacle contour may include a wall body of the target building at a preset height and a contour of another obstacle. The traversable zone contour includes a traversable zone contour of the ground. A door contour of the obstacle can be determined according to the obstacle contour and the traversable zone contour. The door contour may be the door contour in the traversable zone, i.e., a contour of a door on the building wall, which may cause the obstacle contour of the obstacle zone to break off. In this embodiment, the target building may be a building whose main body has just been built, and a map is necessary for the robot to operate. By using the map, the door on the wall in the main body of the building is traversable. When the obstacle contour of the obstacle zone is formed, the contour of the wall where a door is disposed is configured to be connected to the contour of the outer wall of the building so that the obstacle contour of the obstacle zone does not break, thereby generating the enclosed contour of the target building by overlaying the obstacle contour and the door contour. The specific manner is that the door grid picture is determined through the obstacle grid picture and the traversable grid picture so that a grid picture of the enclosed contour of the target building is determined according to the obstacle grid picture and the door grid picture. The boundary line of the indoor obstacle of the target building, and the indoor and outdoor boundary line of the target building are determined according to the enclosed contour so that the indoor map of the target building is generated.
The indoor and outdoor boundary line may be a boundary line between the indoor and the outdoor of the target building, such as enclosed contours composed of walls, doors, and windows. The boundary line of the indoor obstacle of the target building may be a boundary line of an enclosed obstacle that is not in contact with the indoor and outdoor boundary line within the target building, such as a boundary line of an indoor freestanding pillar. The obstacle in contact with the indoor and outdoor boundary line forms part of the indoor and outdoor boundary line. For example, a pillar disposed inside the wall is shown indoors as an external corner, causing the indoor and outdoor boundary line to form an external corner. The boundary line of the pillar intersects the indoor and outdoor boundary line, that is, forms part of the indoor and outdoor boundary line.
In some embodiments of the present disclosure, the step in which the door grid picture of the door contour of the target building is determined according to the obstacle grid picture of the obstacle contour and the traversable grid picture of the traversable contour includes the steps below. The obstacle grid picture is processed to connect pixels of grids in the obstacle grid picture, and the traversable grid picture is processed to connect pixels of grids in the traversable grid picture. The obstacle grid picture after pixel connection is subtracted from the traversable grid picture after pixel connection to obtain the door grid picture of the door contour of the target building. The step in which the enclosed contours of the target building are determined according to the obstacle grid picture, the traversable grid picture, and the door grid picture includes the step below. The door grid picture, the obstacle grid picture, and the traversable grid picture are combined to generate the enclosed contours of the target building.
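For illustration only, the grid-picture subtraction above may be sketched as below; it assumes the two inputs are already pixel-connected 0/255 images, and the function name is an illustrative assumption.

```python
# Illustrative sketch: pixels set in the (closed) traversable picture but
# absent from the (closed) obstacle picture are taken as door-contour pixels.
import numpy as np

def door_grid(traversable, obstacle):
    return np.where((traversable == 255) & (obstacle == 0),
                    255, 0).astype(np.uint8)

# Example: a 2 x 2 toy case with one shared pixel removed by the subtraction.
trav = np.array([[255, 255], [0, 255]], dtype=np.uint8)
obst = np.array([[255, 0], [0, 0]], dtype=np.uint8)
door = door_grid(trav, obst)
```

The door picture would then be combined with the obstacle and traversable pictures to form the enclosed contours.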
In the step in which the obstacle grid picture and the traversable grid picture are processed, the contour of the obstacle zone and the contour of the traversable zone may be processed by a morphological close operation to connect pixels of the obstacle contour, and pixels of the traversable contour so that the obstacle contour of the obstacle grid picture forms a complete line and the traversable contour of the traversable grid picture forms a complete line.
The obstacle grid picture, the traversable grid picture, and the door grid picture are combined to determine the enclosed contours of the target building.
In some embodiments of the present disclosure, the step in which the boundary line of the indoor obstacle, and the indoor and outdoor boundary line of the target building are determined according to the enclosed contours includes the steps below. An enclosed contour of an enclosed zone having the largest area in the enclosed contours is taken as the indoor and outdoor boundary line. An enclosed contour in the enclosed contours except the indoor and outdoor boundary line is taken as the boundary line of the indoor obstacle.
In the step of performing a contour extraction on the enclosed contour image to determine the indoor and outdoor boundary line, the contour extraction may be performed on the enclosed contour image. Polygons enclosed by the extracted contours are arranged in descending order of the areas. The contour in the first place (that is, having the largest area) is taken as the indoor and outdoor boundary line. Other contours are taken as boundary lines of the indoor obstacles.
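For illustration only, the largest-area selection above may be sketched as below; the contour extraction itself (e.g. an OpenCV-style routine) is not shown, and the function names and sample polygons are illustrative assumptions.

```python
# Illustrative sketch: order extracted enclosed contours by polygon area and
# split off the largest as the indoor/outdoor boundary line.
def polygon_area(poly):
    """Shoelace formula: absolute area of a simple polygon of (x, y) vertices."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def split_boundaries(contours):
    """Largest enclosed contour -> indoor/outdoor boundary line;
    the rest -> boundary lines of indoor obstacles."""
    ordered = sorted(contours, key=polygon_area, reverse=True)
    return ordered[0], ordered[1:]

# Example: an outer wall square and an interior freestanding pillar.
outer = [(0, 0), (10, 0), (10, 10), (0, 10)]
pillar = [(4, 4), (6, 4), (6, 6), (4, 6)]
boundary, obstacles = split_boundaries([pillar, outer])
```

The contour of largest area becomes the indoor and outdoor boundary line; all remaining contours are treated as indoor obstacle boundary lines.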
In some embodiments of the present disclosure, the step in which the enclosed zone of the boundary line of the indoor obstacle, and the enclosed zone of the indoor and outdoor boundary line are marked separately to obtain the indoor map of the target building includes the steps below. The interior of the enclosed zone of the boundary line of the indoor obstacle is determined as the obstacle zone. The obstacle zone is filled and marked by a first mark. The exterior of the enclosed zone of the indoor and outdoor boundary line is determined as an unexplored zone. The unexplored zone is filled and marked by a second mark. A zone in the interior of the enclosed zone of the indoor and outdoor boundary line except the interior of the enclosed zone of the boundary line of the indoor obstacle is determined as the traversable zone. The traversable zone is filled and marked by a third mark. The obstacle zone marked by the first mark, the unexplored zone marked by the second mark, and the traversable zone marked by the third mark, and the enclosed contours constitute the indoor map of the target building.
The first mark, the second mark, and the third mark may have different colors or filling images. Taking color as an example, the indoor obstacle zone is acquired and filled with the color of the first mark. Specifically, the obstacle zone is filled with white (that is, a gray scale value of 255) inside to obtain an image g1. The indoor traversable zone is acquired and filled with the color of the third mark. Specifically, the traversable zone may be determined from the indoor and outdoor boundary line.
After the indoor and outdoor boundary line is filled with white inside, an inverse selection operation is performed on the image to obtain an image g2. The zone except the obstacle zone and the traversable zone is the unexplored zone, and the contour of the unexplored zone is acquired and filled with the color of the second mark. g1 and g2 may be combined to obtain an image g3. A set P1 of all white pixels in g3 denotes the unexplored zone. After a morphological dilation operation of 1 pixel is performed on g3, a set P2 of all black pixels (that is, a gray scale value of 0) denotes the traversable zone. A set P3 of all white pixels in the obstacle grid picture gobst denotes the obstacle zone.
Then, a gray scale image g4 having iwidth × iheight pixels and a gray scale value of 0 is initialized. The gray scale value gij of any pixel pij in the ith row and jth column (i ∈ [1, iwidth], j ∈ [1, iheight]) is calculated by the formula below.

gij = { gray, pij ∈ P1
      { 255,  pij ∈ P2
      { 0,    pij ∈ P3

In the formula, gray ∈ (0, 255) is a given integer that denotes the unexplored zone and may be, for example, 128. The obtained grid map g4 is the map.
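For illustration only, composing the final grid map from the three pixel sets may be sketched as below; the function name and the boolean-mask representation of P1, P2, and P3 are illustrative assumptions.

```python
# Illustrative sketch: fill the final grid map g4 -- unexplored pixels (P1)
# become gray, traversable pixels (P2) become 255, obstacle pixels (P3)
# become 0.
import numpy as np

def compose_map(p1, p2, p3, gray=128):
    g4 = np.zeros(p1.shape, dtype=np.uint8)
    g4[p1] = gray
    g4[p2] = 255
    g4[p3] = 0
    return g4

# Example: a 1 x 3 map with one pixel of each zone type.
p1 = np.array([[True, False, False]])
p2 = np.array([[False, True, False]])
p3 = np.array([[False, False, True]])
g4 = compose_map(p1, p2, p3)
```

The resulting gray scale image is the two-dimensional grid map delivered to the robot.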
In some embodiments of the present disclosure, before the step in which the obstacle zone and the traversable zone of the target building are determined according to the three-dimensional point cloud, the method includes the steps below. The Z coordinate axis in the vertical direction of the three-dimensional point cloud is calibrated so that the positive direction of the Z coordinate axis of the three-dimensional point cloud is vertically upward. The three-dimensional point cloud is intercepted by a horizontal plane to determine point cloud data of a projection of a wall body of the target building corresponding to the three-dimensional point cloud on the horizontal plane. A straight line corresponding to any wall body is determined according to the point cloud data of the projection. A coordinate of the three-dimensional point cloud is rotated and adjusted according to a rotation angle of the straight line so that a horizontal axis of the target building represented by the three-dimensional point cloud is parallel to or perpendicular to a horizontal coordinate axis of the three-dimensional point cloud. The horizontal axis of the target building includes the real X coordinate axis and the real Y coordinate axis of the target building. The rotation angle is the included angle between the straight line and the horizontal coordinate axis of the three-dimensional point cloud. The horizontal coordinate axis is the X coordinate axis or the Y coordinate axis. The X coordinate axis and the Y coordinate axis are perpendicular to each other.
The three-dimensional point cloud in the three-dimensional coordinate system may have the same orientations as the real target building in the real space. Specifically, the three-dimensional coordinate system of the three-dimensional point cloud is the same as the three-dimensional coordinate system of the target building. That is, the X coordinate axis of the three-dimensional point cloud and the X coordinate axis of the target building are parallel and have the same positive direction. The Y coordinate axis of the three-dimensional point cloud and the Y coordinate axis of the target building are parallel and have the same positive direction. The Z coordinate axis of the three-dimensional point cloud and the Z coordinate axis of the target building are parallel and have the same positive direction. However, in most cases, an included angle exists between the three-dimensional coordinate system of the three-dimensional point cloud and the three-dimensional coordinate system of the target building.
Therefore, in the process of generating the three-dimensional point cloud, the three-dimensional coordinate system stated above may not coincide with the three-dimensional coordinate system of the real target building, so that the generated X coordinates of the three-dimensional point cloud do not coincide with the X coordinate axis of the three-dimensional coordinate system, the generated Y coordinates of the three-dimensional point cloud do not coincide with the Y coordinate axis of the three-dimensional coordinate system, and the amount of calculation is large. Therefore, the coordinate of each point of the three-dimensional point cloud needs to be adjusted so that the horizontal axis of the target building is parallel to or perpendicular to the coordinate axis, thereby improving the operation efficiency.
In the step of adjusting the coordinate of the each point of the three-dimensional point cloud, the Z coordinate axis in the vertical direction of the three-dimensional point cloud may be calibrated first so that the positive direction of the Z coordinate axis of the three-dimensional point cloud is vertically upward, and then the X coordinate axis and the Y coordinate axis in the horizontal plane of the three-dimensional point cloud are adjusted.
In some embodiments of the present disclosure, one straight line in an intersection line point set of the three-dimensional point cloud and the horizontal plane is acquired. The coordinate of the each point of the three-dimensional point cloud is rotated and adjusted according to the rotation angle of the straight line. The rotation angle is the angle between the straight line and the horizontal coordinate axis.
The three-dimensional point cloud is rotated and transformed so that the direction of the X coordinate axis of the target building represented by the three-dimensional point cloud is parallel or perpendicular to the X coordinate axis of the three-dimensional coordinate system of the three-dimensional point cloud, and the direction of the Y coordinate axis of the target building represented by the three-dimensional point cloud is parallel or perpendicular to the Y coordinate axis of the three-dimensional coordinate system of the three-dimensional point cloud.
In general, the coordinate adjustment can be accomplished by performing a principal components analysis on the three-dimensional point cloud. However, the problem that the axis direction of the three-dimensional coordinate system of the three-dimensional point cloud after the adjustment is not parallel (or perpendicular) to the X coordinate axis or the Y coordinate axis of the target building still exists. In this embodiment, the point cloud coordinate adjustment may be performed by the method below. The three-dimensional point cloud is intercepted by one horizontal plane to obtain intersection lines that are mainly wall bodies of the target building. The main indoor wall bodies of the target building are substantially perpendicular to each other and projections thereof on the horizontal plane are straight lines so that the three-dimensional point cloud coordinate adjustment can be accomplished by rotating the intersection lines parallel to or perpendicular to the X coordinate axis or the Y coordinate axis.
The straight line and the direction vector of the straight line are determined according to the point cloud data of the projection. The step in which the coordinate of the three-dimensional point cloud is rotated and adjusted according to the rotation angle of the straight line includes the steps below. The included angle between the direction vector and the positive direction of the X coordinate axis or the positive direction of the Y coordinate axis is determined as the rotation angle. The three-dimensional point cloud is rotated and adjusted according to the rotation angle.
In some embodiments of the present disclosure, the step in which the straight line of the intersection line of the three-dimensional point cloud is acquired includes the steps below.
The height of the horizontal plane from the ground is s, and the height tolerance is δ, so a point set Cobst having a height of s ± δ/2 from the ground is

Cobst = {ci | zci ∈ [zfloor + s − δ/2, zfloor + s + δ/2], i ∈ [1, N]}

Cobst is projected on the horizontal plane z = zfloor to obtain the intersection line point set C′obst. According to the intersection line point set C′obst, one straight line and the direction vector of the straight line are determined by the RANSAC algorithm.
The step in which the coordinate of the three-dimensional point cloud is rotated and adjusted according to the rotation angle of the straight line includes that the angle between the direction vector v and the positive direction of the X coordinate axis is θ. For any point ci in the three-dimensional point cloud, the homogeneous coordinate before the rotation and adjustment is a, and the homogeneous coordinate after the rotation and adjustment is a′. The coordinate after the rotation and adjustment can be obtained by a′ = Ta. In the formula, T is a transformation matrix and is obtained by the formula below.

T = [ cos θ   −sin θ   0 ]
    [ sin θ    cos θ   0 ]
    [   0        0     1 ]

Any point ci in the three-dimensional point cloud is rotated and adjusted by the preceding formula to adjust the three-dimensional point cloud.
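For illustration only, the rotation adjustment may be sketched as below. This sketch applies the inverse of the matrix T above (i.e. rotates by −θ) so that a wall direction at angle θ becomes parallel to the X axis; the function name and sign convention are illustrative assumptions.

```python
# Illustrative sketch: rotate the point cloud about the Z axis by -theta so
# that a wall direction vector at angle theta to the X axis becomes parallel
# to it.
import numpy as np

def align_to_x_axis(points, theta):
    c, s = np.cos(theta), np.sin(theta)
    # Inverse (transpose) of the rotation matrix T in the text.
    T_inv = np.array([[c, s, 0.0],
                      [-s, c, 0.0],
                      [0.0, 0.0, 1.0]])
    return points @ T_inv.T

# Example: a unit wall-direction vector at 45 degrees maps onto the X axis.
wall_dir = np.array([[np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0]])
aligned = align_to_x_axis(wall_dir, np.pi / 4)
```

After this transformation the main wall bodies of the building lie parallel or perpendicular to the coordinate axes, which simplifies the later grid operations.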
In addition, the present disclosure also provides an embodiment that is described in detail below.
With the disappearance of the population dividend and the aggravation of population aging in China, labor costs are constantly rising. In particular, in the construction industry, due to the complicated situation of the construction scene and the dangerous and harsh working environment, the shortage of labor in construction positions becomes more and more severe. Applying construction robots to the construction scene is a key solution to the preceding problem. The construction robot is required to perceive the scene environment on the premise that the construction robot can normally move and operate on the construction scene. Therefore, the map of the scene is of great significance for guiding the robot to a designated working position.
In recent years, terrestrial three-dimensional laser scanning technology has continuously evolved, and the scanning speed and precision have been greatly improved, so the practical conditions for quickly acquiring high-precision (millimeter-level) point cloud data of an entire floor or even an entire building with a laser scanner are now in place. The point cloud data has various purposes, such as construction progress tracking. If a high-precision point cloud of the scene is acquired before the construction robot enters the scene and operates, the point cloud may also be used to produce a high-precision map of the scene.
The main content of this embodiment is to generate a two-dimensional grid map from an indoor three-dimensional point cloud. The two-dimensional grid map is provided to various types of robots to guide them to move inside the building.
In this embodiment, first, the three-dimensional point cloud is processed to obtain an obstacle contour of an obstacle and a traversable contour of a traversable zone. Then, according to the obstacle contour and the traversable contour, a grid map is generated. The resolution of the map may be any value not higher than the resolution of the three-dimensional point cloud.
This embodiment saves the robot the mapping work before operation, saving mapping time and reducing repetitive work. Moreover, if the point cloud used achieves millimeter-level precision, the generated map can also achieve millimeter-level precision, supporting the different requirements of various indoor robots.
FIG. 2 is a flowchart of a map generation method according to an embodiment of the present disclosure. As shown in FIG. 2, the flow for generating the two-dimensional grid map according to the three-dimensional point cloud as shown in FIG. 1 includes the four steps below. 1. The ground and the ceiling are identified. 2. Coordinates of the three-dimensional point cloud are adjusted. 3. The obstacle contour of the obstacle and the traversable contour of the traversable zone are acquired. 4. The two-dimensional grid map is generated.
Specifically, in step 1, in which the ground and the ceiling are identified, the distribution of the Z coordinates of the point cloud is calculated to estimate the Z coordinate values of the ceiling and the ground. Here, it is assumed that the Z coordinate axis of the input three-dimensional point cloud has been calibrated to the vertical direction, with the positive direction of the Z coordinate axis vertically upward. From the indoor point cloud characteristics of a building, it is known that when the indoor ground and ceiling are substantially horizontal, the point cloud density has two peak values in the height direction: one peak value is located near the ground and the other near the ceiling. Under this assumption, the Z coordinate values of the ceiling and the ground can be obtained by finding the peak values of the point cloud density in the height direction. The steps are described below.
1) A distribution histogram of the Z coordinates of the point cloud is acquired.
It is assumed that the input point cloud C includes N points. For any point c_i among the N points, c_i ∈ C, the Z coordinate of the point c_i is z_ci, the length of each interval of the distribution histogram is d (d > 0), and the number of intervals of the distribution histogram is
n = ⌈(z_max − z_min) / d⌉ (1)
where z_max = max z_ci, i ∈ [1, N], and z_min = min z_ci, i ∈ [1, N]. In formula (1), the symbol ⌈ ⌉ denotes rounding up. The set of points included in any interval j is
H_zj = {c_i | z_ci ∈ [z_min + (j − 1)d, z_min + jd), i ∈ [1, N]}, j ∈ [1, n] (2)
The number of points included in any interval j (that is, the frequency) is
h_zj = |H_zj| (3)
In formula (3), the symbol | | denotes the number of elements in the set.
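As an illustrative sketch only (not the patented implementation), step 1 can be written with NumPy. `z_histogram` implements formulas (1)–(3), and `estimate_floor_ceiling` anticipates the quantile and peak search of formulas (4)–(10) described next; both function names, and the use of 0-based indices where the formulas use 1-based subscripts, are choices made for this example.

```python
import numpy as np

def z_histogram(points, d):
    """Formulas (1)-(3): distribution histogram of the Z coordinates.

    points: (N, 3) array; d: interval length (> 0).
    Returns the interval frequencies h_zj and the interval edges."""
    z = points[:, 2]
    z_min, z_max = z.min(), z.max()
    n = max(int(np.ceil((z_max - z_min) / d)), 1)  # number of intervals, rounded up
    edges = z_min + d * np.arange(n + 1)           # interval j covers [z_min+(j-1)d, z_min+jd)
    h, _ = np.histogram(z, bins=edges)
    return h, edges

def estimate_floor_ceiling(h, z_min, d, alpha=0.05):
    """Peak search of formulas (4)-(10), with 0-based interval indices."""
    h = np.asarray(h)
    N = h.sum()
    cum = np.cumsum(h)
    # formula (5): last interval whose cumulative frequency is <= N*alpha
    idx_a = int(np.searchsorted(cum, N * alpha, side='right'))
    idx_1a = int(np.searchsorted(cum, N * (1 - alpha), side='right'))
    idx_mid = (idx_a + idx_1a) // 2                      # formula (6)
    idx_floor = int(np.argmax(h[:idx_mid + 1]))          # formula (7): peak below the middle
    idx_ceiling = idx_mid + int(np.argmax(h[idx_mid:]))  # formula (8): peak above the middle
    z_floor = z_min + (idx_floor + 0.5) * d              # formula (9)
    z_ceiling = z_min + (idx_ceiling + 0.5) * d          # formula (10)
    return z_floor, z_ceiling
```

Note that `np.histogram` realizes the half-open intervals of formula (2), except that the last interval is closed on the right.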
2) The Z coordinate estimated values of the ground and the ceiling are calculated.
The lower α quantile of the distribution of the Z coordinates of the three-dimensional point cloud is denoted as z_α. That is, for a given α ∈ (0, 1), one point c_i randomly selected from the three-dimensional point cloud satisfies the formula below.
P(z_ci ≤ z_α) = α (4)
It is assumed that the subscript of the interval of the distribution histogram in which z_α is located is index_α. index_α may be estimated by the formula below.
index_α = max{index ∈ [1, n] | Σ_{j=1}^{index} h_zj ≤ Nα} (5)
According to the assumption, the histogram has two peak values. The subscript of a certain interval between the two peak values of the histogram may be estimated by the formula below.
index_middle = ⌊(index_α + index_{1−α}) / 2⌋, α ∈ (0, 1) (6)
In formula (6), α may be a relatively small positive number, such as 0.05.
Then, the subscripts corresponding to the two peak values of the histogram are:
index_floor = argmax_{j ∈ [1, index_middle]} h_zj (7)
index_ceiling = argmax_{j ∈ [index_middle, n]} h_zj (8)
The Z coordinate estimated value z_ceiling of the ceiling and the Z coordinate estimated value z_floor of the ground are calculated by the formulas below.
z_floor = z_min + (index_floor − 1/2) d (9)
z_ceiling = z_min + (index_ceiling − 1/2) d (10)
In step 2, in which coordinates of the three-dimensional point cloud are adjusted, the three-dimensional point cloud is rotated and transformed so that the axis direction of the three-dimensional point cloud representing the target building is parallel or perpendicular to the X coordinate axis or the Y coordinate axis. In general, the coordinate adjustment can be accomplished by performing a principal components analysis on the three-dimensional point cloud. However, the following problem may remain: the adjusted axis direction of the target building is still not parallel (or perpendicular) to the X coordinate axis or the Y coordinate axis. In this embodiment, the three-dimensional point cloud coordinate adjustment may be performed by the method below. The three-dimensional point cloud is intercepted by one horizontal plane to obtain intersection lines that are mainly wall bodies. It is assumed that the main indoor wall bodies of the target building are substantially perpendicular to each other and that their projections on the horizontal plane are straight lines, so the three-dimensional point cloud coordinate adjustment can be accomplished by rotating the intersection lines to be parallel or perpendicular to the X coordinate axis or the Y coordinate axis. The specific steps are as given below.
1) An intersection line set of the three-dimensional point cloud and the horizontal plane is acquired, together with one straight line in the intersection line set. The height of the horizontal plane from the ground is s, and the height tolerance is δ, so a point set C_obst having a height of s ± δ/2 from the ground is obtained:
C_obst = {c_i | z_floor + s − δ/2 ≤ z_ci ≤ z_floor + s + δ/2, i ∈ [1, N]} (11)
C_obst is projected onto the horizontal plane Z = z_floor to obtain the intersection line point set C′_obst. From the intersection line point set C′_obst, one two-dimensional straight line in the set and the direction vector of the straight line are obtained by a universal random sample consensus (RANSAC) algorithm.
2) A rotation angle is acquired to rotate and adjust the point cloud coordinates.
It is assumed that the angle between the direction vector v and the positive direction of the X coordinate axis is θ. For any point c_i in the three-dimensional point cloud, the homogeneous coordinate before the rotation and adjustment is a, and the homogeneous coordinate after the rotation and adjustment is a′. The coordinate after the rotation and adjustment can be obtained by the formula below.
a' = Ta (12) In the formula (12), T is a transformation matrix and is obtained by the formula below.
T = [ cos θ   −sin θ   0
      sin θ    cos θ   0
      0        0       1 ] (13)
Any point c_i in the three-dimensional point cloud is rotated and adjusted by formula (12).
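The rotation and adjustment of formulas (12) and (13) amount to a homogeneous 2D rotation applied to the XY coordinates of every point, with Z left unchanged. A minimal sketch follows; the helper name `rotate_cloud_xy` is an assumption of this example, not the patent's implementation.

```python
import numpy as np

def rotate_cloud_xy(points, theta):
    """Apply the homogeneous rotation T of formula (13) to the XY plane.

    points: (N, 3) array; theta: rotation angle in radians.
    Z coordinates pass through unchanged."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])        # 2D homogeneous transform, formula (13)
    ones = np.ones((points.shape[0], 1))
    a = np.hstack([points[:, :2], ones])   # homogeneous coordinates a
    a_prime = a @ T.T                      # a' = Ta for every point (formula (12))
    out = points.copy()
    out[:, :2] = a_prime[:, :2]
    return out
```

In practice θ would be the angle of the RANSAC wall line found in step 1), negated so that the wall becomes parallel to a coordinate axis.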
In step 3, in which the obstacle contour of the obstacle and the traversable contour of the traversable zone are acquired, the map carries two key pieces of information, that is, an occupied zone (the obstacle zone) and an idle zone (the traversable zone).
The two key pieces of information can be obtained by extracting the contours of obstacles such as walls, pillars, and bay window sills, and the range of traversable places on the ground in the three-dimensional point cloud. The specific steps are as given below.
1) The obstacle contour of the obstacle zone is acquired. According to the description of step 2, the intersection line point set C′_obst obtained in step 1) of step 2, after the rotation and adjustment, is the three-dimensional point cloud composed of the contours of the obstacles at the given height s.
2) The traversable contour of the traversable zone is acquired. A point set C_floor0 having a height of 0 ± δ/2 from the ground is acquired by the formula below.
C_floor0 = {c_i | z_floor − δ/2 ≤ z_ci ≤ z_floor + δ/2, i ∈ [1, N]} (14)
If the ground point set is C_floor, then C_floor ⊆ C_floor0. A planar model can be obtained from C_floor0 by the universal RANSAC algorithm, and the points inside the model form the ground point set C_floor.
A common point cloud boundary extraction algorithm can be used. For example, a ground contour C′_floor is obtained from C_floor through point cloud boundary extraction based on threshold evaluation of the vectorial angle of k-nearest-neighbor points; C′_floor is the traversable contour of the traversable zone.
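The universal RANSAC plane fit that isolates the ground set C_floor from C_floor0 can be sketched as follows. The function `ransac_plane`, its distance threshold, and its iteration count are illustrative assumptions; a production system would typically use a library implementation, and the boundary extraction step is not sketched here.

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.02, iters=200, rng=None):
    """Minimal RANSAC plane fit: returns the index array of the inliers
    of the best plane found.

    Each iteration fits a plane through 3 random points and counts the
    points within dist_thresh of it; the largest inlier set wins."""
    rng = np.random.default_rng(rng)
    best_inliers = np.array([], dtype=int)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample, skip
            continue
        normal = normal / norm
        dist = np.abs((points - sample[0]) @ normal)  # point-to-plane distances
        inliers = np.flatnonzero(dist < dist_thresh)
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

Applied to C_floor0, the returned inliers play the role of the ground point set C_floor.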
3) According to the contours of the three-dimensional point cloud, an obstacle grid picture of the obstacle contour of the obstacle zone and a traversable grid picture of the traversable contour of the traversable zone are obtained. The output resolution is assumed to be r; that is, one pixel in the grid picture represents an actual distance of r. The width i_width and the height i_height of the grid picture are then calculated by the formulas below.
i_width = ⌈(x_max − x_min) / r⌉ (15)
i_height = ⌈(y_max − y_min) / r⌉ (16)
In formulas (15) and (16), x_max = max x_ci, c_i ∈ C′_obst ∪ C′_floor; x_min = min x_ci, c_i ∈ C′_obst ∪ C′_floor; y_max = max y_ci, c_i ∈ C′_obst ∪ C′_floor; and y_min = min y_ci, c_i ∈ C′_obst ∪ C′_floor. In the formulas, x_ci and y_ci are the x coordinate and the y coordinate of the point c_i, respectively.
The obstacle grid picture of the obstacle contour of the obstacle zone is initialized to a gray scale image of i_width × i_height pixels having a gray scale value of 0. The gray scale value g_ij of any pixel in the ith row and jth column (i ∈ [1, i_width], j ∈ [1, i_height]) is calculated by the formula below.
g_ij = { 0,   m_ij < n_threshold
       { 255, m_ij ≥ n_threshold (17)
In formula (17), m_ij is the number of points in the point set C′_obst falling within a circular search zone that takes the point c_ij corresponding to the pixel center point as the center of the circle and a given positive number radius as the search radius. The nearest-neighbor point search can be achieved by a common method such as constructing a kd-tree. n_threshold is a relatively small positive integer. When the number of points in the search zone is not less than n_threshold, the pixel gray value is set to 255. The x coordinate and the y coordinate of the point c_ij are obtained by the formulas below.
x_cij = x_min + (i − 1/2) r
y_cij = y_max − (j − 1/2) r
For the traversable grid picture of the traversable contour of the traversable zone, the same process as above is used for obtaining the gray scale picture, except that the searched target point set is C′_floor instead of C′_obst.
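The rasterization of formulas (15)–(17) can be approximated as below. Note that this sketch deliberately simplifies the circular kd-tree search around each pixel center into a per-pixel point count, and the function name `contour_to_grid` is an assumption of this example.

```python
import numpy as np

def contour_to_grid(points_xy, r, n_threshold=1):
    """Rasterize a 2D contour point set into a grid picture (cf. (15)-(17)).

    points_xy: (N, 2) array; r: map resolution (meters per pixel).
    Pixels holding at least n_threshold points become white (255)."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    x_min, x_max = x.min(), x.max()
    y_min, y_max = y.min(), y.max()
    iw = max(int(np.ceil((x_max - x_min) / r)), 1)   # formula (15)
    ih = max(int(np.ceil((y_max - y_min) / r)), 1)   # formula (16)
    col = np.clip(((x - x_min) / r).astype(int), 0, iw - 1)
    row = np.clip(((y_max - y) / r).astype(int), 0, ih - 1)  # y grows up, rows grow down
    counts = np.zeros((ih, iw), dtype=int)
    np.add.at(counts, (row, col), 1)                 # per-pixel point count
    return np.where(counts >= n_threshold, 255, 0).astype(np.uint8)
```

Run once with C′_obst to get the obstacle grid picture and once with C′_floor to get the traversable grid picture.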
In step 4, in which the two-dimensional grid map is generated, after the obstacle grid picture of the obstacle contour of the obstacle zone and the traversable grid picture of the traversable contour of the traversable zone are acquired, a series of image processing operations is performed to obtain the map. FIG. 3 is a flowchart of a method for generating a map according to an obstacle zone and a traversable zone according to an embodiment of the present disclosure. As shown in FIG. 3, the specific steps are as given below.
1) FIG. 4-1 is a schematic view of an obstacle grid picture according to an embodiment of the present disclosure. FIG. 4-2 is a schematic view of a traversable grid picture according to an embodiment of the present disclosure. FIG. 5-1 is a schematic view of an obstacle grid picture after pixel connection according to an embodiment of the present disclosure. FIG. 5-2 is a schematic view of a traversable grid picture after pixel connection according to an embodiment of the present disclosure. A universal morphological close operation of dilating and then eroding is performed separately on the obstacle grid picture of the obstacle contour of the obstacle zone (as shown in FIG. 4-1) and the traversable grid picture of the traversable contour of the traversable zone (as shown in FIG. 4-2) to connect discontinuous contour pixels, obtaining an image g_obst (as shown in FIG. 5-1) and an image g_floor (as shown in FIG. 5-2).
2) FIG. 6 is a schematic view of a door grid picture according to an embodiment of the present disclosure. FIG. 7 is a schematic view of enclosed contours according to an embodiment of the present disclosure. As shown in FIGS. 6 and 7, the picture of the obstacle contour of the obstacle zone g_obst is subtracted from the picture of the traversable contour of the traversable zone g_floor to obtain the door contour (as shown in FIG. 6), and then the door contour is combined with the image g_obst to obtain a complete enclosed contour image (as shown in FIG. 7).
3) FIG. 8-1 is a schematic view of an indoor and outdoor boundary line according to an embodiment of the present disclosure. FIG. 8-2 is a schematic view of a boundary line of an indoor obstacle according to an embodiment of the present disclosure. As shown in FIGS. 8-1 and 8-2, a contour extraction is performed on the image obtained in the preceding step.
Moreover, the polygons enclosed by the extracted contours are arranged in descending order of area. The contour ranked first (that is, the contour having the largest area) is taken as the indoor and outdoor boundary line (as shown in FIG. 8-1). The other contours are taken as boundary lines of the indoor obstacles (as shown in FIG. 8-2).
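The area-based ordering of the extracted contours can be sketched with the shoelace formula; the helper names `shoelace_area` and `split_boundaries` are illustrative, and contours are assumed to be closed polygons given as (N, 2) vertex arrays.

```python
import numpy as np

def shoelace_area(poly):
    """Absolute area of a closed polygon via the shoelace formula."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def split_boundaries(contours):
    """Sort contours by enclosed area, descending.

    The largest contour is taken as the indoor/outdoor boundary line;
    the rest are taken as boundary lines of indoor obstacles."""
    ordered = sorted(contours, key=shoelace_area, reverse=True)
    return ordered[0], ordered[1:]
```

In a full pipeline the input `contours` would come from a contour extraction routine run on the enclosed contour image.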
4) The boundary lines of the indoor obstacles are filled with white (that is, a gray scale value of 255) inside to obtain an image g_1. After the indoor and outdoor boundary line is filled with white inside, an inverse selection operation is performed on the enclosed contour image to obtain an image g_2. g_1 and g_2 are combined to obtain an image g_3. The set P_1 of all white pixels in g_3 denotes the unexplored zone. After a morphological dilation operation of one pixel is performed on g_3, the set P_2 of all black pixels (that is, pixels with a gray scale value of 0) denotes the traversable zone. The set P_3 of all white pixels in g_obst denotes the obstacle zone.
5) A gray scale image g_4 having i_width × i_height pixels and a gray scale value of 0 is initialized. The gray scale value g_ij of any pixel p_ij in the ith row and jth column (i ∈ [1, i_width], j ∈ [1, i_height]) is calculated by the formula below.
g_ij = { gray, p_ij ∈ P_1
       { 255,  p_ij ∈ P_2 (18)
       { 0,    p_ij ∈ P_3
In formula (18), gray ∈ (0, 255) is a given integer that renders the unexplored zone in gray and may be 128. FIG. 9 is a schematic view of a map according to an embodiment of the present disclosure. As shown in FIG. 9, the obtained grid picture g_4 is the map.
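Formula (18) reduces to a few array assignments over the three pixel sets. In the sketch below, representing P_1, P_2, and P_3 as boolean masks, and the function name `compose_map`, are assumptions of this example.

```python
import numpy as np

def compose_map(shape, p1_mask, p2_mask, p3_mask, gray=128):
    """Formula (18): unexplored = gray, traversable = 255, obstacle = 0.

    Masks are boolean arrays of the map shape; where masks overlap,
    later assignments override earlier ones (obstacles win)."""
    g4 = np.zeros(shape, dtype=np.uint8)
    g4[p1_mask] = gray   # unexplored zone P1
    g4[p2_mask] = 255    # traversable zone P2
    g4[p3_mask] = 0      # obstacle zone P3
    return g4
```

The resulting `g4` is the grid map in the three-level gray convention used by common occupancy-grid tooling.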
FIG. 10 is a schematic diagram of a map generation device according to an embodiment of the present disclosure. As shown in FIG. 10, in another aspect of the embodiments of the present disclosure, a map generation device is provided. The device includes a first determination module 1002, a second determination module 1004, and a generation module 1006. The device is described in detail below.
The first determination module 1002 is configured to acquire a three-dimensional point cloud of a target building. The second determination module 1004 is connected to the first determination module 1002, and is configured to determine an obstacle zone and a traversable zone of the target building according to the three-dimensional point cloud. The generation module 1006 is connected to the second determination module 1004, and is configured to generate an indoor map of the target building according to the obstacle zone and the traversable zone.
Through the device, the three-dimensional point cloud of the target building is determined by the first determination module 1002, the obstacle zone and the traversable zone of the target building are determined according to the three-dimensional point cloud by the second determination module 1004, and the indoor map of the target building is generated according to the obstacle zone and the traversable zone by the generation module 1006. In this manner, the obstacle zone and the traversable zone of the target building are directly determined from the three-dimensional point cloud, so the indoor map of the target building is generated without the robot performing autonomous mapping. Providing the robot with a map generated directly from the three-dimensional point cloud improves map generation efficiency and solves the following technical problem in the related art: the robot is required to map first before the map is generated, so the efficiency is low.
In another aspect of the embodiments of the present disclosure, a storage medium is also provided. The storage medium includes a program stored in the storage medium. When the program is executed, a device in which the storage medium is located is controlled to perform the method of any one of the embodiments described above.
In another aspect of the embodiments of the present disclosure, a processor is also provided.
The processor is configured to execute a program. When the program is executed, the processor performs the method of any one of the embodiments described above.
The serial numbers of the embodiments described above in the present disclosure are merely for description and do not indicate superiority or inferiority of the embodiments.
In the embodiments described above of the present disclosure, the description of each embodiment has its own emphasis. For a part not described in detail in a certain embodiment. reference may be made to a related description of other embodiments.
It should be understood that the technical contents disclosed in the embodiments of the present disclosure may be implemented in other ways. The device embodiments described above are merely exemplary. For example, the unit classification may be a logical function classification, and, in practice, the unit classification may be implemented in other ways. For example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted or not executed. Additionally, the presented or discussed mutual coupling, direct coupling, or communication connections may be indirect coupling or communication connections via interfaces, units, or modules, or may be electrical or in other forms.
The units described as separate components may or may not be physically separated. Components presented as units may or may not be physical units, that is, may be located in one place or may be distributed on multiple units. Part or all of these units may be selected according to practical requirements to achieve the objects of the solutions in the embodiments
of the present disclosure.
Additionally, various functional units in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may be physically present separately, or two or more units may be integrated into one unit. The integrated unit may be implemented by hardware or by a software functional unit.
The integrated unit may be stored in a computer-readable storage medium if implemented in the form of the software functional unit and sold or used as an independent product. Based on this understanding, the solutions provided by the present invention substantially, or the part contributing to the existing art, may be embodied in the form of a software product. The computer software product is stored on a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps in the methods provided by the embodiments of the present disclosure. The above storage medium may be a U disk, a read-only memory (ROM), a random access memory (RAM), a mobile hard disk, a magnetic disk, an optical disk, or another medium capable of storing program codes.
The above are merely preferred embodiments of the present disclosure. It is to be noted that for those skilled in the art, several improvements and modifications may be made without departing from the principle of the present disclosure, and these improvements and modifications are within the scope of the present disclosure.

Claims (16)

What is claimed is: 1. A map generation method, comprising: acquiring a three-dimensional point cloud of a target building; determining an obstacle zone and a traversable zone of the target building according to the three-dimensional point cloud; and generating an indoor map of the target building according to the obstacle zone and the traversable zone.
2. The method according to claim 1, wherein determining the obstacle zone of the target building according to the three-dimensional point cloud comprises: intercepting the three-dimensional point cloud by a horizontal plane to acquire point cloud data of an obstacle corresponding to the horizontal plane; projecting the point cloud data of the obstacle on the horizontal plane to determine an obstacle contour of the obstacle on the horizontal plane; and generating a corresponding obstacle grid picture according to the obstacle contour to determine the obstacle zone.
3. The method according to claim 2, wherein determining the traversable zone of the target building according to the three-dimensional point cloud comprises: determining point cloud data of a ground of the target building according to the three-dimensional point cloud; determining a traversable contour according to the point cloud data of the ground; and generating a corresponding traversable grid picture according to the traversable contour to determine the traversable zone.
4. The method according to claim 3, before generating the corresponding obstacle grid picture according to the obstacle contour, or generating the corresponding traversable grid picture according to the traversable contour, the method further comprises: establishing a new grid picture, wherein the grid picture has a resolution of r, and a width i_width and a height i_height of the grid picture are calculated by formulas below: i_width = ⌈(x_max − x_min) / r⌉ i_height = ⌈(y_max − y_min) / r⌉ wherein x_max is a maximum value of X coordinates in an intersection set of point cloud data of the obstacle contour, and point cloud data of the traversable contour; x_min is a minimum value of the X coordinates in the intersection set of the point cloud data of the obstacle contour, and the point cloud data of the traversable contour; y_max is a maximum value of Y coordinates in the intersection set of the point cloud data of the obstacle contour, and the point cloud data of the traversable contour; and y_min is a minimum value of the Y coordinates in the intersection set of the point cloud data of the obstacle contour, and the point cloud data of the traversable contour.
5. The method according to claim 4, wherein generating the corresponding obstacle grid picture according to the obstacle contour comprises: determining a quantity of point cloud data of the obstacle contour in each grid of the grid picture according to the obstacle contour, in a case where the quantity in a grid does not exceed a preset quantity, determining the grid as a first preset gray scale, and in a case where the quantity in a grid exceeds the preset quantity, determining the grid as a second preset gray scale, so that the obstacle grid picture is generated; and/or wherein generating the corresponding traversable grid picture according to the traversable contour comprises: determining a quantity of point cloud data of the traversable contour in each grid of the grid picture according to the traversable contour, in a case where the quantity in a grid does not exceed the preset quantity, determining the grid as a first preset gray scale, and in a case where the quantity in a grid exceeds the preset quantity, determining the grid as a second preset gray scale, so that the traversable grid picture is generated.
6. The method according to claim 3, wherein determining the point cloud data of the ground of the target building according to the three-dimensional point cloud comprises: calculating a Z coordinate in a vertical direction of each point in the three-dimensional point cloud to determine a distribution of the Z coordinate of each point cloud data in the three-dimensional point cloud; determining a Z coordinate value of the point cloud data of the ground according to the distribution; and determining a plurality of point cloud data satisfying the Z coordinate value in the three-dimensional point cloud as the point cloud data of the ground.
7. The method according to claim 6, wherein determining the Z coordinate value of the point cloud data of the ground according to the distribution comprises: according to a maximum value and a minimum value of a coordinate of all point cloud data in the three-dimensional point cloud and a preset interval length, determining a number of intervals of a distribution histogram and a set of point cloud data corresponding to each interval to generate the distribution histogram; determining a lower quantile of the distribution histogram, and a subscript of an interval in which the lower quantile is located; determining a subscript of any interval of the intervals between a first peak value and a second peak value of the distribution histogram according to the subscript of the interval in which the lower quantile is located, and determining a subscript corresponding to the first peak value and a subscript corresponding to the second peak value according to the subscript of the any interval; and comparing a first Z coordinate value corresponding to the subscript of the first peak value and a second Z coordinate value corresponding to the subscript of the second peak value, and taking a smaller Z coordinate value of the first Z coordinate value and the second Z coordinate value as a Z coordinate value of the ground.
8. The method according to claim 5, wherein generating the indoor map of the target building according to the obstacle zone and the traversable zone comprises: determining a door grid picture of a door contour in the traversable contour of the target building according to the obstacle grid picture of the obstacle contour and the traversable grid picture of the traversable contour; determining enclosed contours of the target building according to the obstacle grid picture and the door grid picture; determining a boundary line of an indoor obstacle of the target building, and an indoor and outdoor boundary line of the target building according to the enclosed contours; and marking an enclosed zone of the boundary line of the indoor obstacle, and an enclosed zone of the indoor and outdoor boundary line separately to obtain the indoor map of the target building.
9. The method according to claim 8, wherein determining the door grid picture of the door contour of the target building according to the obstacle grid picture of the obstacle contour, and the traversable grid picture of the traversable contour comprises: processing the obstacle grid picture to connect pixels of grids in the obstacle grid picture, and processing the traversable grid picture to connect pixels of grids in the traversable grid picture; and subtracting the obstacle grid picture after pixel connection, from the traversable grid picture after pixel connection to obtain the door grid picture of the door contour of the target building; and wherein determining the enclosed contours of the target building according to the obstacle grid picture, the traversable grid picture, and the door grid picture comprises: combining the door grid picture, the obstacle grid picture, and the traversable grid picture to generate the enclosed contours of the target building.
10. The method according to claim 8, wherein determining the boundary line of the indoor obstacle of the target building, and the indoor and outdoor boundary line of the target building according to the enclosed contours comprises: taking an enclosed contour of an enclosed zone having a largest area in the enclosed contours as the indoor and outdoor boundary line; and taking an enclosed contour except the indoor and outdoor boundary line in the enclosed contours, as the boundary line of the indoor obstacle.
11. The method according to claim 8, wherein marking the enclosed zone of the boundary line of the indoor obstacle, and the enclosed zone of the indoor and outdoor boundary line separately to obtain the indoor map of the target building comprises: determining an interior of the enclosed zone of the boundary line of the indoor obstacle as the obstacle zone, and filling and marking the obstacle zone by a first mark; determining an exterior of the enclosed zone of the indoor and outdoor boundary line as an unexplored zone, and filling and marking the unexplored zone by a second mark; and determining a zone in an interior of the enclosed zone of the indoor and outdoor boundary line except the interior of the enclosed zone of the boundary line of the indoor obstacle as the traversable zone, and filling and marking the traversable zone by a third mark, wherein the obstacle zone marked by the first mark, the unexplored zone marked by the second mark, and the traversable zone marked by the third mark, and the enclosed contours constitute the indoor map of the target building.
12. The method according to claim 1, before determining the obstacle zone and the traversable zone of the target building according to the three-dimensional point cloud, the method comprises: calibrating a Z coordinate axis in a vertical direction of the three-dimensional point cloud so that a positive direction of the Z coordinate axis of the three-dimensional point cloud is vertically upward; intercepting the three-dimensional point cloud by a horizontal plane to determine point cloud data of a projection of a wall body of the target building corresponding to the three-dimensional point cloud on the horizontal plane; determining a straight line corresponding to any wall body according to the point cloud data of the projection; and rotating and adjusting a coordinate of the three-dimensional point cloud according to a rotation angle of the straight line so that a horizontal axis of the target building represented by the three-dimensional point cloud is parallel to or perpendicular to a horizontal coordinate axis of the three-dimensional point cloud, wherein the rotation angle is an included angle between the straight line and the horizontal coordinate axis of the three-dimensional point cloud, and the horizontal coordinate axis is an X coordinate axis or a Y coordinate axis, wherein the X coordinate axis and the Y coordinate axis are perpendicular to each other.
13. The method according to claim 12, wherein determining the straight line corresponding to the any wall body according to the point cloud data of the projection comprises: determining the straight line and a direction vector of the straight line according to the point cloud data of the projection; and wherein rotating and adjusting the coordinate of the three-dimensional point cloud according to the rotation angle of the straight line comprises: determining an included angle between the direction vector, and a positive direction of the X coordinate axis or a positive direction of the Y coordinate axis as the rotation angle; and rotating and adjusting the three-dimensional point cloud according to the rotation angle.
14. A map generation device, comprising: a first determination module configured to acquire a three-dimensional point cloud of a target building; a second determination module configured to determine an obstacle zone and a traversable zone of the target building according to the three-dimensional point cloud; and a generation module configured to generate an indoor map of the target building according to the obstacle zone and the traversable zone.
15. A storage medium, comprising a program stored in the storage medium, wherein when the program is executed, a device in which the storage medium is located is controlled to perform the method of any one of claims 1 to 13.
16. A processor, configured to execute a program, wherein when the program is executed, the processor performs the method of any one of claims 1 to 13.
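The pre-processing recited in claims 12 and 13 amounts to three steps: slice the cloud with a horizontal plane to obtain the wall projection, fit a straight line to that projection, and rotate the cloud about the Z axis by the included angle between the line's direction vector and the positive X axis. A minimal sketch follows, assuming NumPy conventions and a PCA line fit (the claims do not prescribe a fitting method, and the names `slice_wall_projection`, `fit_wall_direction` and `align_cloud`, as well as the band thickness, are illustrative assumptions, not from the patent):

```python
import numpy as np

def slice_wall_projection(points, z_plane, thickness=0.05):
    """Keep points in a thin horizontal band around z_plane and project
    them onto the XY plane, approximating the claim's 'interception by a
    horizontal plane' (the band thickness is an assumed parameter)."""
    mask = np.abs(points[:, 2] - z_plane) <= thickness
    return points[mask][:, :2]

def fit_wall_direction(xy):
    """Fit a straight line to the projected wall points and return its
    unit direction vector, here via PCA on the 2D covariance."""
    centered = xy - xy.mean(axis=0)
    _, eigvecs = np.linalg.eigh(centered.T @ centered)
    return eigvecs[:, -1]  # eigenvector of the largest eigenvalue

def align_cloud(points, direction):
    """Rotate the whole cloud about the Z axis by minus the included
    angle between the wall direction and the positive X axis, so the
    wall becomes parallel to the X axis (claim 13)."""
    angle = np.arctan2(direction[1], direction[0])
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T
```

Because PCA recovers the direction only up to sign, the rotation may map the wall onto the negative rather than positive X half-axis; either way the wall ends up parallel to the X coordinate axis, which is all the claim requires.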
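Claim 14's three modules can be read as an acquire → classify → generate pipeline. The sketch below assumes a simple per-grid-cell height threshold to separate the obstacle zone from the traversable zone; the patent does not specify this criterion, and the class name `MapGenerationDevice` and its parameters are hypothetical:

```python
import numpy as np

class MapGenerationDevice:
    """Illustrative reading of claim 14's three-module device. The
    obstacle/traversable test (a height threshold per grid cell) is an
    assumption, not the criterion the patent actually claims."""

    def __init__(self, cell_size=0.1, height_threshold=0.2):
        self.cell_size = cell_size
        self.height_threshold = height_threshold

    def acquire_point_cloud(self, points):
        # First determination module: acquire the 3D point cloud
        # (here simply passed in as an N x 3 array).
        self.points = np.asarray(points, dtype=float)
        return self.points

    def determine_zones(self):
        # Second determination module: classify each occupied grid
        # cell as obstacle or traversable by point height.
        cells = np.floor(self.points[:, :2] / self.cell_size).astype(int)
        obstacle, traversable = set(), set()
        for cell, z in zip(map(tuple, cells), self.points[:, 2]):
            (obstacle if z > self.height_threshold else traversable).add(cell)
        self.obstacle = obstacle
        self.traversable = traversable - obstacle  # obstacle wins ties
        return self.obstacle, self.traversable

    def generate_map(self):
        # Generation module: merge both zones into an occupancy map
        # (1 = obstacle, 0 = traversable).
        grid = {cell: 1 for cell in self.obstacle}
        grid.update({cell: 0 for cell in self.traversable})
        return grid
```

A cell containing both low and high points is treated as obstacle, a conservative choice for indoor navigation; the sparse dictionary stands in for whatever grid representation the actual device would use.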
GB2216637.5A 2020-04-21 2020-12-11 Map generation method and device, storage medium and processor Pending GB2609849A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010318680.7A CN113538671B (en) 2020-04-21 2020-04-21 Map generation method, map generation device, storage medium and processor
PCT/CN2020/135878 WO2021212875A1 (en) 2020-04-21 2020-12-11 Map generation method and device, storage medium and processor

Publications (2)

Publication Number Publication Date
GB202216637D0 GB202216637D0 (en) 2022-12-21
GB2609849A true GB2609849A (en) 2023-02-15

Family

ID=78093978

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2216637.5A Pending GB2609849A (en) 2020-04-21 2020-12-11 Map generation method and device, storage medium and processor

Country Status (5)

Country Link
JP (1) JP2023522262A (en)
CN (1) CN113538671B (en)
AU (1) AU2020444025A1 (en)
GB (1) GB2609849A (en)
WO (1) WO2021212875A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116069006A (en) * 2021-11-01 2023-05-05 速感科技(北京)有限公司 Map optimization method, map optimization device, electronic equipment and storage medium
CN116518987A (en) * 2022-01-24 2023-08-01 追觅创新科技(苏州)有限公司 Map processing method, system and self-mobile device
CN114818051A (en) * 2022-03-24 2022-07-29 香港大学深圳研究院 Indoor three-dimensional barrier-free map generation method based on LiDAR point cloud and BIM collision simulation
CN115381354A (en) * 2022-07-28 2022-11-25 广州宝乐软件科技有限公司 Obstacle avoidance method and obstacle avoidance device for cleaning robot, storage medium and equipment
CN115033972B (en) * 2022-08-09 2022-11-08 武汉易米景科技有限公司 Method and system for unitizing building main body structures in batches and readable storage medium
CN115423933B (en) * 2022-08-12 2023-09-29 北京城市网邻信息技术有限公司 House type diagram generation method and device, electronic equipment and storage medium
CN116224367A (en) * 2022-10-12 2023-06-06 深圳市速腾聚创科技有限公司 Obstacle detection method and device, medium and electronic equipment
CN116538953B (en) * 2023-05-08 2024-01-30 武汉纵横天地空间信息技术有限公司 Intelligent detection method and system for elevation targets and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106199558A * 2016-08-18 2016-12-07 宁波傲视智绘光电科技有限公司 Rapid obstacle detection method
CN106997049A * 2017-03-14 2017-08-01 奇瑞汽车股份有限公司 Method and apparatus for detecting obstacles based on laser point cloud data
CN108984741A * 2018-07-16 2018-12-11 北京三快在线科技有限公司 Map generation method and device, robot and computer-readable storage medium
US20190287254A1 (en) * 2018-03-16 2019-09-19 Honda Motor Co., Ltd. Lidar noise removal using image pixel clusterings
CN110274602A * 2018-03-15 2019-09-24 奥孛睿斯有限责任公司 Automatic indoor map construction method and system
CN110400363A (en) * 2018-04-24 2019-11-01 北京京东尚科信息技术有限公司 Map constructing method and device based on laser point cloud

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4645601B2 (en) * 2007-02-13 2011-03-09 トヨタ自動車株式会社 Environmental map generation method and mobile robot
JP4999734B2 (en) * 2008-03-07 2012-08-15 株式会社日立製作所 ENVIRONMENTAL MAP GENERATION DEVICE, METHOD, AND PROGRAM
JP5546998B2 (en) * 2010-08-19 2014-07-09 Kddi株式会社 3D map creation method and apparatus
CN110286387B (en) * 2019-06-25 2021-09-24 深兰科技(上海)有限公司 Obstacle detection method and device applied to automatic driving system and storage medium


Also Published As

Publication number Publication date
CN113538671A (en) 2021-10-22
CN113538671B (en) 2024-02-13
WO2021212875A1 (en) 2021-10-28
GB202216637D0 (en) 2022-12-21
AU2020444025A1 (en) 2022-12-15
JP2023522262A (en) 2023-05-29

Similar Documents

Publication Publication Date Title
GB2609849A (en) Map generation method and device, storage medium and processor
CN111486855B (en) Indoor two-dimensional semantic grid map construction method with object navigation points
EP1895472B1 (en) System and method for 3D radar image rendering
CN104732587A (en) Depth sensor-based method of establishing indoor 3D (three-dimensional) semantic map
CN104297758B Auxiliary berthing device and method based on 2D pulsed laser radar
CN111610494B (en) VTS radar configuration signal coverage optimization method
EP4283567A1 (en) Three-dimensional map construction method and apparatus
CN109215071A Vision-based swath measurement method for intelligent rice and wheat harvester
CN108510587A Indoor and outdoor environment modeling method and system based on 2D laser scanning
CN111612806B (en) Building facade window extraction method and device
CN108217045A An intelligent robot for undercarriage on data center's physical equipment
CN111354007B (en) Projection interaction method based on pure machine vision positioning
CN107632305B (en) Autonomous sensing method and device for local submarine topography based on profile sonar scanning technology
CN114926739A (en) Unmanned collaborative acquisition and processing method for underwater and overwater geographic spatial information of inland waterway
Mason et al. Textured occupancy grids for monocular localization without features
CN106504228B Wide-range high-definition rapid registration method for ophthalmic OCT images
CN111458691A (en) Building information extraction method and device and computer equipment
CN108550134B (en) Method and device for determining map creation effect index
CN111694009B (en) Positioning system, method and device
CN113723389A (en) Method and device for positioning strut insulator
US5553214A (en) System for delineating and annotating areal regions
Rausch et al. Stationary LIDAR sensors for indoor quadcopter localization
CN113902828A (en) Construction method of indoor two-dimensional semantic map with corner as key feature
CN114782357A (en) Self-adaptive segmentation system and method for transformer substation scene
Monsieurs et al. Collision avoidance and map construction using synthetic vision