CN113888691A - Method, device and storage medium for building scene semantic map construction


Info

Publication number
CN113888691A
Authority
CN
China
Prior art keywords
point cloud
building
unit
semantic
matching
Prior art date
Legal status
Pending
Application number
CN202010636372.9A
Other languages
Chinese (zh)
Inventor
孟浩
张建
侠宇威
曾泽江
Current Assignee
Shanghai Dajie Robot Technology Co ltd
Original Assignee
Shanghai Dajie Robot Technology Co ltd
Application filed by Shanghai Dajie Robot Technology Co ltd filed Critical Shanghai Dajie Robot Technology Co ltd
Priority: CN202010636372.9A
Publication of CN113888691A



Classifications

    • G06T 17/00 - Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 3/4038 - Scaling the whole image or part thereof, for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T 7/136 - Segmentation; edge detection involving thresholding
    • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T 2207/10028 - Range image; depth image; 3D point clouds
    • G06T 2210/04 - Architectural design, interior design

Abstract

The invention provides a method for constructing a semantic map of a building scene, which comprises: performing three-dimensional reconstruction of a building space; segmenting the point cloud obtained from the three-dimensional reconstruction and computing and recording features for each segmented shape; extracting window or door information; aligning the coordinate systems of the building model and the point cloud model and performing semantic matching; and finally performing semantic labeling to form a point cloud semantic map or a rasterized semantic map. The invention makes full use of the prior knowledge provided by the BIM and combines it with the scanned point cloud data to generate the semantic map. The method is simple and fast, offers good stability, and solves a problem of great difficulty and high application value with an unexpectedly simple approach.

Description

Method, device and storage medium for building scene semantic map construction
Technical Field
The invention relates to the technical field of semantic map construction for building scenes.
Background
Semantic maps are currently a very active topic in robotics. A semantic map enables a robot to genuinely understand its surroundings and to make decisions and act autonomously based on them, rather than relying only on patterns and algorithms preset by humans in advance.
For example, earlier robots only detected nearby obstacles with sensors and avoided them mechanically. With a semantic map, the robot can tell whether a nearby obstacle is a person or a table and react accordingly: if it is a person, the robot gives a prompt and, for safety, resumes its task only after the person has left; if it is a table, the robot simply goes around it.
Most existing semantic-map technologies combine deep learning with point cloud processing algorithms. These methods suffer from poor algorithmic stability, heavy computation, and high hardware cost; they are not yet mature and remain some distance from practical deployment.
Building scenes are constructed on the basis of a BIM (Building Information Model). Although the as-built dimensions deviate somewhat from the designed dimensions, as long as the building meets construction specifications the deviation is not large and can be guaranteed to stay within a certain range.
Disclosure of Invention
The invention provides a method, an apparatus and a storage medium for constructing a semantic map of a building scene, intended to solve at least one of the problems in the prior art.
In a first aspect, an embodiment of the invention provides a method for constructing a semantic map of a building scene, comprising: performing three-dimensional reconstruction of a building space; segmenting the point cloud obtained from the three-dimensional reconstruction and computing and recording features for each segmented shape; extracting window or door information; aligning the coordinate systems of the building model and the point cloud model and performing semantic matching; and finally performing semantic labeling to form a point cloud semantic map or a rasterized semantic map.
In this method, three-dimensional reconstruction of the building space means scanning and collecting point cloud data of the whole room with a 3D camera, another sensor capable of acquiring point clouds, or a scanning device. In a first realizable approach, a six-degree-of-freedom robot arm carries a 3D camera fixed at its end, and the camera captures point cloud data in real time. In a second realizable approach, a wheeled mobile robot carries the hardware platform of the first approach to collect point cloud data at a larger scale (the whole room).
In the first realizable approach, scanning and collecting point cloud data of the whole room for three-dimensional reconstruction further comprises the following steps (a sketch of this per-view pipeline is given after these steps):
a. to reduce computation, down-sample the scanned point cloud with a voxel filter, reducing the raw data volume;
b. transform the scanned point cloud from the scanning device coordinate system to the robot arm base coordinate system in preparation for point cloud stitching;
c. apply pass-through filtering and outlier filtering to the point cloud data again to remove noise and outliers;
d. move the robot arm to another adjacent point, capture the point cloud again, repeat steps a-c, and stitch it with the previously collected point cloud in the robot arm base coordinate system;
e. finish the three-dimensional reconstruction once all point clouds within the reachable range have been scanned and stitched.
This patent "near point", refer to two adjacent stay points of robot arm, two frames of point clouds can have partial coincidence, can just go together according to the concatenation of coordinate relation, the specific degree of closing on is according to the visual angle size of camera or scanning instrument.
In the second realizable approach, a wheeled mobile robot carries the hardware platform of the first approach to collect point cloud data at a larger scale (the whole room); this approach further comprises the following steps (a registration sketch follows the list):
1) after the robot arm has scanned all point clouds within its current reachable range, the chassis carries the arm to an adjacent position for the next arm scan, and steps a-e above are repeated;
2) after scanning, the point clouds are stitched again at the larger scale: all point clouds are first transferred to the room coordinate system according to the localization of the mobile chassis in the room, coarsely registered on the basis of that chassis localization, finely registered with the iterative closest point (ICP) algorithm, and stitched after registration;
3) three-dimensional reconstruction is complete once the mobile robot, carrying the arm, has scanned the point cloud of the whole room.
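A sketch of the coarse-to-fine registration in step 2), assuming Open3D's point-to-point ICP is used for the fine registration; T_room_base (the chassis localization expressed as a 4x4 room-frame pose) and the correspondence distance are assumptions for illustration.

    import numpy as np
    import open3d as o3d

    def register_scan(scan_base, T_room_base, room_map, max_corr_dist=0.05):
        """Coarse registration from the chassis localization, fine registration by ICP."""
        # coarse: transfer this station's cloud into the room frame via the chassis pose
        scan_room = scan_base.transform(T_room_base)
        if len(room_map.points) == 0:       # the first station simply seeds the map
            return scan_room
        # fine: ICP refines the residual localization error against the map so far
        result = o3d.pipelines.registration.registration_icp(
            scan_room, room_map, max_corr_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        scan_room.transform(result.transformation)
        return scan_room

    # stitching: room_map += register_scan(...) after each chassis stop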
In this method, point cloud segmentation of the three-dimensionally reconstructed point cloud comprises the following steps:
a. segment the three-dimensionally reconstructed point cloud with a random sample consensus (RANSAC) algorithm, extracting a different shape each pass as required;
b. apply outlier filtering to all segmented point cloud units to remove residual noise.
Common shapes in buildings are planes, cylinders and the like: walls, beams and square columns consist of one or several planes, while pipes and round columns consist of cylinders. The three-dimensionally reconstructed point cloud can therefore be segmented into a number of planes and cylinders, and other shapes can also be segmented as needed. For each cylinder, the three-dimensional coordinates of its origin (the center of one circular face may be chosen), its length and the radius of its circular face are computed and recorded. Planes in buildings are level, flat and vertical, but not necessarily rectangular; a plane may be a shape stitched from several rectangles, so every segmented plane is decomposed into one or several rectangular or near-rectangular planes. For each decomposed rectangle, the normal vector is estimated and the four vertex coordinates are found and recorded. Feature computation and recording are likewise performed for point clouds of other shapes. In this way all point cloud data are segmented and processed (a plane-segmentation sketch is given below).
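A minimal sketch of the iterative RANSAC plane extraction in steps a-b, assuming Open3D; the thresholds are placeholders, and the corners of an oriented bounding box are used only as a stand-in for the rectangle vertices described above.

    import numpy as np
    import open3d as o3d

    def extract_planes(pcd, max_planes=20, dist=0.02, min_points=500):
        """Iteratively peel planar segments off the reconstructed cloud with RANSAC."""
        rest, planes = pcd, []
        for _ in range(max_planes):
            if len(rest.points) < min_points:
                break
            model, inliers = rest.segment_plane(distance_threshold=dist,
                                                ransac_n=3, num_iterations=1000)
            if len(inliers) < min_points:
                break
            plane = rest.select_by_index(inliers)
            plane, _ = plane.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)  # step b
            normal = np.asarray(model[:3]) / np.linalg.norm(model[:3])
            # the corners of the (nearly flat) oriented bounding box stand in for
            # the four vertex coordinates of the rectangle
            corners = np.asarray(plane.get_oriented_bounding_box().get_box_points())
            planes.append({"cloud": plane, "normal": normal, "corners": corners})
            rest = rest.select_by_index(inliers, invert=True)
        return planes, rest   # 'rest' is then searched for cylinders and other shapes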
In this method, extracting window or door information comprises the following steps (a sketch follows the list):
a. extract the contour of each extracted planar point cloud;
b. perform Euclidean clustering on all extracted contours;
c. apply outlier filtering to the clustering result to remove its noise;
d. pick out small rectangular contours contained within a large planar contour, record the coordinates of their four vertices, and mark them as embedded components.
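A sketch of steps b-d under simplifying assumptions: contour extraction (step a, e.g. via a concave hull) is assumed to have been done upstream, scikit-learn's DBSCAN stands in for Euclidean clustering, and embedded components are approximated by axis-aligned rectangles lying inside the wall outline. The function name and thresholds are illustrative.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def find_embedded_components(contour_pts_2d, eps=0.05, min_pts=10, max_frac=0.5):
        """contour_pts_2d: Nx2 contour points of one wall plane, already projected
        into the plane's own 2D frame."""
        labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(contour_pts_2d)
        clusters = [contour_pts_2d[labels == k] for k in set(labels) if k != -1]
        if not clusters:
            return []
        # the largest contour is taken to be the outline of the wall plane itself
        clusters.sort(key=lambda c: np.ptp(c, axis=0).prod(), reverse=True)
        wall, rest = clusters[0], clusters[1:]
        wall_lo, wall_hi = wall.min(axis=0), wall.max(axis=0)
        components = []
        for c in rest:
            lo, hi = c.min(axis=0), c.max(axis=0)
            inside = np.all(lo >= wall_lo) and np.all(hi <= wall_hi)
            small = np.ptp(c, axis=0).prod() < max_frac * np.ptp(wall, axis=0).prod()
            if inside and small:
                # record the four vertices of the small rectangular contour
                components.append(np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                                            [hi[0], hi[1]], [lo[0], hi[1]]]))
        return components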
In this method, aligning the coordinate systems of the building model and the point cloud model and performing semantic matching comprises the following steps:
a. select a corner of the room and align the coordinate systems of the building model and the point cloud model;
b. traverse all rectangular planes obtained from point cloud segmentation and match them pairwise with the rectangular planes in the building model; one realizable matching rule is: the Euclidean distances between the four corresponding vertex coordinates of the two rectangles are smaller than a given threshold, and the angle between the normal vectors of the two rectangles is smaller than a given threshold;
c. traverse all embedded components and match them with rectangular planes in the building model; one realizable matching rule is the same as in b;
d. traverse all cylinders obtained from point cloud segmentation and match them pairwise with the cylinders in the building model; one realizable matching rule is: the Euclidean distance between the origin coordinates of the two cylinders is smaller than a given threshold, the height difference of the two cylinders is smaller than a given threshold, and the difference between the radii of their circular faces is smaller than a given threshold;
e. traverse point clouds of other shapes and match them with the building model until all point clouds are matched.
A given threshold generally refers to the maximum tolerated error between the architectural design model and the building as actually constructed. The thresholds described in this patent may be selected by those skilled in the art according to the technical solution provided here and the desired result (a sketch of the matching rules follows).
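A minimal NumPy sketch of the matching rules in steps b and d; the threshold values below are placeholders for the tolerance thresholds discussed above, and corresponding vertices are assumed to be listed in the same order in both rectangles.

    import numpy as np

    def rectangles_match(corners_a, corners_b, normal_a, normal_b,
                         dist_thresh=0.10, angle_thresh_deg=10.0):
        """Rule b: every corresponding vertex pair is closer than dist_thresh and
        the normals differ by less than angle_thresh_deg."""
        vertex_ok = np.all(np.linalg.norm(np.asarray(corners_a) - np.asarray(corners_b),
                                          axis=1) < dist_thresh)
        cosang = np.clip(abs(np.dot(normal_a, normal_b)), -1.0, 1.0)
        angle_ok = np.degrees(np.arccos(cosang)) < angle_thresh_deg
        return bool(vertex_ok and angle_ok)

    def cylinders_match(origin_a, origin_b, height_a, height_b, radius_a, radius_b,
                        dist_thresh=0.10, height_thresh=0.10, radius_thresh=0.05):
        """Rule d: origin distance, height difference and radius difference are all
        below their thresholds."""
        return (np.linalg.norm(np.asarray(origin_a) - np.asarray(origin_b)) < dist_thresh
                and abs(height_a - height_b) < height_thresh
                and abs(radius_a - radius_b) < radius_thresh)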
In this method, semantic labeling to form the point cloud semantic map comprises the following steps (a record-keeping sketch follows the list):
a. query the building model for the shape matched to the point cloud, query the category of that shape, and record the category;
b. assign an ID to every point cloud matched to the building model, then record the coordinates and size of that point cloud together with the relevant parameters of the corresponding building model component;
c. when the point cloud model is reloaded into the visualization interface, label the category, size and required parameters of each such point cloud in the point cloud map according to its ID, thereby forming the point cloud semantic map.
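One way the records of steps a-c might be organized, shown as a small Python data structure; the field names and example values are assumptions for illustration, not part of this patent.

    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class SemanticElement:
        elem_id: int                     # ID given to the matched point cloud unit (step b)
        category: str                    # category queried from the building model (step a)
        corners: np.ndarray              # coordinates / size of the point cloud unit
        bim_params: dict = field(default_factory=dict)  # e.g. {"material": "concrete"}

    # step c: on reloading, each unit is looked up by its ID and its category, size
    # and parameters are drawn at the recorded coordinates
    semantic_map = {}
    semantic_map[1] = SemanticElement(1, "window", np.zeros((4, 3)), {"material": "glass"})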
In this method, semantic labeling to form the rasterized semantic map means rasterizing the point cloud map first and then labeling it visually, thereby forming the rasterized semantic map.
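A minimal sketch of rasterizing a labeled point cloud into a 2D grid whose cells store a semantic label ID (0 meaning free space); the resolution and the label encoding are illustrative assumptions.

    import numpy as np

    def rasterize_semantic(points_xy, label_ids, resolution=0.05):
        """points_xy: Nx2 points projected to the floor plane; label_ids: N semantic IDs (> 0)."""
        origin = points_xy.min(axis=0)
        idx = np.floor((points_xy - origin) / resolution).astype(int)
        grid = np.zeros(idx.max(axis=0) + 1, dtype=np.int32)    # 0 = free space
        grid[idx[:, 0], idx[:, 1]] = label_ids                   # occupied cells keep their label
        return grid, origin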
After the point cloud semantic map or the rasterized semantic map has been formed, this method further comprises manual checking and correction:
a. manually check all automatically labeled coordinates and semantic information, verifying the coordinates and the matching accuracy;
b. manually correct any part of the map that is matched inaccurately, matched incorrectly, or left unmatched.
In a second aspect, an embodiment of the invention provides an apparatus for constructing a semantic map of a building scene, comprising: a three-dimensional reconstruction unit for performing three-dimensional reconstruction of a building space; a point cloud segmentation unit for segmenting the reconstructed point cloud and computing and recording features for each segmented shape; a unit for extracting window or door information; a semantic matching unit for aligning the coordinate systems of the building model and the point cloud model and performing semantic matching; and a semantic labeling unit for performing semantic labeling to form a point cloud semantic map or a rasterized semantic map.
In this apparatus, the three-dimensional reconstruction unit scans and collects point cloud data of the whole room with a 3D camera, another sensor capable of acquiring point clouds, or a scanner. One realizable mode comprises a six-degree-of-freedom robot arm with a 3D camera fixed at its end to capture point cloud data in real time; another realizable mode additionally uses a wheeled mobile robot carrying the hardware platform of the first mode to collect point cloud data at a larger scale (the whole room).
In combination with the three-dimensional reconstruction unit, one realizable mode of scanning and collecting point cloud data of the whole room with a 3D camera comprises:
a first unit for scanning the point cloud and down-sampling the scanned point cloud with a voxel filter to reduce the raw data volume;
a second unit for transforming the scanned point cloud from the coordinate system of the scanning element to the robot arm base coordinate system in preparation for point cloud stitching;
a third unit for applying pass-through filtering and outlier filtering to the point cloud data again to remove noise and outliers;
a fourth unit for capturing the point cloud again when the robot arm moves to another adjacent point, repeating the work of the first to third units, and stitching it with the previously collected point cloud in the robot arm base coordinate system;
a fifth unit for determining that scanning of all point clouds within the reachable range of the robot arm is finished.
In combination with the three-dimensional reconstruction unit, the second realizable mode of scanning and collecting point cloud data of the whole room with a 3D camera, another point cloud sensor, or a scanner comprises:
1) a progressive scanning unit for: after the robot arm has scanned all point clouds within its current reachable range, having the chassis carry the arm to an adjacent position for the next arm scan and repeating the work of the first to fifth units;
2) a stitching unit for: after scanning, stitching the point clouds again at the larger scale by first transferring all point clouds to the room coordinate system according to the localization of the mobile chassis in the room, coarsely registering them on the basis of that chassis localization, finely registering them with the iterative closest point (ICP) algorithm, and stitching them after registration;
3) a termination unit for determining that the three-dimensional reconstruction task is finished after the mobile robot, carrying the arm, has scanned the point cloud of the whole room.
With reference to the second aspect, in an embodiment of the apparatus for constructing a semantic map of a building scene provided by the invention, the point cloud segmentation unit comprises:
a segmentation unit for segmenting the three-dimensionally reconstructed point cloud with a random sample consensus (RANSAC) algorithm, extracting a different shape each pass as required;
a denoising unit for applying outlier filtering to all segmented point cloud units to remove residual noise.
In this apparatus, the unit for extracting window or door information comprises:
an extraction unit for extracting the contour of each extracted planar point cloud;
a clustering unit for performing Euclidean clustering on all extracted contours;
a filtering unit for applying outlier filtering to the clustering result to remove its noise;
a marking unit for picking out small rectangular contours contained within a large planar contour, recording the coordinates of their four vertices, and marking them as embedded components.
In this apparatus, the semantic matching unit comprises:
a coordinate determination unit for selecting a corner of the room and aligning the coordinate systems of the building model and the point cloud model;
a first matching unit for traversing all rectangular planes obtained from point cloud segmentation and matching them pairwise with rectangular planes in the building model; one realizable matching rule is: the Euclidean distances between the four corresponding vertex coordinates of the two rectangles are smaller than a given threshold, and the angle between the normal vectors of the two rectangles is smaller than a given threshold;
a second matching unit for traversing all embedded components and matching them with rectangular planes in the building model; one realizable matching rule is the same as for the first matching unit;
a third matching unit for traversing all cylinders obtained from point cloud segmentation and matching them pairwise with cylinders in the building model; one realizable matching rule is: the Euclidean distance between the origin coordinates of the two cylinders is smaller than a given threshold, the height difference of the two cylinders is smaller than a given threshold, and the difference between the radii of their circular faces is smaller than a given threshold;
a fourth matching unit for traversing point clouds of other shapes and matching them with the building model until all segmented point clouds are matched.
In this apparatus, the semantic labeling unit comprises:
a query unit for querying the building model for the shape matched to the point cloud, querying its category, and recording the category;
a labeling unit for assigning an ID to every point cloud matched to the building model and then recording the coordinates and size of that point cloud together with the relevant parameters of the corresponding building model component;
a semantic formation unit for: when the point cloud model is reloaded into the visualization interface, labeling the category, size and required parameters of each such point cloud in the point cloud map according to its ID, thereby forming the point cloud semantic map.
The semantic labeling unit further comprises a rasterization unit for rasterizing the point cloud map and then labeling it visually to form the rasterized semantic map.
The apparatus for constructing a semantic map of a building scene further comprises a manual checking and correction unit:
a checking unit for manually checking all automatically labeled coordinates and semantic information, verifying the coordinates and the matching accuracy;
a correction unit for manually correcting any part of the map that is matched inaccurately, matched incorrectly, or left unmatched.
In a third aspect, an embodiment of the invention provides an apparatus for constructing a semantic map of a building scene, comprising one or more processors and a storage device; the storage device stores one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the method for constructing a semantic map of a building scene according to any embodiment of the first aspect.
In a fourth aspect, an embodiment of the invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method for constructing a semantic map of a building scene according to any embodiment of the first aspect.
The invention has the following beneficial effects:
The embodiments of the invention make full use of the prior knowledge provided by the BIM and combine it with the scanned point cloud data to generate the semantic map. The approach is simple and fast, offers good stability, and solves a problem of great difficulty and high application value with an unexpectedly simple method.
The following description of embodiments of the invention is provided with reference to the accompanying drawings:
drawings
Fig. 1 is a structural block diagram of an apparatus for constructing a semantic map of a building scene according to an embodiment of the present application.
Fig. 2 is a flowchart of the three-dimensional reconstruction according to an embodiment of the present application.
Detailed Description
The specific embodiments described herein merely illustrate the principles of this patent and are not intended to limit its scope. It should be noted that, for convenience of description, the drawings show only the structures related to the technical solution of this disclosure, not all structures.
Before discussing the exemplary embodiments in more detail, it should be noted that, unless otherwise specified, the device components and/or modules mentioned in the embodiments are those that can be understood by, or are commercially available to, those skilled in the art in light of this disclosure.
1. Interpretation of point cloud maps and occupancy grid maps:
most of the conventional robot maps are point cloud maps or occupancy grid maps.
Point cloud map:
A point cloud is a set of unordered points carrying coordinate information. In the spatial coordinate system, a point at a given position indicates an obstacle there; no point means free space.
Occupancy grid map:
In a two-dimensional map each grid cell is a pixel, and in a three-dimensional map it is a voxel. If a cell is occupied, meaning an obstacle is present on it, the pixel or voxel is set to 1; if it is unoccupied, i.e. free space, it is set to 0.
A point cloud map is usually inconvenient to use directly for robot path planning, so all points are rasterized to form an occupancy grid map.
When the robot plans motion in the space, for safety the planned path should stay as far from the occupied cells as possible, while still considering optimality, so as not to collide with obstacles.
2. Comparing the semantic map with a point cloud or grid map:
the semantic map can be a point cloud map or an occupied grid map. But adds semantic information on the basis of these traditional maps.
The semantic information is to know not only whether a point exists at a certain position or whether a certain grid is occupied, but also which obstacle the point belongs to or the occupied grid belongs to. On a larger level, I can know whether a certain area is connected into a point cloud or whether an adjacent occupied grid is a wall, a pillar or a beam.
3. Application of the semantic map in the field of construction robots:
A construction robot is a robot arm, or a mobile chassis carrying a robot arm, that performs mechanized construction work; the work can include painting, putty spraying, polishing, plastering, floor tiling and the like.
Taking painting as an example, a robot painting the inside of a space must paint all walls, ceilings and columns completely, but must not paint windows, beams or pipes. With only a point cloud map or an occupancy grid map, the places not to be painted can only be masked with newspaper, the whole room painted over, and the newspaper then removed. With a semantic map, the robot knows which parts of the map, i.e. of its surroundings, are walls and which are windows, and therefore where it should and should not paint. The robot can then paint the required areas autonomously, without human assistance, while avoiding the areas that must not be painted.
Fig. 2 is the three-dimensional reconstruction flowchart provided in this embodiment; the whole room is reconstructed in three dimensions.
1. Hardware platform (either a or b can be used):
a. A six-degree-of-freedom robot arm with a 3D camera fixed at its end; the camera captures point cloud data in real time.
b. A wheeled mobile robot carrying the hardware platform of a to collect point cloud data at a larger scale (the whole room).
2. Performing three-dimensional reconstruction on the whole room:
2.1 The 3D camera at the end of the robot arm starts scanning the point cloud; the scanned point cloud is down-sampled with a voxel filter to reduce the raw data volume and hence the computation.
2.2 The scanned point cloud is transformed from the camera coordinate system to the robot arm base coordinate system in preparation for point cloud stitching; the arm base is fixed during scanning and can serve as the base coordinate system for stitching.
2.3 The point cloud data is again subjected to pass-through filtering and outlier filtering, mainly to remove noise and outliers; structures inside a building are generally blocky and large, so small outlying clusters should not exist.
2.4 The robot arm moves to another adjacent point, captures the point cloud again, repeats steps 2.1-2.3, and stitches it with the previously collected point cloud in the robot arm coordinate system.
2.5 The arm stops moving once it has finished scanning all point clouds within its reachable range.
For mode b of the hardware platform, after the arm has scanned all point clouds within its current reachable range, the chassis carries the arm to an adjacent position for the next arm scan and steps 2.1-2.5 are repeated. After scanning, the point clouds are stitched again at the larger scale: all point clouds are first transferred to the room coordinate system according to the localization of the mobile chassis in the room, coarsely registered on the basis of that chassis localization, finely registered with the iterative closest point (ICP) algorithm, and stitched after registration. Three-dimensional reconstruction is complete once the mobile robot, carrying the arm, has scanned the point cloud of the whole room.
3. Point cloud segmentation:
3.1 The three-dimensionally reconstructed point cloud is segmented with a random sample consensus (RANSAC) algorithm, extracting a different shape each pass. Common shapes in buildings include planes, cylinders and the like: walls, beams and square columns consist of one or several planes, while pipes and round columns consist of cylinders. In this process the reconstructed point cloud data is segmented into a number of planes and cylinders; other shapes can of course be segmented as needed.
Planes in buildings are level, flat and vertical, but not necessarily rectangular; a plane may be a shape stitched from several rectangles, so every segmented plane can be decomposed into one or several rectangular or near-rectangular planes.
3.2 Outlier filtering is applied to all segmented point cloud units to remove residual noise.
3.3 The normal vector of each decomposed rectangle is estimated, and its four vertex coordinates are found and recorded.
3.4 For each cylinder, the three-dimensional coordinates of its origin (the center of one circular face may be chosen), its length and the radius of its circular face are computed and the data recorded (a cylinder-feature sketch follows this subsection).
3.5 Feature computation and recording are performed for point clouds of other shapes.
3.6 All point cloud data are thus segmented and processed.
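A sketch, under simplifying assumptions, of the cylinder features in step 3.4: the axis is estimated with PCA, the length is the extent of the points along that axis, the radius is the mean distance of the points from the axis, and the origin is the center of one circular face. This is an illustrative estimate rather than a fitting method prescribed by this patent.

    import numpy as np

    def cylinder_features(points):
        """points: Nx3 point cloud of one segmented cylinder."""
        centroid = points.mean(axis=0)
        # PCA: the dominant principal direction approximates the cylinder axis
        _, _, vt = np.linalg.svd(points - centroid)
        axis = vt[0]
        t = (points - centroid) @ axis              # coordinate of each point along the axis
        length = t.max() - t.min()
        radial = (points - centroid) - np.outer(t, axis)
        radius = np.linalg.norm(radial, axis=1).mean()
        origin = centroid + t.min() * axis          # center of one circular face
        return origin, length, radius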
4. Extracting information of a window or a door:
because the window is embedded in the wall, and no blank area of the point cloud data exists, the window is special and needs special treatment.
4.1, extracting the outline of the extracted plane point cloud.
4.2 Euclidean clustering is carried out on all the extracted contours.
4.3 carrying out outlier filtering on the clustering result to eliminate the noise.
4.4 picking out the small rectangular outline contained in the large plane outline range, and recording the coordinates of the four vertexes thereof, and marking as a mosaic component.
5. Semantic matching:
5.1 Align the coordinate systems of the building model and the point cloud model; a corner of the room may be selected for this.
5.2 Traverse all rectangular planes obtained from point cloud segmentation and match them pairwise with the rectangular planes in the building model; one realizable matching rule is: the Euclidean distances between the four corresponding vertex coordinates of the two rectangles are smaller than a given threshold, and the angle between the normal vectors of the two rectangles is smaller than a given threshold.
5.3 Traverse all embedded components and match them with rectangular planes in the building model; the matching rule is the same as in 5.2.
5.4 Traverse all cylinders obtained from point cloud segmentation and match them pairwise with the cylinders in the building model; one realizable matching rule is: the Euclidean distance between the origin coordinates of the two cylinders is smaller than a given threshold, the height difference of the two cylinders is smaller than a given threshold, and the difference between the radii of their circular faces is smaller than a given threshold.
5.5 Traverse point clouds of other shapes and match them with the building model.
5.6 Matching of all segmented point clouds is complete.
6. Semantic annotation:
6.1 Query the building model for the shape matched to the point cloud, query its category, and record the category.
6.2 Assign an ID to every point cloud matched to the building model, then record the coordinates and size of that point cloud, together with the relevant parameters of the corresponding building model component (such as material and other related information).
6.3 When the point cloud model is reloaded into the visualization interface, all required information for each point cloud, such as its category, size and material (the user may also choose to add other required building-related information, for example the material of the building elements), is labeled in the point cloud map at its coordinate position according to its ID, thereby forming the point cloud semantic map.
6.4 Alternatively, the point cloud map can be rasterized first and then labeled visually, thereby forming a rasterized semantic map.
7. Manual check and correction:
7.1 Manually check all automatically labeled coordinates and semantic information, verifying the coordinates and the matching accuracy.
7.2 Manually correct any part of the map that is matched inaccurately, matched incorrectly, or left unmatched.
In this technical implementation, the prior knowledge provided by the BIM is fully utilized and combined with the scanned point cloud data to generate the semantic map; the approach is simple, fast and highly stable.
FIG. 1 is a structural block diagram of an embodiment of an apparatus for constructing a semantic map of a building scene. As shown in Fig. 1, the apparatus provided by the invention comprises a three-dimensional reconstruction unit 10 for performing three-dimensional reconstruction of a building space; a point cloud segmentation unit 20 for segmenting the reconstructed point cloud and computing and recording features for each segmented shape; an information extraction unit 30 for extracting window or door information; a semantic matching unit 40 for aligning the coordinate systems of the building model and the point cloud model and performing semantic matching; and a semantic labeling unit 50 for performing semantic labeling to form a point cloud semantic map or a rasterized semantic map.
The three-dimensional reconstruction unit scans and collects point cloud data of the whole room. One realizable mode comprises a six-degree-of-freedom robot arm with a 3D camera, another sensor capable of acquiring point clouds, or a scanner fixed at its end to capture point cloud data in real time; another realizable mode additionally uses a wheeled mobile robot carrying the hardware platform of the first mode to collect point cloud data at a larger scale (the whole room).
In combination with the three-dimensional reconstruction unit, one realizable mode comprises:
a first unit for scanning the point cloud and down-sampling the scanned point cloud with a voxel filter to reduce the raw data volume, and hence the computation;
a second unit for transforming the scanned point cloud from the coordinate system of the scanning element to the robot arm base coordinate system in preparation for point cloud stitching;
a third unit for applying pass-through filtering and outlier filtering to the point cloud data again to remove noise and outliers;
a fourth unit for capturing the point cloud again when the robot arm moves to another adjacent point, repeating the work of the first to third units, and stitching it with the previously collected point cloud in the robot arm coordinate system;
a fifth unit for determining that the robot arm has finished scanning all point clouds within its reachable range and stopping its motion.
In combination with the three-dimensional reconstruction unit, the second realizable mode of collecting point cloud data of the whole room by 3D-camera scanning comprises:
1) a progressive scanning unit for: after the robot arm has scanned all point clouds within its current reachable range, having the chassis carry the arm to an adjacent position for the next arm scan and repeating the work of the first to fifth units;
2) a stitching unit for: after scanning, stitching the point clouds again at the larger scale by first transferring all point clouds to the room coordinate system according to the localization of the mobile chassis in the room, coarsely registering them on the basis of that chassis localization, finely registering them with the iterative closest point (ICP) algorithm, and stitching them after registration;
3) a termination unit for determining that the three-dimensional reconstruction task is finished after the mobile robot, carrying the arm, has scanned the point cloud of the whole room.
The point cloud segmentation unit comprises:
a segmentation unit for segmenting the three-dimensionally reconstructed point cloud with a random sample consensus (RANSAC) algorithm, extracting a different shape each pass as required;
a denoising unit for applying outlier filtering to all segmented point cloud units to remove residual noise.
The unit for extracting window or door information comprises:
an extraction unit for extracting the contour of each extracted planar point cloud;
a clustering unit for performing Euclidean clustering on all extracted contours;
a filtering unit for applying outlier filtering to the clustering result to remove its noise;
a marking unit for picking out small rectangular contours contained within a large planar contour, recording the coordinates of their four vertices, and marking them as embedded components.
The semantic matching unit comprises:
a coordinate determination unit for aligning the coordinate systems of the building model and the point cloud model, for which a corner of the room may be selected;
a first matching unit for: traversing all rectangular planes obtained from point cloud segmentation and matching them pairwise with rectangular planes in the building model, using the following matching rule: the Euclidean distances between the four corresponding vertex coordinates of the two rectangles are smaller than a given threshold, and the angle between the normal vectors of the two rectangles is smaller than a given threshold;
a second matching unit for: traversing all embedded components and matching them with rectangular planes in the building model, using the same matching rule as the first matching unit;
a third matching unit for: traversing all cylinders obtained from point cloud segmentation and matching them pairwise with cylinders in the building model, using the following matching rule: the Euclidean distance between the origin coordinates of the two cylinders is smaller than a given threshold, the height difference of the two cylinders is smaller than a given threshold, and the difference between the radii of their circular faces is smaller than a given threshold;
a fourth matching unit for traversing point clouds of other shapes and matching them with the building model until all segmented point clouds are matched.
The semantic labeling unit comprises:
a query unit for querying the building model for the shape matched to the point cloud, querying its category, and recording the category;
a labeling unit for: assigning an ID to every point cloud matched to the building model, recording the coordinates and size of that point cloud, and recording the relevant parameters of the corresponding building model component (such as material and other related information);
a semantic formation unit for: when the point cloud model is reloaded into the visualization interface, labeling the required information for each such point cloud, such as its category, size and material (the user may also choose to add other building-related information, for example the material of the building elements), in the point cloud map at its coordinate position according to its ID, thereby forming the point cloud semantic map.
The semantic labeling unit further comprises a rasterization unit for rasterizing the point cloud map and then labeling it visually to form the rasterized semantic map.
In a further preferred embodiment, the apparatus for constructing a semantic map of a building scene further comprises a manual checking and correction unit:
a checking unit for manually checking all automatically labeled coordinates and semantic information, verifying the coordinates and the matching accuracy;
a correction unit for manually correcting any part of the map that is matched inaccurately, matched incorrectly, or left unmatched.
The functions and application examples of each unit of the apparatus for constructing a semantic map of a building scene are described above.
In one implementation, an embodiment of the invention further provides an apparatus for constructing a semantic map of a building scene, comprising one or more processors and a storage device; the storage device stores one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement any optional method for constructing a semantic map of a building scene described in the invention. A communication interface may also be included for communicating with other devices or a communication network.
In another aspect, an embodiment of the invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements any of the above methods for constructing a semantic map of a building scene provided by the invention.
Those skilled in the art will appreciate that all or part of the steps for implementing the methods of the above embodiments may be completed by hardware instructed by a program; the program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method or a combination thereof.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (19)

1. A method for constructing a semantic map of a building scene, comprising: performing three-dimensional reconstruction of a building space; segmenting the point cloud obtained from the three-dimensional reconstruction and computing and recording features for each segmented shape; extracting window or door information; aligning the coordinate systems of a building model and the point cloud model and performing semantic matching; and finally performing semantic labeling to form a point cloud semantic map or a rasterized semantic map.
2. The method for constructing a semantic map of a building scene according to claim 1, characterized in that: the three-dimensional reconstruction of the building space refers to collecting point cloud data of the whole room by scanning with a 3D camera carried by a movable robot arm.
3. The method for constructing a semantic map of a building scene according to claim 1, characterized in that the three-dimensional reconstruction of the building space comprises the following steps:
a. scanning and collecting point clouds of the whole room, and down-sampling the scanned point cloud with a voxel filter to reduce the raw data volume;
b. transforming the scanned point cloud from the scanning device coordinate system to the robot arm base coordinate system in preparation for point cloud stitching;
c. applying pass-through filtering and outlier filtering to the point cloud data again to remove noise and outliers;
d. moving the robot arm to another adjacent point, capturing the point cloud again, repeating steps a-c, and stitching it with the previously collected point cloud in the robot arm coordinate system; and finishing once all point clouds within the reachable range have been scanned and stitched.
4. The method for constructing a semantic map of a building scene according to claim 1, characterized in that the point cloud segmentation of the three-dimensionally reconstructed point cloud comprises the following steps:
a. segmenting the three-dimensionally reconstructed point cloud with a random sample consensus algorithm, extracting a different shape each pass as required;
b. applying outlier filtering to all segmented point cloud units to remove residual noise.
5. The method for constructing a semantic map of a building scene according to claim 4, characterized in that: the segmented shapes are classified into plane, cylinder, rectangle and other point clouds; for each cylinder, the three-dimensional coordinates of its origin, its length and the radius of its circular face are computed and recorded; for each decomposed rectangle, the normal vector is estimated and the four vertex coordinates are found and recorded; and feature computation and recording are performed for point clouds of other shapes.
6. The method for constructing a semantic map of a building scene according to claim 1, characterized in that the extraction of window or door information comprises the following steps:
a. extracting the contour of each extracted planar point cloud;
b. performing Euclidean clustering on all extracted contours;
c. applying outlier filtering to the clustering result to remove its noise;
d. picking out small rectangular contours contained within a large planar contour, recording the coordinates of their four vertices, and marking them as embedded components.
7. The method for constructing a semantic map of a building scene according to claim 4, characterized in that the alignment of the coordinate systems of the building model and the point cloud model and the semantic matching comprise the following steps:
a. selecting a corner of the room and aligning the coordinate systems of the building model and the point cloud model;
b. traversing all rectangular planes obtained from point cloud segmentation and matching them pairwise with the rectangular planes in the building model according to the following matching rule: the Euclidean distances between the four corresponding vertex coordinates of the two rectangles are smaller than a given threshold, and the angle between the normal vectors of the two rectangles is smaller than a given threshold;
c. traversing all embedded components and matching them with rectangular planes in the building model according to the matching rule of step b;
d. traversing all cylinders obtained from point cloud segmentation and matching them pairwise with the cylinders in the building model according to the following matching rule: the Euclidean distance between the origin coordinates of the two cylinders is smaller than a given threshold, the height difference of the two cylinders is smaller than a given threshold, and the difference between the radii of their circular faces is smaller than a given threshold;
e. traversing point clouds of other shapes and matching them with the building model until all segmented point clouds are matched.
8. The method for constructing a semantic map of a building scene according to claim 1, characterized in that the semantic labeling to form the point cloud semantic map comprises the following steps:
a. querying the building model for the shape matched to the point cloud, querying the category of that shape, and recording the category;
b. assigning an ID to every point cloud matched to the building model, then recording the coordinates and size of that point cloud together with the relevant parameters of the corresponding building model component;
c. when the point cloud model is reloaded into the visualization interface, labeling the category, size and required parameters of each such point cloud in the point cloud map according to its ID, thereby forming the point cloud semantic map.
9. The method for constructing a semantic map of a building scene according to claim 1, characterized in that, after the point cloud semantic map or the rasterized semantic map has been formed, the method further comprises manual checking and correction:
a. manually checking all automatically labeled coordinates and semantic information, verifying the coordinates and the matching accuracy;
b. manually correcting any part of the map that is matched inaccurately, matched incorrectly, or left unmatched.
10. An apparatus for constructing a semantic map of a building scene, characterized by comprising: a three-dimensional reconstruction unit for performing three-dimensional reconstruction of a building space; a point cloud segmentation unit for segmenting the reconstructed point cloud and computing and recording features for each segmented shape; a unit for extracting window or door information; a semantic matching unit for aligning the coordinate systems of the building model and the point cloud model and performing semantic matching; and a semantic labeling unit for performing semantic labeling to form a point cloud semantic map or a rasterized semantic map.
11. The apparatus for constructing a semantic map of a building scene according to claim 10, characterized in that: the three-dimensional reconstruction unit is used for scanning and collecting point cloud data of the whole room and comprises a robot arm with a 3D camera, another sensor capable of acquiring point clouds, or a scanner fixed at its end to capture point cloud data in real time; or it further comprises a hardware platform in which a wheeled mobile robot carries the scanning device to collect point cloud data of the whole room.
12. The apparatus for constructing a semantic map of a building scene according to claim 10, characterized in that the three-dimensional reconstruction unit comprises:
a first unit for scanning and collecting point cloud data and down-sampling the scanned point cloud with a voxel filter to reduce the raw data volume;
a second unit for transforming the scanned point cloud from the coordinate system of the scanning element to the robot arm base coordinate system in preparation for point cloud stitching;
a third unit for applying pass-through filtering and outlier filtering to the point cloud data again to remove noise and outliers;
a fourth unit for capturing the point cloud again when the robot arm moves to another adjacent point, repeating the work of the first to third units, and stitching it with the previously collected point cloud in the robot arm base coordinate system;
a fifth unit for determining that scanning of all point clouds within the reachable range of the robot arm is finished.
13. The apparatus for constructing a semantic map of a building scene according to claim 10, characterized in that the point cloud segmentation unit comprises:
a segmentation unit for segmenting the three-dimensionally reconstructed point cloud with a random sample consensus algorithm, extracting a different shape each pass as required;
a denoising unit for applying outlier filtering to all segmented point cloud units to remove residual noise.
14. The device for constructing a semantic map of a building scene according to claim 10, wherein the information extraction unit for the window or door comprises:
an extraction unit for extracting the contour of each extracted planar point cloud;
a clustering unit for performing Euclidean clustering on all extracted contours;
a filtering unit for applying outlier filtering to the clustering results to remove their noise;
and a marking unit for selecting a small rectangular contour contained within the range of a large planar contour, recording the coordinates of its four vertices, and marking it as an embedded component.
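One simple way to realise the marking unit's test, a small rectangular contour lying inside a large planar contour, is a bounds-containment check after projecting both contours onto the wall plane. The sketch below is only an assumed illustration; the 2D projection step and the margin parameter are not specified in the claim.

```python
import numpy as np

def is_embedded_component(outer_2d, inner_2d, margin=0.05):
    # Both contours are Nx2 arrays already projected onto the wall plane.
    # The inner contour counts as a door/window candidate if its bounds lie
    # strictly inside the outer plane contour's bounds.
    o_min, o_max = outer_2d.min(axis=0), outer_2d.max(axis=0)
    i_min, i_max = inner_2d.min(axis=0), inner_2d.max(axis=0)
    return bool(np.all(i_min >= o_min + margin) and np.all(i_max <= o_max - margin))

# For an accepted inner rectangle, the four corner coordinates are recorded and
# the segment is labelled as an embedded component for later matching.
```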
15. The device for constructing a semantic map of a building scene according to claim 10, wherein the semantic matching unit comprises:
a coordinate determination unit for selecting a corner of the room and aligning the coordinate systems of the building model and the point cloud model to it;
a first matching unit for traversing all rectangular planes obtained from point cloud segmentation and matching them pairwise against the rectangular planes in the building model, the matching criterion being that the Euclidean distance between each pair of corresponding vertex coordinates of the two rectangles is smaller than a given threshold and the angle between the normal vectors of the two rectangles is smaller than a given threshold;
a second matching unit for traversing all embedded components and matching them against the rectangular planes in the building model, using the same criterion as the first matching unit;
a third matching unit for traversing all cylinders obtained from point cloud segmentation and matching them pairwise against the cylinders in the building model, the matching criterion being that the Euclidean distance between the origin coordinates of the two cylinders is smaller than a given threshold, the difference between their heights is smaller than a given threshold, and the difference between the radii of their circular faces is smaller than a given threshold;
and a fourth matching unit for traversing the point clouds of the remaining shapes and matching them against the building model until all segmented point clouds have been matched.
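The pairwise tests used by the first and third matching units reduce to simple threshold comparisons. A minimal sketch follows; it assumes the four rectangle corners are supplied in corresponding order and that each cylinder is described by an origin, height and radius, with all thresholds chosen arbitrarily for illustration.

```python
import numpy as np

def rectangles_match(corners_a, corners_b, normal_a, normal_b,
                     dist_thresh=0.05, angle_thresh_deg=5.0):
    # corners_*: 4x3 arrays of corresponding vertices; normal_*: 3-vectors.
    dists = np.linalg.norm(corners_a - corners_b, axis=1)
    cos_angle = abs(normal_a @ normal_b /
                    (np.linalg.norm(normal_a) * np.linalg.norm(normal_b)))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return bool(np.all(dists < dist_thresh) and angle < angle_thresh_deg)

def cylinders_match(cyl_a, cyl_b, dist_thresh=0.05, h_thresh=0.05, r_thresh=0.02):
    # cyl_*: dicts with 'origin' (3-vector), 'height' and 'radius'.
    origin_ok = np.linalg.norm(np.asarray(cyl_a["origin"]) -
                               np.asarray(cyl_b["origin"])) < dist_thresh
    return bool(origin_ok
                and abs(cyl_a["height"] - cyl_b["height"]) < h_thresh
                and abs(cyl_a["radius"] - cyl_b["radius"]) < r_thresh)
```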
16. The device for constructing a semantic map of a building scene according to claim 10, wherein the semantic labeling unit comprises:
a query unit for looking up the shape in the building model that matches a point cloud, querying the category of the corresponding building model component, and recording it;
a marking unit for assigning an ID to every point cloud matched with the building model and then recording the coordinates and size of the point cloud together with the relevant parameters of the corresponding building model component;
and a semantic formation unit for, when the point cloud model is reloaded into the visualization interface, labeling the category, size and other required parameters of each such point cloud in the point cloud map according to its ID, thereby forming the point cloud semantic map.
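The record kept by the marking and semantic formation units could look like the following dictionary; every field name and value here is hypothetical, chosen only to show how an ID ties a point cloud segment to the category and parameters of its BIM component.

```python
# Hypothetical per-segment annotation record (illustrative field names and values).
annotation = {
    "id": 17,                                  # ID assigned to the matched point cloud
    "category": "window",                      # category looked up from the BIM component
    "corners": [[0.9, 0.0, 1.1], [2.1, 0.0, 1.1],
                [2.1, 0.0, 2.3], [0.9, 0.0, 2.3]],   # recorded segment coordinates
    "size": [1.2, 1.2],                        # recorded width and height
    "bim_params": {"frame": "aluminium"},      # relevant BIM component parameters
}
# When the point cloud model is reloaded into the viewer, records are looked up by
# "id" and rendered as labels, producing the point cloud semantic map.
```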
17. The device for constructing a semantic map of a building scene according to claim 10, further comprising a manual checking and correction unit, which comprises:
a checking unit for manually checking all automatically labeled coordinates and semantic information and verifying the accuracy of the coordinates and of the matching;
and a correction unit for manually correcting entries in the map that are inaccurately matched, wrongly matched or unmatched.
18. An apparatus for constructing a semantic map of a building scene, comprising one or more processors and a storage device for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for constructing a semantic map of a building scene according to any one of claims 1 to 9.
19. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method for constructing a semantic map of a building scene according to any one of claims 1 to 9.
CN202010636372.9A 2020-07-03 2020-07-03 Method, device and storage medium for building scene semantic map construction Pending CN113888691A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010636372.9A CN113888691A (en) 2020-07-03 2020-07-03 Method, device and storage medium for building scene semantic map construction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010636372.9A CN113888691A (en) 2020-07-03 2020-07-03 Method, device and storage medium for building scene semantic map construction

Publications (1)

Publication Number Publication Date
CN113888691A true CN113888691A (en) 2022-01-04

Family ID=79013286

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010636372.9A Pending CN113888691A (en) 2020-07-03 2020-07-03 Method, device and storage medium for building scene semantic map construction

Country Status (1)

Country Link
CN (1) CN113888691A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463411A (en) * 2022-01-19 2022-05-10 无锡学院 Target volume, mass and density measuring method based on three-dimensional camera
CN114463411B (en) * 2022-01-19 2023-02-28 无锡学院 Target volume, mass and density measuring method based on three-dimensional camera
CN116416586A (en) * 2022-12-19 2023-07-11 香港中文大学(深圳) Map element sensing method, terminal and storage medium based on RGB point cloud
CN116416586B (en) * 2022-12-19 2024-04-02 香港中文大学(深圳) Map element sensing method, terminal and storage medium based on RGB point cloud
CN115655262A (en) * 2022-12-26 2023-01-31 广东省科学院智能制造研究所 Deep learning perception-based multi-level semantic map construction method and device
CN116246069A (en) * 2023-02-07 2023-06-09 北京四维远见信息技术有限公司 Method and device for self-adaptive terrain point cloud filtering, intelligent terminal and storage medium
CN116246069B (en) * 2023-02-07 2024-01-16 北京四维远见信息技术有限公司 Method and device for self-adaptive terrain point cloud filtering, intelligent terminal and storage medium
CN117213469A (en) * 2023-11-07 2023-12-12 中建三局信息科技有限公司 Synchronous positioning and mapping method, system, equipment and storage medium for subway station hall
CN117274353A (en) * 2023-11-20 2023-12-22 光轮智能(北京)科技有限公司 Synthetic image data generating method, control device and readable storage medium
CN117274353B (en) * 2023-11-20 2024-02-20 光轮智能(北京)科技有限公司 Synthetic image data generating method, control device and readable storage medium

Similar Documents

Publication Publication Date Title
CN113888691A (en) Method, device and storage medium for building scene semantic map construction
CN104536445B (en) Mobile navigation method and system
Hong et al. Semi-automated approach to indoor mapping for 3D as-built building information modeling
US9811714B2 (en) Building datum extraction from laser scanning data
Prieto et al. As-is building-structure reconstruction from a probabilistic next best scan approach
CN115290097B (en) BIM-based real-time accurate map construction method, terminal and storage medium
Bassier et al. Standalone terrestrial laser scanning for efficiently capturing AEC buildings for as-built BIM
Cefalu et al. Image based 3D Reconstruction in Cultural Heritage Preservation.
Iocchi et al. Building 3d maps with semantic elements integrating 2d laser, stereo vision and imu on a mobile robot
CN111609854A (en) Three-dimensional map construction method based on multiple depth cameras and sweeping robot
Sheng et al. Mobile robot localization and map building based on laser ranging and PTAM
Yoon et al. Practical implementation of semi-automated as-built BIM creation for complex indoor environments
Bassier et al. Evaluation of data acquisition techniques and workflows for Scan to BIM
Shukor et al. 3d modeling of indoor surfaces with occlusion and clutter
Tiozzo Fasiolo et al. Combining LiDAR SLAM and deep learning-based people detection for autonomous indoor mapping in a crowded environment
Nardinocchi et al. Building extraction from LIDAR data
Nakagawa et al. Topological 3D modeling using indoor mobile LiDAR data
Khoshelham et al. Registering point clouds of polyhedral buildings to 2D maps
CN113886903A (en) Method, device and storage medium for constructing global SLAM map
Alboul et al. A system for reconstruction from point clouds in 3D: Simplification and mesh representation
Nüchter et al. Consistent 3D model construction with autonomous mobile robots
Fairfield et al. Evidence grid-based methods for 3D map matching
Wichmann et al. Joint simultaneous reconstruction of regularized building superstructures from low-density LIDAR data using icp
CN117058358B (en) Scene boundary detection method and mobile platform
Hwang et al. Surface estimation ICP algorithm for building a 3D map by a scanning LRF

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination