CN114219909A - Three-dimensional reconstruction method and related device - Google Patents
- Publication number: CN114219909A
- Application number: CN202111320898.7A
- Authority: CN (China)
- Prior art keywords: point, wall, point cloud, dimensional, image
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T17/10—Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
- G06F18/23—Clustering techniques
- G06T3/06—Topological mapping of higher dimensional structures onto lower dimensional surfaces
- G06T5/30—Erosion or dilatation, e.g. thinning
- G06T7/13—Edge detection
- G06T7/181—Segmentation; edge detection involving edge growing; involving edge linking
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T2207/10028—Range image; depth image; 3D point clouds
- G06T2207/20068—Projection on vertical or horizontal image axis
- G06T2207/20116—Active contour; active surface; snakes
- G06T2207/20224—Image subtraction
Abstract
The application discloses a three-dimensional reconstruction method and a related device. The method comprises: obtaining three-dimensional point cloud data containing a room to be modeled, and performing two-dimensional projection on the three-dimensional point cloud data to form a first image; performing room segmentation on the first image, and segmenting the three-dimensional point cloud data according to the room segmentation result of the first image to obtain a first point cloud for each room, wherein the first point cloud comprises a wall point cloud and a ceiling point cloud; performing two-dimensional projection on a denoised slice of the wall point cloud close to the ceiling point cloud to obtain a second image comprising a plurality of wall data points; obtaining wall boundary lines from the wall data points, and obtaining a closed polygon from the wall boundary lines according to endpoint support and line segment support; and stretching the closed polygon to obtain a three-dimensional model of the room. This design guarantees the accuracy of wall reconstruction and thus yields an accurate three-dimensional reconstruction model of the room.
Description
Technical Field
The present disclosure relates to the field of point cloud processing technologies, and in particular, to a three-dimensional reconstruction method and a related apparatus.
Background
Indoor three-dimensional models play an important role in many fields, such as games, tourism, and location and navigation services in public buildings. As expectations for product quality and user experience rise, demands on the update speed of indoor environment models and on the level of spatial detail keep growing, making research on automatic indoor model reconstruction highly necessary. In addition, advances in laser scanning systems have made point cloud data widely available, turning it into one of the main data sources for building modeling. Compared with two-dimensional images, three-dimensional point clouds can express the geometric detail of an object and are therefore better suited to modeling work.
Building reconstruction from point clouds can be divided into reconstruction from outdoor scan data and reconstruction from indoor scan data. Reconstructing indoor scenes is more challenging than reconstructing outdoor ones: indoor scenes are typically far more cluttered, and the acquired point cloud data suffers from severe occlusion. Although point-cloud-based indoor three-dimensional reconstruction has developed rapidly, automatically reconstructing a parameterized building model from a cluttered indoor environment remains difficult, and the accuracy of the reconstructed interior walls cannot be guaranteed. A new three-dimensional reconstruction method is therefore needed to solve these problems.
Disclosure of Invention
The technical problem mainly solved by the application is to provide a three-dimensional reconstruction method and a related device, which can ensure the accuracy of reconstruction of the inner wall of a room.
In order to solve the technical problem, the application adopts a technical scheme that: provided is a three-dimensional reconstruction method including: the method comprises the steps of obtaining three-dimensional point cloud data including a room to be modeled, and performing two-dimensional projection on the three-dimensional point cloud data to form a first image; performing room segmentation on the first image, and segmenting the three-dimensional point cloud data according to the room segmentation result of the first image to obtain first point clouds of all rooms; wherein the first point cloud comprises a wall point cloud and a ceiling point cloud; carrying out two-dimensional projection on the denoised slice of the wall point cloud close to the ceiling point cloud to obtain a second image; wherein the second image includes a plurality of wall data points therein; obtaining a wall boundary line from the wall data points, and obtaining a closed polygon from the wall boundary line according to the end point support degree and the line segment support degree; and stretching the closed polygon to obtain a three-dimensional model of the room.
Wherein the step of obtaining a wall boundary line from the wall data points and obtaining a closed polygon from the wall boundary line according to an endpoint support and a line segment support comprises: extracting a straight line from each wall data point, and combining all the straight lines to obtain at least one straight line set; wherein the set of straight lines comprises at least one straight line belonging to the same wall; intersecting straight lines in the straight line set to obtain the wall boundary line; screening out an optimal line segment from the wall boundary line according to the end point support degree and the line segment support degree; the end point support degree is the number of line segments with the line segment support degree larger than zero and connected with each end point, and the line segment support degree is the product of the reliability weighted value of each internal point in the bounding box and the point cloud density of all the internal points in the bounding box; deleting redundant line segments other than the optimal line segment and complementing missing parts of the optimal line segment to obtain the closed polygon; wherein each endpoint support of the closed polygon is two.
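The closure criterion above (every endpoint of the final closed polygon must have a support of exactly two retained wall segments) can be sketched as a degree check on the segment graph. This is a minimal illustration; the function name and the tuple-based segment representation are assumptions, not from the patent:

```python
from collections import Counter

def is_closed_polygon(segments):
    """Check the closure criterion: in a closed polygon every endpoint
    has support (degree) exactly two, i.e. exactly two retained wall
    segments meet at each corner. Endpoints are hashable tuples;
    segments are (endpoint_a, endpoint_b) pairs."""
    degree = Counter()
    for a, b in segments:
        degree[a] += 1
        degree[b] += 1
    return all(d == 2 for d in degree.values())
```

A polygon missing one side fails the check, which is what triggers the "complement missing parts" step of the claim.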
Wherein, before the step of screening out the optimal line segment from the wall boundary line according to the endpoint support degree and the line segment support degree, the method comprises the following steps: obtaining a bounding box by using the end point of the boundary line of the wall, wherein the bounding box comprises at least one inner point; obtaining a ratio between a first distance between the inner point and the wall boundary line and a first threshold value, and a difference value between one and the ratio value, obtaining a product between the difference value and a covariance matrix of the inner point, and taking a sum of products of all the inner points in the bounding box as the reliability weighted value; wherein the first distance is less than or equal to the first threshold.
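As a rough illustration of the reliability-weighted support described above, the sketch below computes a simplified scalar version: each in-box point contributes a weight of (1 - d/t), where d is its distance to the candidate wall line and t the first threshold, and the summed weight is multiplied by the point density of the bounding box. The per-point covariance-matrix factor of the claim is dropped for brevity, and all names and parameters are hypothetical:

```python
import numpy as np

def segment_support(inner_pts, line_p, line_dir, dist_thresh, box_area):
    """Simplified scalar sketch of the patent's line segment support:
    summed reliability weights (1 - d/t) of the in-box points, times
    the point density inside the bounding box."""
    line_dir = line_dir / np.linalg.norm(line_dir)
    n = np.array([-line_dir[1], line_dir[0]])       # unit normal of the line
    d = np.abs((inner_pts - line_p) @ n)            # point-to-line distances
    w = np.clip(1.0 - d / dist_thresh, 0.0, None)   # zero weight beyond the threshold
    density = len(inner_pts) / box_area
    return w.sum() * density
```

Points hugging the candidate line thus yield a much larger support than the same number of points scattered across the box, which is what lets the screening step prefer true wall segments.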
Before the step of performing two-dimensional projection on the denoised slice of the wall point cloud close to the ceiling point cloud to obtain a second image, the method comprises the following steps: segmenting a first point cloud of the rooms to obtain a wall point cloud in each of the rooms; performing angle slicing on the wall point cloud according to a preset angle interval to obtain a second distance between the wall point cloud and the central point of all the first point clouds in each room; performing distance slicing on the wall point cloud according to the second distance and a preset distance interval; and removing sparse noise points and point clouds representing a horizontal plane in each room by using the number of the wall point clouds subjected to double slicing.
Wherein the step of stretching the closed polygon to obtain the three-dimensional model of the room is followed by: extracting a second point cloud in the three-dimensional model; wherein the second point cloud belongs to a wall in the three-dimensional model; dividing the second point cloud according to the height information of the three-dimensional model to obtain at least one horizontal slice, and dividing the second point cloud according to the horizontal angle of the point to obtain at least one vertical slice; extracting a first contour point in a first direction perpendicular to the ground by using the horizontal slice, and extracting a second contour point in a second direction horizontal to the ground by using the vertical slice; clustering the first contour points and the second contour points to obtain at least one door and window contour point, and projecting the door and window contour points to a corresponding wall in the three-dimensional model.
Wherein the step of extracting a first contour point in a first direction perpendicular to the ground using the horizontal slice comprises: sorting the wall point clouds in the horizontal slice according to the horizontal angle of the second point cloud in the horizontal slice, and obtaining a third distance between each wall point cloud in the horizontal slice and its neighboring wall point cloud; and in response to the third distance satisfying a second threshold, treating the neighboring second point cloud as the first contour point. The step of extracting a second contour point in a second direction horizontal to the ground using the vertical slice comprises: sorting the wall point clouds in the vertical slice according to the height value of the second point cloud in the vertical slice, and obtaining a fourth distance between each wall point cloud in the vertical slice and its neighboring wall point cloud; and in response to the fourth distance satisfying a third threshold, treating the neighboring second point cloud as the second contour point.
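The adjacent-distance test described above reduces, within one slice, to a one-dimensional gap search: sort the points along the slicing coordinate (horizontal angle for a horizontal slice, height for a vertical one) and flag the two points flanking any gap wider than the threshold. A hedged numpy sketch, with hypothetical names and threshold handling:

```python
import numpy as np

def contour_points_1d(coords, gap_thresh):
    """Sort points along one slicing coordinate and return the indices of
    the points on either side of any gap wider than gap_thresh; such gaps
    are where a door or window interrupts the wall, so the flanking
    points are contour points."""
    order = np.argsort(coords)
    sorted_c = coords[order]
    gaps = np.diff(sorted_c) > gap_thresh
    idx = set()
    for k in np.flatnonzero(gaps):
        idx.add(int(order[k]))       # point just before the gap
        idx.add(int(order[k + 1]))   # point just after the gap
    return sorted(idx)
```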
After the step of clustering the first contour points and the second contour points to obtain at least one door and window contour point and projecting the door and window contour points onto the corresponding wall in the three-dimensional model, the method comprises the following steps: sequencing the first contour points and the second contour points according to the direction of the wall, and obtaining head contour points and tail contour points according to the sequencing result; sorting the first contour points and the second contour points according to the height values, and obtaining highest contour points and lowest contour points according to sorting results; performing primary fitting on the head contour point, the tail contour point, the highest contour point and the lowest contour point to obtain an initial enclosing rectangle; extracting a door and window contour point close to each side by using the four sides of the initial surrounding rectangle, and performing quadratic fitting on each side by using a random sampling consistency algorithm; and responding to the graph after quadratic fitting being a quadrangle, and outputting the initial enclosing rectangle.
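The first-pass rectangle fit and the per-side point selection described above can be sketched as follows, assuming the clustered contour points are already expressed in axis-aligned wall-plane coordinates (u along the wall, v for height); the subsequent per-side RANSAC refit and the quadrangle check are omitted, and all names and the tolerance are illustrative:

```python
import numpy as np

def initial_rectangle(contour_uv):
    """Span the initial enclosing rectangle from the head/tail contour
    points (u extremes) and the lowest/highest contour points (v extremes).
    Returns (u_min, v_min, u_max, v_max) in wall-plane coordinates."""
    u, v = contour_uv[:, 0], contour_uv[:, 1]
    return np.array([u.min(), v.min(), u.max(), v.max()])

def points_near_side(contour_uv, rect, side, tol=0.05):
    """Pick the contour points within tol of one rectangle side
    ('left' | 'right' | 'bottom' | 'top'), ready for a per-side refit."""
    u0, v0, u1, v1 = rect
    u, v = contour_uv[:, 0], contour_uv[:, 1]
    masks = {
        "left":   np.abs(u - u0) < tol,
        "right":  np.abs(u - u1) < tol,
        "bottom": np.abs(v - v0) < tol,
        "top":    np.abs(v - v1) < tol,
    }
    return contour_uv[masks[side]]
```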
The step of performing room segmentation on the first image and segmenting the three-dimensional point cloud data according to a room segmentation result of the first image to obtain first point clouds of the rooms includes: carrying out binarization on the first image to obtain gaps among the rooms, and obtaining the edges of the gaps by utilizing an edge extraction algorithm; refining the edge by using image expansion and image closing operation; carrying out Hough transform on the edge to obtain a line segment corresponding to the edge, and optimizing the line segment to merge straight lines belonging to the same wall boundary; carrying out image subtraction operation on the widened line segments and the first image to obtain a third image; wherein the third image comprises the room and the wall boundary line; and dividing the rooms according to the wall boundary lines, and separating the first point clouds corresponding to the rooms.
Before the step of dividing the rooms according to the wall boundary lines and separating the first point clouds corresponding to the rooms, the method further includes: and responding to the existence of noise points in the third image, removing the noise points, and entering the step of dividing the rooms according to the wall boundary line and separating the first point clouds corresponding to the rooms.
Before the step of performing two-dimensional projection on the three-dimensional point cloud data to form a first image, the method comprises the following steps of: and preprocessing the three-dimensional point cloud data.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a three-dimensional reconstruction system, comprising a memory and a processor coupled to each other, wherein the memory stores program instructions, and the processor is configured to execute the program instructions to implement the three-dimensional reconstruction method mentioned in any of the above embodiments.
In order to solve the above technical problem, the present application adopts another technical solution: there is provided a computer-readable storage medium storing a computer program for implementing the three-dimensional reconstruction method mentioned in any one of the above embodiments.
Different from the prior art, the beneficial effects of the application are that: the three-dimensional reconstruction method provided by the application comprises the following steps: obtaining three-dimensional point cloud data containing a room to be modeled, and performing two-dimensional projection on the three-dimensional point cloud data to form a first image; performing room segmentation on the first image, and segmenting the three-dimensional point cloud data according to the room segmentation result of the first image to obtain first point clouds of all rooms; wherein the first point cloud comprises a wall point cloud; carrying out two-dimensional projection on the denoised slice of the wall point cloud close to the ceiling point cloud to obtain a second image; wherein the second image comprises a plurality of wall data points; obtaining a wall boundary line from the wall data points, and obtaining a closed polygon from the wall boundary line according to the end point support degree and the line segment support degree; and stretching the closed polygon to obtain a three-dimensional model of the room. Through the design mode, the accuracy of wall reconstruction can be guaranteed, and therefore the accurate three-dimensional reconstruction model of a room is obtained.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts. Wherein:
FIG. 1 is an overall flow chart of the three-dimensional reconstruction method of the present application;
FIG. 2 is a schematic flow chart diagram illustrating an embodiment of a three-dimensional reconstruction method according to the present application;
FIG. 3 is a density histogram of a ceiling floor segmentation;
FIG. 4 is a diagram of the effect of the room segmentation process;
FIG. 5 is a schematic flow chart illustrating an embodiment of step S2 in FIG. 2;
FIG. 6 is a diagram of the effect of the room reconstruction process;
FIG. 7 is a schematic flow chart illustrating an embodiment of the method before step S3 in FIG. 2;
FIG. 8 is a diagram of dual slice denoising concept;
FIG. 9 is a schematic flow chart diagram illustrating an embodiment of step S4 in FIG. 2;
FIG. 10 is a diagram illustrating the door and window extraction process;
FIG. 11 is a schematic view of a vertical slice;
FIG. 12 is a schematic flow chart diagram illustrating an embodiment after step S9 in FIG. 2;
FIG. 13 is a schematic structural diagram of an embodiment of a three-dimensional reconstruction system of the present application;
FIG. 14 is a block diagram of an embodiment of a three-dimensional reconstruction system of the present application;
FIG. 15 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
For indoor building reconstruction, the present application reconstructs three-dimensional point cloud data of rooms captured from the interior into a regularized, parameterized building model. The method targets typical indoor environments satisfying the following conditions: the ceiling and the floor are planar, parallel structures connected by vertical walls, and different rooms and corridors are connected through narrow passages such as doorways. Referring to fig. 1, fig. 1 is an overall flowchart of the three-dimensional reconstruction method of the present application. As shown in fig. 1, the method first performs automatic room segmentation on the preprocessed three-dimensional point cloud data using a morphological algorithm, then reconstructs each segmented room into a parameterized geometric model using a new line segment selection strategy, and finally extracts and optimizes windows and doors using horizontal and vertical slices together with adjacent-point distance thresholds.
Referring to fig. 2, fig. 2 is a schematic flow chart of an embodiment of a three-dimensional reconstruction method according to the present application. The three-dimensional reconstruction method comprises the following steps:
S1: and obtaining three-dimensional point cloud data containing a room to be modeled, and performing two-dimensional projection on the three-dimensional point cloud data to form a first image.
Specifically, in the present embodiment, please refer to fig. 2-4 together; fig. 3 is a density histogram for ceiling and floor segmentation, and fig. 4 illustrates the room segmentation process. Before the three-dimensional point cloud data is projected into two dimensions to form the first image in step S1, the method includes: preprocessing the three-dimensional point cloud data. In this embodiment, the preprocessing may be: first down-sampling the three-dimensional point cloud data, and then removing noise outside the building using a morphological algorithm. Specifically, the down-sampling may be performed with a voxelized grid method, the voxel grid size being set to V. In addition, for ceiling extraction, the original three-dimensional point cloud data is sliced along the z-axis at a fixed thickness, and a histogram (the ceiling/floor density histogram shown in fig. 3) is computed from the number of points in each slice; the height with the maximum number of points corresponds to the ceiling or the floor, and the ceiling is then identified from this height information. As shown in fig. 4, the preprocessed three-dimensional point cloud data is binary-mapped into a two-dimensional space to form the first image (fig. 4 a).
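The preprocessing just described (voxel-grid down-sampling, then locating the ceiling and floor as the densest z-slices) can be sketched in a few lines of numpy. The function names, grid size, and slice thickness below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one point per occupied voxel (the centroid of its points)."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel key and average each group.
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

def ceiling_floor_heights(points, slice_thickness=0.05):
    """Slice along z and return the two densest slice heights: the lower
    one is taken as the floor, the higher one as the ceiling."""
    z = points[:, 2]
    bins = np.arange(z.min(), z.max() + slice_thickness, slice_thickness)
    counts, edges = np.histogram(z, bins=bins)
    top_two = np.argsort(counts)[-2:]        # indices of the two densest slices
    heights = sorted(edges[top_two])         # left bin edges, sorted by height
    return heights[0], heights[1]
```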
S2: and performing room segmentation on the first image, and segmenting the three-dimensional point cloud data according to the room segmentation result of the first image to obtain first point clouds of all rooms.
Specifically, the first point cloud comprises a wall point cloud.
Specifically, in the present embodiment, please refer to fig. 4 and fig. 5, and fig. 5 is a flowchart illustrating an implementation manner of step S2 in fig. 2. Specifically, step S2 includes:
S20: and carrying out binarization on the first image to obtain gaps among all rooms, and obtaining the edges of the gaps by using an edge extraction algorithm.
Specifically, as shown in fig. 4b, the first image (fig. 4a) is binarized to highlight the gaps between the rooms, and then the edges of the room gaps are extracted using an edge extraction algorithm (fig. 4 c).
S21: and refining the edges by using image dilation and an image closing operation.
Specifically, as shown in fig. 4d and 4e, the edges in fig. 4c are refined by image dilation and a closing operation; small black specks (noise) in the image regions can be eliminated by dilating first and then eroding.
S22: and carrying out Hough transform on the edge to obtain a line segment corresponding to the edge, and optimizing the line segment to merge straight lines belonging to the same wall boundary.
Specifically, as shown in fig. 4f, the edges (fig. 4e) that have undergone image expansion and image closing operations are subjected to hough transform to parameterize each edge into a corresponding line segment. In this embodiment, the line segments obtained in fig. 4f are optimized to merge straight lines belonging to the same wall boundary (i.e. to disconnect the connections of the rooms), as shown in fig. 4 g.
S23: and carrying out image subtraction operation on the widened line segments and the first image to obtain a third image.
Specifically, as shown in fig. 4h, the widened line segments (fig. 4g) and the first image (fig. 4a) are subjected to an image subtraction operation to obtain a third image. In the present embodiment, the third image includes a room and wall boundary line.
S24: and dividing the rooms according to the boundary line of the wall, and separating the first point clouds corresponding to the rooms.
Specifically, the room in the third image is segmented by using a watershed algorithm, as shown in fig. 4k, in this embodiment, the room may be segmented according to the wall boundary line in the third image, and the segmentation mode is not limited herein. As shown in fig. 4l, the first point clouds corresponding to the rooms are divided.
With this design, rooms can be rapidly segmented using morphological processing: to address over-segmentation and under-segmentation, the connections between rooms are broken using edge detection, Hough transform, and an optimization algorithm combined with morphological operations, and the rooms are then separated with the watershed segmentation algorithm. This approach segments rooms both accurately and quickly.
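A much-simplified sketch of the segmentation pipeline above, assuming the projected occupancy image and the widened wall-line mask are already available as boolean arrays. Edge extraction and the Hough transform are omitted, and a plain 4-connected flood fill stands in for the watershed step; all names are hypothetical:

```python
import numpy as np
from collections import deque

def segment_rooms(occupancy, wall_mask):
    """Split a projected occupancy image into rooms (simplified sketch).
    occupancy: 2D bool array, True where projected points exist (fig. 4a analogue).
    wall_mask: 2D bool array, True on the widened wall boundary lines (fig. 4g).
    Subtracting the wall mask disconnects the rooms (step S23 analogue);
    4-connected flood fill then assigns one label per room."""
    interior = occupancy & ~wall_mask           # image subtraction
    labels = np.zeros(interior.shape, dtype=int)
    next_label = 0
    h, w = interior.shape
    for sy in range(h):
        for sx in range(w):
            if interior[sy, sx] and labels[sy, sx] == 0:
                next_label += 1                 # start a new room
                q = deque([(sy, sx)])
                labels[sy, sx] = next_label
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and interior[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
    return labels, next_label
```

The per-room labels then index back into the three-dimensional point cloud to separate the first point clouds, as in step S24.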
Optionally, in this embodiment, in order to make the reconstructed room model more accurate, as shown in fig. 4i and 4j, before step S24, the method further includes: judging whether noise exists in the third image or not; if yes, removing noise and entering step S24; otherwise, the process proceeds directly to step S24.
S3: and performing two-dimensional projection on the denoised slice of the wall point cloud close to the ceiling point cloud to obtain a second image.
Specifically, the second image includes a plurality of wall data points. In this embodiment, room reconstruction is performed after room segmentation is completed. Referring to fig. 6, fig. 6 illustrates the room reconstruction process. A slice close to the ceiling point cloud and extending a preset distance downward is taken from the double-slice-denoised wall point cloud (fig. 6c) and projected onto the XY plane to obtain the second image (fig. 6d). Slicing reduces the amount of point cloud data to be processed and thus saves considerable time.
Preferably, in the present embodiment, please refer to fig. 6-8 together, fig. 7 is a flowchart illustrating an embodiment before step S3 in fig. 2, and fig. 8 is a schematic diagram illustrating a dual slice denoising principle. Step S3 is preceded by:
S30: the first point cloud of the rooms is segmented to obtain a point cloud of walls in each room.
Specifically, the first point cloud includes a ceiling point cloud and a floor point cloud in addition to the wall point cloud. However, since a line-based method is used to reconstruct each room, neither the ceiling point cloud nor the floor point cloud is required for wall reconstruction. The ceiling and floor point clouds are therefore segmented out using the point cloud density histogram of fig. 3, and only the wall point cloud of each room is retained, as shown in figs. 6a and 6b.
S31: and performing angle slicing on the wall point cloud according to a preset angle interval to obtain a second distance between the wall point cloud and the central points of all the first point clouds in each room.
S32: and distance slicing is carried out on the wall point cloud according to the second distance and the preset distance interval.
Specifically, after the ceiling and the floor are separated, indoor denoising is performed (fig. 6c). As shown in fig. 8a, which illustrates the dual slicing principle, the center point of all the first point clouds in each room is set as the origin. First, the wall point cloud is sliced at a preset angle interval Sθ to obtain Gθ = {groupθ1, groupθ2, …, groupθm}, where groupθ1 denotes one angular slice and Gθ denotes the set of all slices. A second distance Pij,dis between each data point in a slice and the origin is then calculated, and the wall point cloud is further sliced at a preset distance interval Sd according to Pij,dis. Steps S31-S32 together constitute the double slicing of the wall point cloud.
S33: sparse noise points and point clouds representing horizontal planes in each room are removed according to the number of points in each cell of the double-sliced wall point cloud.
Specifically, fig. 8b shows the wall point cloud after double slicing. Sparse indoor noise and point clouds representing indoor horizontal structures are removed according to the number of points in each slice, yielding the denoised wall point cloud (fig. 8c). Because most indoor occlusions and noise points that hinder room reconstruction are produced by objects standing perpendicular to the walls, the double-slicing method removes these noise points in most rooms.
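The double slicing of steps S31-S33 can be sketched as a two-key binning followed by a per-cell point-count test; the interval sizes, the minimum point count, and the function name are illustrative assumptions:

```python
import numpy as np

def double_slice_denoise(points, center, angle_step=2.0, dist_step=0.1, min_pts=5):
    """Double slicing: bin wall points by polar angle about the room
    center (angle slices), then by radial distance (distance slices),
    and drop every cell holding fewer than min_pts points."""
    d = points[:, :2] - np.asarray(center)[:2]
    angle = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 360.0
    radius = np.hypot(d[:, 0], d[:, 1])
    cell = np.c_[(angle // angle_step).astype(int),
                 (radius // dist_step).astype(int)]
    _, inverse, counts = np.unique(cell, axis=0,
                                   return_inverse=True, return_counts=True)
    return points[counts[inverse] >= min_pts]   # sparse cells are noise

# A dense wall patch survives; three isolated noise points are removed.
rng = np.random.default_rng(1)
wall = np.c_[1.05 + rng.uniform(0, 0.02, 50),
             rng.uniform(0.001, 0.01, 50),
             rng.uniform(0.0, 2.5, 50)]
noise = np.array([[3.0, 3.0, 1.0], [-2.0, 0.5, 1.0], [0.5, -2.5, 1.0]])
clean = double_slice_denoise(np.vstack([wall, noise]), center=np.zeros(3))
```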
S4: wall boundary lines are obtained from the wall data points, and closed polygons are obtained from the wall boundary lines according to the end point support and the line segment support.
Specifically, in the present embodiment, please refer to fig. 6 and fig. 9 together; fig. 9 is a schematic flowchart of an embodiment of step S4 in fig. 2. Step S4 includes:
S40: straight lines are extracted from the wall data points, and all the straight lines are combined to obtain at least one straight line set.
Specifically, each straight line set includes at least one straight line belonging to the same wall. In this embodiment, as shown in fig. 6e, straight lines are extracted from the projected wall data points with the random sample consensus (RANSAC) algorithm and then optimized to merge lines that approximately belong to the same wall.
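The line extraction of step S40 can be illustrated with a minimal two-point RANSAC; the iteration count and inlier tolerance are illustrative, and the merging of near-identical lines is omitted:

```python
import numpy as np

def ransac_line(pts, n_iter=200, tol=0.03, seed=0):
    """Minimal RANSAC line extraction on 2-D wall data points: repeatedly
    hypothesise a line through two random points and keep the hypothesis
    with the most inliers (a sketch, not the patent's exact optimisation)."""
    rng = np.random.default_rng(seed)
    best_line, best_mask = None, None
    for _ in range(n_iter):
        p, q = pts[rng.choice(len(pts), 2, replace=False)]
        normal = np.array([q[1] - p[1], p[0] - q[0]])   # normal of line p->q
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue
        normal = normal / norm
        c = -normal @ p                                 # line: normal.x + c = 0
        mask = np.abs(pts @ normal + c) < tol           # inlier test
        if best_mask is None or mask.sum() > best_mask.sum():
            best_line, best_mask = (normal[0], normal[1], c), mask
    return best_line, best_mask

# 100 noisy points on the wall line y = 1 plus 10 scattered outliers.
rng = np.random.default_rng(2)
inliers = np.c_[np.linspace(0, 2, 100), 1 + rng.normal(0, 0.003, 100)]
outliers = rng.uniform(-1, 3, (10, 2))
(a, b, c), mask = ransac_line(np.vstack([inliers, outliers]))
```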
S41: and intersecting the straight lines in the straight line set to obtain the wall boundary line.
Specifically, as shown in fig. 6f, all the straight lines are intersected to obtain line segments, and these line segments are the wall boundary lines.
S42: and screening out the optimal line segment from the boundary line of the wall according to the end point support degree and the line segment support degree.
In this embodiment, as shown in fig. 6g, optimal line segments are screened out from the wall boundary lines using a line segment selection principle, which is divided into initial selection, checking, and segment selection; the initial selection and checking are based on two parameters that evaluate the importance of a line segment, namely the endpoint support and the line segment support. Specifically, the endpoint support of an endpoint is the number of segments connected to it whose segment support is greater than zero, and the segment support is the product of the confidence weighted value confidence of the interior points in the bounding box and the point cloud density of all interior points in the bounding box, that is, Support = confidence × density.
S43: and deleting redundant line segments except the optimal line segment and complementing the missing part of the optimal line segment to obtain a closed polygon.
Specifically, the endpoint support of each endpoint of the closed polygon is two. In this embodiment, after the optimal line segments are screened out from the wall boundary lines, the redundant segments other than the optimal segments are deleted step by step and the missing parts of the optimal segments are completed, finally yielding a closed polygon in which every endpoint has a support of 2. This ensures the accuracy of the indoor wall reconstruction, so that more accurate three-dimensional model data can be obtained.
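The closure criterion (every endpoint with support two) can be checked mechanically; the coordinate-snapping grid below is an assumption to make floating-point endpoints comparable:

```python
from collections import Counter

def is_closed_polygon(segments, grid=1e-6):
    """The closure criterion: a set of selected segments forms a closed
    polygon when every endpoint is shared by exactly two segments.
    Coordinates are snapped to a grid before counting."""
    counts = Counter()
    for a, b in segments:
        for pt in (a, b):
            counts[tuple(round(c / grid) for c in pt)] += 1
    return all(v == 2 for v in counts.values())

square = [((0, 0), (1, 0)), ((1, 0), (1, 1)),
          ((1, 1), (0, 1)), ((0, 1), (0, 0))]
open_chain = square[:3]   # one segment missing: two endpoints have support 1
```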
Specifically, in the present embodiment, step S4 is preceded by: 1. a bounding box is obtained using the endpoints of the wall boundary line; the way of obtaining the bounding box is consistent with existing methods and is not described here. The bounding box contains at least one interior point P. 2. The ratio between the first distance dist(P, L_segi) from the interior point P to the wall boundary line and a first threshold ε is obtained, as well as the difference between one and this ratio; the product of this difference and the confidence term conf(P) of the interior point P is then obtained, and the sum of these products over all interior points P in the bounding box is taken as the confidence weighted value confidence. The confidence weighted value is thus calculated as confidence = Σ_P (1 − dist(P, L_segi)/ε) · conf(P).
Specifically, the first distance dist(P, L_segi) is less than or equal to the first threshold ε. Here, |P| is the total number of wall data points within the bounding box, point P is an interior point of the bounding box, dist(P, L_segi) is the distance from each point to the line segment L_segi, and ε is the maximum allowable distance from a point to the line segment L_segi. In this embodiment, ε is taken as the preset distance from the two sides of the bounding box to the line segment, and may be a value such as 0.03, which is not limited here. Further, each counted point P satisfies dist(P, L_segi) < ε. The confidence term conf(P) measures the local quality of the wall data points at point P; it is defined by computing local covariance matrices at P, which in this work are computed over local neighborhoods of three static scales (i.e., different neighborhood sizes), and conf(P) is derived from the eigenvalues of these covariance matrices.
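Putting the two definitions together, the line segment support can be sketched as follows; taking the density as points per unit segment length is an assumption, and conf is treated as a precomputed per-point score rather than derived from the covariance analysis:

```python
import numpy as np

def segment_support(pts, seg_a, seg_b, conf, eps=0.03):
    """Segment support as described above: every bounding-box interior
    point P with dist(P, L_segi) < eps contributes (1 - dist/eps)*conf(P)
    to the confidence weight, which is then multiplied by the point
    density (here: near points per unit segment length, an assumption)."""
    seg_a, seg_b = np.asarray(seg_a, float), np.asarray(seg_b, float)
    d = seg_b - seg_a
    t = np.clip((pts - seg_a) @ d / (d @ d), 0.0, 1.0)   # projection parameter
    dist = np.linalg.norm(pts - (seg_a + t[:, None] * d), axis=1)
    near = dist < eps
    confidence = np.sum((1.0 - dist[near] / eps) * conf[near])
    density = near.sum() / np.linalg.norm(d)
    return confidence * density

# Ten points lying exactly on a unit segment, each with conf 1.0:
pts = np.c_[np.linspace(0.05, 0.95, 10), np.zeros(10)]
support = segment_support(pts, (0.0, 0.0), (1.0, 0.0), np.ones(10))
```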
S5: and stretching the closed polygon to obtain a three-dimensional model of the room.
Specifically, after the closed polygon is obtained in step S43, the closed polygon is stretched into a three-dimensional model, that is, the room reconstruction model, by using the segmented ceiling height and floor height.
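The stretching of step S5 amounts to extruding each polygon edge between the floor and ceiling heights, which can be sketched as:

```python
import numpy as np

def extrude_polygon(poly2d, floor_z, ceil_z):
    """Extrude a closed 2-D polygon into 3-D wall quads between the
    segmented floor and ceiling heights (a minimal mesh sketch)."""
    verts = []
    for x, y in poly2d:
        verts.append((x, y, floor_z))   # even index: floor vertex
        verts.append((x, y, ceil_z))    # odd index: ceiling vertex
    faces = []
    n = len(poly2d)
    for i in range(n):
        j = (i + 1) % n                 # next corner, wrapping around
        faces.append((2 * i, 2 * j, 2 * j + 1, 2 * i + 1))   # one wall quad
    return np.array(verts), faces

verts, faces = extrude_polygon([(0, 0), (4, 0), (4, 3), (0, 3)], 0.0, 2.6)
```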
With this design, under the Manhattan-world assumption, line segments are fitted using a slice of the wall that is close to the ceiling and perpendicular to the ground, and the walls are reconstructed with a new line segment optimization and selection strategy. Slicing reduces the amount of point cloud data to be processed, which greatly saves time, while the segment optimization and selection strategy ensures the accuracy of the indoor wall reconstruction, so that more accurate three-dimensional model data can be obtained.
Further, in the present embodiment, after the three-dimensional model is obtained, windows and doors are reconstructed using the horizontal and vertical distances between adjacent points in the point cloud slices. Please refer to fig. 2 and fig. 10 together; fig. 10 is an effect diagram of the window and door extraction process. Specifically, step S5 is followed by:
S6: a second point cloud is extracted from the three-dimensional model.
Specifically, the second point cloud belongs to a wall in the three-dimensional model. As shown in fig. 10a, a second point cloud of walls belonging to each room in the three-dimensional model is extracted.
S7: and dividing the second point cloud according to the height information of the three-dimensional model to obtain at least one horizontal slice, and dividing the second point cloud according to the horizontal angle of the point to obtain at least one vertical slice.
Specifically, in the present embodiment, as shown in fig. 10b, the horizontal slices are divided according to the z coordinate, that is, the height on the wall of the room. In addition, in the present embodiment, please refer to fig. 10c and fig. 11; fig. 11 is a schematic diagram of vertical slicing. The vertical slices are divided according to the horizontal angle of each point. As shown in fig. 11a, a perpendicular CD is drawn through point A on the vertical line such that AB is equal to CD, and the points are then divided into slices according to an angle threshold (e.g., θ), as shown in fig. 11b.
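The two slicing schemes of step S7 can be sketched as follows; measuring the horizontal angle about the origin is a simplifying assumption standing in for the AB = CD construction of fig. 11:

```python
import numpy as np

def slice_wall(points, z_step=0.1, angle_step=1.0):
    """Split a wall's second point cloud into horizontal slices by the z
    coordinate and vertical slices by the horizontal angle of each point
    (step sizes are illustrative)."""
    z_bin = (points[:, 2] // z_step).astype(int)
    ang = np.degrees(np.arctan2(points[:, 1], points[:, 0])) % 360.0
    a_bin = (ang // angle_step).astype(int)
    horizontal = {b: points[z_bin == b] for b in np.unique(z_bin)}
    vertical = {b: points[a_bin == b] for b in np.unique(a_bin)}
    return horizontal, vertical

pts = np.array([[1.0, 0.0, 0.05], [1.0, 0.0, 0.15], [0.0, 1.0, 0.05]])
horizontal, vertical = slice_wall(pts)
```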
S8: a first contour point in a first direction perpendicular to the ground is extracted using the horizontal slice, and a second contour point in a second direction horizontal to the ground is extracted using the vertical slice.
Preferably, in the present embodiment, as shown in fig. 10d, the step of extracting first contour points in the first direction perpendicular to the ground using the horizontal slices in step S8 may include: 1. the wall points in each horizontal slice are sorted according to the horizontal angle of the second point cloud in that slice, and the third distance between each wall point in the slice and its adjacent wall point is obtained; 2. it is judged whether the third distance meets a second threshold; 3. if so, the adjacent second point cloud is taken as a first contour point; 4. otherwise, the process returns to the step of judging whether the third distance meets the second threshold. This judgment process is repeated until the wall points in the last horizontal slice have been processed.
Preferably, in the present embodiment, as shown in fig. 10d, the step of extracting second contour points in the second direction horizontal to the ground using the vertical slices in step S8 may include: 1. the wall points in each vertical slice are sorted according to the height value of the second point cloud in that slice, and the fourth distance between each wall point in the slice and its adjacent wall point is obtained; 2. it is judged whether the fourth distance meets a third threshold; 3. if so, the adjacent second point cloud is taken as a second contour point; 4. otherwise, the process returns to the step of judging whether the fourth distance meets the third threshold. This judgment process is repeated until the wall points in the last vertical slice have been processed.
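Both extraction procedures reduce to the same gap test inside a slice, which can be sketched as follows; the 0.3 m gap threshold is illustrative, standing in for the second and third thresholds:

```python
import numpy as np

def contour_points_in_slice(slice_pts, sort_key, gap=0.3):
    """Inside one slice, sort the points along the slice direction and mark
    both neighbours of every gap wider than `gap` as contour points; such
    gaps are where a door or window interrupts the wall."""
    pts = slice_pts[np.argsort(sort_key)]
    step = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # neighbour distances
    idx = np.where(step > gap)[0]
    return pts[np.unique(np.r_[idx, idx + 1])]            # both sides of each gap

# Wall run along x at z = 1 with a 1 m opening between x = 1 and x = 2:
x = np.r_[np.linspace(0.0, 1.0, 11), np.linspace(2.0, 3.0, 11)]
pts = np.c_[x, np.zeros(22), np.ones(22)]
edges = contour_points_in_slice(pts, pts[:, 0])
```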
S9: clustering the first contour points and the second contour points to obtain at least one door and window contour point, and projecting the door and window contour points to a corresponding wall in the three-dimensional model.
Specifically, as shown in fig. 10e, all the extracted first contour points and second contour points are grouped into different sets of door and window contour points by Euclidean distance clustering and are projected onto the corresponding walls in the three-dimensional model.
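The Euclidean distance clustering named here can be sketched as a plain breadth-first grouping; the 0.2 m linking radius is an illustrative assumption:

```python
import numpy as np

def euclidean_cluster(points, radius=0.2):
    """Group contour points so that two points share a cluster whenever a
    chain of neighbours closer than `radius` links them (plain BFS)."""
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue                      # already assigned to a cluster
        frontier = [seed]
        labels[seed] = current
        while frontier:
            i = frontier.pop()
            dist = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((dist < radius) & (labels == -1))[0]:
                labels[j] = current
                frontier.append(j)
        current += 1
    return labels

# Two well-separated clumps yield two clusters (e.g. two windows).
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = euclidean_cluster(pts)
```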
As shown in fig. 10f, the door/window contour points obtained in step S9 are parameterized, that is, the points are converted into lines, please refer to fig. 10 and 12 together, and fig. 12 is a schematic flow chart of an embodiment after step S9 in fig. 2. Specifically, step S9 is followed by:
S50: the first contour points and the second contour points are sorted according to the direction of the wall, and the head contour point and the tail contour point are obtained from the sorting result.
Specifically, all the first contour points and second contour points, with their height information removed, are sorted according to the direction of the wall surface, and the head contour point and the tail contour point are obtained from the sorting result.
S51: and sequencing the first contour points and the second contour points according to the height values, and obtaining the highest contour points and the lowest contour points according to the sequencing result.
Specifically, all the first contour points and all the second contour points are sorted according to the height information, and the highest contour point and the lowest contour point are obtained according to a sorting result;
S52: the head contour point, the tail contour point, the highest contour point, and the lowest contour point are first fitted to obtain an initial bounding rectangle.
Specifically, the extracted head contour point and tail contour point represent end points of a rectangle in the horizontal direction, and the highest contour point and the lowest contour point represent height information, and initial fitting is performed to obtain an initial bounding rectangle.
S53: and extracting the door and window contour points close to each edge by using the four edges of the initial surrounding rectangle, and performing quadratic fitting on each edge by using a random sampling consistency algorithm.
S54: and judging whether the graph formed by the four edges after quadratic fitting is a quadrangle or not.
S55: and if so, outputting the initial bounding rectangle.
S56: otherwise, the initial bounding rectangle is discarded.
Specifically, door and window contour points close to each side are extracted using the four sides of the initial bounding rectangle, and each side is then refitted using the random sample consensus (RANSAC) algorithm. If a quadrilateral cannot be formed during refitting, one of the four sides could not be fitted because of insufficient points, and the initial bounding rectangle is discarded.
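Steps S52-S56 can be sketched as an extreme-point rectangle fit followed by a per-edge support check; the tolerances are illustrative, and the per-edge RANSAC refit itself is replaced by a simple point-count test:

```python
import numpy as np

def fit_opening_rectangle(cluster_uv, edge_tol=0.05, min_edge_pts=2):
    """Initial rectangle for one door/window cluster, given contour points
    as (u, v) = (offset along wall, height). Extreme points give the
    initial rectangle (S52); each edge must then be supported by enough
    nearby contour points or the rectangle is discarded, mirroring the
    refit check of S53-S56."""
    u, v = cluster_uv[:, 0], cluster_uv[:, 1]
    left, right, bottom, top = u.min(), u.max(), v.min(), v.max()
    edges = [np.abs(u - left) < edge_tol,    # left edge support
             np.abs(u - right) < edge_tol,   # right edge support
             np.abs(v - bottom) < edge_tol,  # bottom edge support
             np.abs(v - top) < edge_tol]     # top edge support
    if any(e.sum() < min_edge_pts for e in edges):
        return None          # one side lacks points: not a quadrilateral
    return (left, bottom, right, top)

# A 1 m x 2 m opening with points on all four edges fits; an L-shaped
# cluster with a bare side is rejected.
pts_ok = np.array([[0, 0], [0, 1], [0, 2], [1, 0], [1, 1], [1, 2],
                   [0.5, 0], [0.5, 2]], dtype=float)
rect = fit_opening_rectangle(pts_ok)
rect_bad = fit_opening_rectangle(np.array([[0.0, 0.0], [0.0, 2.0], [1.0, 0.0]]))
```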
With this design, for the problem of detail reconstruction in indoor mobile scanning, the walls are first extracted by room wall reconstruction and then sliced horizontally and vertically; the door and window contour points are extracted using an adjacent-point distance threshold, and the doors and windows are separated by clustering and parameterized. This method can quickly and accurately extract the openings in a wall.
Referring to fig. 13, fig. 13 is a schematic structural diagram of an embodiment of a three-dimensional reconstruction system according to the present application. The three-dimensional reconstruction system specifically includes:
the obtaining module 10 is configured to obtain three-dimensional point cloud data including a room to be modeled, and perform two-dimensional projection on the three-dimensional point cloud data to form a first image.
A segmentation module 12, coupled to the obtaining module 10, configured to perform room segmentation on the first image and segment the three-dimensional point cloud data according to a room segmentation result of the first image to obtain a first point cloud of each room; wherein the first point cloud comprises a wall point cloud.
The processing module 14 is coupled to the segmentation module 12 and configured to perform two-dimensional projection on the double-sliced wall point cloud to obtain a second image; wherein the second image includes a plurality of wall data points. The processing module 14 is further configured to obtain wall boundary lines from the wall data points, and obtain closed polygons from the wall boundary lines according to the end point support and the line segment support.
And a stretching module 16, coupled to the processing module 14, for stretching the closed polygon to obtain a three-dimensional model of the room.
Referring to fig. 14, fig. 14 is a block diagram of a three-dimensional reconstruction system according to an embodiment of the present invention. The three-dimensional reconstruction system includes a memory 20 and a processor 22 coupled to each other. Specifically, in the present embodiment, the memory 20 stores program instructions, and the processor 22 is configured to execute the program instructions to implement the three-dimensional reconstruction method mentioned in any one of the above embodiments.
Specifically, the processor 22 may also be referred to as a CPU (Central Processing Unit). The processor 22 may be an integrated circuit chip having signal processing capabilities. The Processor 22 may also be a general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. In addition, processor 22 may be commonly implemented by a plurality of integrated circuit chips.
Referring to fig. 15, fig. 15 is a block diagram illustrating a computer-readable storage medium according to an embodiment of the present invention. The computer-readable storage medium 30 stores a computer program 300 that can be read by a computer, and the computer program 300 can be executed by a processor to implement the three-dimensional reconstruction method mentioned in any of the above embodiments. The computer program 300 may be stored in the computer-readable storage medium 30 in the form of a software product and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The computer-readable storage medium 30 may be any medium capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, or may be a terminal device such as a computer, a server, a mobile phone, or a tablet.
In summary, unlike the prior art, the three-dimensional reconstruction method provided by the present application includes: obtaining three-dimensional point cloud data containing a room to be modeled, and performing two-dimensional projection on the three-dimensional point cloud data to form a first image; performing room segmentation on the first image, and segmenting the three-dimensional point cloud data according to the room segmentation result of the first image to obtain the first point cloud of each room, wherein the first point cloud comprises a wall point cloud; performing two-dimensional projection on the denoised slice of the wall point cloud close to the ceiling point cloud to obtain a second image, wherein the second image comprises a plurality of wall data points; obtaining wall boundary lines from the wall data points, and obtaining a closed polygon from the wall boundary lines according to the endpoint support and the line segment support; and stretching the closed polygon to obtain a three-dimensional model of the room. With this design, the accuracy of wall reconstruction and door and window reconstruction can be guaranteed, so that an accurate three-dimensional reconstruction model of the room is obtained.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.
Claims (12)
1. A method of three-dimensional reconstruction, comprising:
the method comprises the steps of obtaining three-dimensional point cloud data including a room to be modeled, and performing two-dimensional projection on the three-dimensional point cloud data to form a first image;
performing room segmentation on the first image, and segmenting the three-dimensional point cloud data according to the room segmentation result of the first image to obtain first point clouds of all rooms; wherein the first point cloud comprises a wall point cloud and a ceiling point cloud;
carrying out two-dimensional projection on the denoised slice of the wall point cloud close to the ceiling point cloud to obtain a second image; wherein the second image includes a plurality of wall data points therein;
obtaining a wall boundary line from the wall data points, and obtaining a closed polygon from the wall boundary line according to the end point support degree and the line segment support degree;
and stretching the closed polygon to obtain a three-dimensional model of the room.
2. The three-dimensional reconstruction method according to claim 1, wherein the step of obtaining wall boundary lines from the wall data points and obtaining closed polygons from the wall boundary lines according to the end point support and the line segment support comprises:
extracting a straight line from each wall data point, and combining all the straight lines to obtain at least one straight line set; wherein the set of straight lines comprises at least one straight line belonging to the same wall;
intersecting straight lines in the straight line set to obtain the wall boundary line;
screening out an optimal line segment from the wall boundary line according to the end point support degree and the line segment support degree; the end point support degree is the number of line segments with the line segment support degree larger than zero and connected with each end point, and the line segment support degree is the product of the reliability weighted value of each internal point in the bounding box and the point cloud density of all the internal points in the bounding box;
deleting redundant line segments other than the optimal line segment and complementing missing parts of the optimal line segment to obtain the closed polygon; wherein each endpoint support of the closed polygon is two.
3. The three-dimensional reconstruction method according to claim 2, wherein the step of selecting the optimal line segment from the wall boundary lines according to the end point support degree and the line segment support degree comprises:
obtaining a bounding box by using the end point of the boundary line of the wall, wherein the bounding box comprises at least one inner point;
obtaining a ratio between a first distance between the inner point and the wall boundary line and a first threshold value, and a difference value between one and the ratio value, obtaining a product between the difference value and a covariance matrix of the inner point, and taking a sum of products of all the inner points in the bounding box as the reliability weighted value; wherein the first distance is less than or equal to the first threshold.
4. The three-dimensional reconstruction method of claim 1, wherein the two-dimensional projection of the denoised slice of the wall point cloud adjacent to the ceiling point cloud to obtain the second image comprises:
segmenting a first point cloud of the rooms to obtain a wall point cloud in each of the rooms;
performing angle slicing on the wall point cloud according to a preset angle interval to obtain a second distance between the wall point cloud and the central point of all the first point clouds in each room;
performing distance slicing on the wall point cloud according to the second distance and a preset distance interval;
and removing sparse noise points and point clouds representing a horizontal plane in each room by using the number of the wall point clouds subjected to double slicing.
5. The three-dimensional reconstruction method of claim 1, wherein said step of stretching said closed polygon to obtain a three-dimensional model of said room is followed by the steps of:
extracting a second point cloud in the three-dimensional model; wherein the second point cloud belongs to a wall in the three-dimensional model;
dividing the second point cloud according to the height information of the three-dimensional model to obtain at least one horizontal slice, and dividing the second point cloud according to the horizontal angle of the point to obtain at least one vertical slice;
extracting a first contour point in a first direction perpendicular to the ground by using the horizontal slice, and extracting a second contour point in a second direction horizontal to the ground by using the vertical slice;
clustering the first contour points and the second contour points to obtain at least one door and window contour point, and projecting the door and window contour points to a corresponding wall in the three-dimensional model.
6. The three-dimensional reconstruction method of claim 5, wherein said step of extracting a first contour point in a first direction perpendicular to the ground using said horizontal slice comprises:
sequencing the wall point clouds in the horizontal slice according to the horizontal angle of the second point cloud in the horizontal slice, and obtaining a third distance between the wall point cloud in the horizontal slice and the adjacent wall point cloud;
in response to the third distance satisfying a second threshold, treat the neighboring second point cloud as the first contour point;
the step of extracting a second contour point in a second direction horizontal to the ground using the vertical slice includes:
sequencing the wall point clouds in the vertical slice according to the height value of the second point cloud in the vertical slice, and obtaining a fourth distance between the wall point cloud in the vertical slice and the adjacent wall point cloud;
in response to the fourth distance satisfying a third threshold, treat the neighboring second point cloud as the second contour point.
7. The three-dimensional reconstruction method according to claim 5, wherein the step of clustering the first contour points and the second contour points to obtain at least one door and window contour point and projecting the door and window contour point onto a corresponding wall in the three-dimensional model comprises:
sequencing the first contour points and the second contour points according to the direction of the wall, and obtaining head contour points and tail contour points according to the sequencing result;
sorting the first contour points and the second contour points according to the height values, and obtaining highest contour points and lowest contour points according to sorting results;
performing primary fitting on the head contour point, the tail contour point, the highest contour point and the lowest contour point to obtain an initial enclosing rectangle;
extracting a door and window contour point close to each side by using the four sides of the initial surrounding rectangle, and performing quadratic fitting on each side by using a random sampling consistency algorithm;
and responding to the graph after quadratic fitting being a quadrangle, and outputting the initial enclosing rectangle.
8. The three-dimensional reconstruction method according to claim 1, wherein the step of performing room segmentation on the first image and performing segmentation on the three-dimensional point cloud data according to the room segmentation result of the first image to obtain first point clouds of respective rooms comprises:
carrying out binarization on the first image to obtain gaps among the rooms, and obtaining the edges of the gaps by utilizing an edge extraction algorithm;
refining the edge by using image expansion and image closing operation;
carrying out Hough transform on the edge to obtain a line segment corresponding to the edge, and optimizing the line segment to merge straight lines belonging to the same wall boundary;
carrying out image subtraction operation on the widened line segments and the first image to obtain a third image; wherein the third image comprises the room and the wall boundary line;
and dividing the rooms according to the wall boundary lines, and separating the first point clouds corresponding to the rooms.
9. The three-dimensional reconstruction method of claim 8, wherein prior to the step of dividing the rooms according to the wall boundary lines and separating the first point clouds corresponding to the rooms, the method further comprises:
and responding to the existence of noise points in the third image, removing the noise points, and entering the step of dividing the rooms according to the wall boundary line and separating the first point clouds corresponding to the rooms.
10. The three-dimensional reconstruction method of claim 1, wherein said two-dimensional projecting the three-dimensional point cloud data to form a first image comprises:
and preprocessing the three-dimensional point cloud data.
11. A three-dimensional reconstruction system comprising a memory and a processor coupled to each other, the memory having stored therein program instructions, the processor being configured to execute the program instructions to implement the three-dimensional reconstruction method of any one of claims 1 to 10.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for implementing the three-dimensional reconstruction method of any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111320898.7A CN114219909B (en) | 2021-11-09 | 2021-11-09 | Three-dimensional reconstruction method and related device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111320898.7A CN114219909B (en) | 2021-11-09 | 2021-11-09 | Three-dimensional reconstruction method and related device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114219909A true CN114219909A (en) | 2022-03-22 |
CN114219909B CN114219909B (en) | 2024-10-22 |
Family
ID=80696831
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111320898.7A Active CN114219909B (en) | 2021-11-09 | 2021-11-09 | Three-dimensional reconstruction method and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114219909B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115564673A (en) * | 2022-09-26 | 2023-01-03 | 浙江省测绘科学技术研究院 | Method and system for extracting three-dimensional point cloud underground garage column and automatically generating vector |
CN116777939A (en) * | 2023-06-19 | 2023-09-19 | 上海建工四建集团有限公司 | Automatic house measuring method based on laser SLAM |
CN117496086A (en) * | 2023-10-12 | 2024-02-02 | 南京林业大学 | Indoor geometric reconstruction method for semantic perception |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180240281A1 (en) * | 2017-02-22 | 2018-08-23 | Andre R. Vincelette | Systems and methods to create a virtual object or avatar |
US20190206063A1 (en) * | 2017-12-29 | 2019-07-04 | Baidu Online Network Technology (Beijing) Co., Ltd | Method and apparatus for processing point cloud data |
CN109993783A (en) * | 2019-03-25 | 2019-07-09 | 北京航空航天大学 | A kind of roof and side optimized reconstruction method towards complex three-dimensional building object point cloud |
CN110009727A (en) * | 2019-03-08 | 2019-07-12 | 深圳大学 | A kind of indoor threedimensional model automatic reconfiguration method and system with structure semantics |
CN111161267A (en) * | 2019-12-09 | 2020-05-15 | 西安工程大学 | Segmentation method of three-dimensional point cloud model |
CN111710023A (en) * | 2020-06-16 | 2020-09-25 | 武汉称象科技有限公司 | Three-dimensional point cloud data feature point extraction method and application |
CN111860138A (en) * | 2020-06-09 | 2020-10-30 | 中南民族大学 | Three-dimensional point cloud semantic segmentation method and system based on full-fusion network |
CN111915730A (en) * | 2020-07-20 | 2020-11-10 | 北京建筑大学 | Method and system for automatically generating indoor three-dimensional model from point cloud in consideration of semantics |
US11002859B1 (en) * | 2020-02-27 | 2021-05-11 | Tsinghua University | Intelligent vehicle positioning method based on feature point calibration |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180240281A1 (en) * | 2017-02-22 | 2018-08-23 | Andre R. Vincelette | Systems and methods to create a virtual object or avatar |
US20190206063A1 (en) * | 2017-12-29 | 2019-07-04 | Baidu Online Network Technology (Beijing) Co., Ltd | Method and apparatus for processing point cloud data |
CN110009727A (en) * | 2019-03-08 | 2019-07-12 | 深圳大学 | Automatic indoor three-dimensional model reconstruction method and system with structural semantics |
CN109993783A (en) * | 2019-03-25 | 2019-07-09 | 北京航空航天大学 | Roof and facade optimized reconstruction method for complex three-dimensional building point clouds |
CN111161267A (en) * | 2019-12-09 | 2020-05-15 | 西安工程大学 | Segmentation method of three-dimensional point cloud model |
US11002859B1 (en) * | 2020-02-27 | 2021-05-11 | Tsinghua University | Intelligent vehicle positioning method based on feature point calibration |
CN111860138A (en) * | 2020-06-09 | 2020-10-30 | 中南民族大学 | Three-dimensional point cloud semantic segmentation method and system based on full-fusion network |
CN111710023A (en) * | 2020-06-16 | 2020-09-25 | 武汉称象科技有限公司 | Three-dimensional point cloud data feature point extraction method and application |
CN111915730A (en) * | 2020-07-20 | 2020-11-10 | 北京建筑大学 | Method and system for automatically generating indoor three-dimensional model from point cloud in consideration of semantics |
Non-Patent Citations (3)
Title |
---|
OSMAN ERVAN et al.: "Downsampling of a 3D LiDAR Point Cloud by a Tensor Voting Based Method", IEEE, 31 December 2019 (2019-12-31) * |
Zhang Hongwei: "Research on 3D Reconstruction Technology of Buildings Based on Remote Sensing Images and Point Cloud Data", China Doctoral Dissertations Full-text Database, 31 December 2018 (2018-12-31) * |
Chen Yonghui: "Research on 3D Point Cloud Data Processing Technology Based on Laser Scanning", China Master's Theses Full-text Database, 15 February 2018 (2018-02-15) * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115564673A (en) * | 2022-09-26 | 2023-01-03 | 浙江省测绘科学技术研究院 | Method and system for extracting three-dimensional point cloud underground garage column and automatically generating vector |
CN115564673B (en) * | 2022-09-26 | 2024-03-15 | 浙江省测绘科学技术研究院 | Three-dimensional point cloud underground garage column extraction and vector automatic generation method and system |
CN116777939A (en) * | 2023-06-19 | 2023-09-19 | 上海建工四建集团有限公司 | Automatic house measuring method based on laser SLAM |
CN117496086A (en) * | 2023-10-12 | 2024-02-02 | 南京林业大学 | Indoor geometric reconstruction method for semantic perception |
CN117496086B (en) * | 2023-10-12 | 2024-05-07 | 南京林业大学 | Indoor geometric reconstruction method for semantic perception |
Also Published As
Publication number | Publication date |
---|---|
CN114219909B (en) | 2024-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114219909B (en) | Three-dimensional reconstruction method and related device | |
US12051261B2 (en) | Semantic segmentation of 2D floor plans with a pixel-wise classifier | |
EP3506211B1 (en) | Generating 3d models representing buildings | |
Oesau et al. | Indoor scene reconstruction using feature sensitive primitive extraction and graph-cut | |
JP6719457B2 (en) | Method and system for extracting main subject of image | |
Wang et al. | Modeling indoor spaces using decomposition and reconstruction of structural elements | |
CN111915730A (en) | Method and system for automatically generating indoor three-dimensional model from point cloud in consideration of semantics | |
Wang et al. | Feature‐preserving surface reconstruction from unoriented, noisy point data | |
CN102637298A (en) | Color image segmentation method based on Gaussian mixture model and support vector machine | |
EP3073443B1 (en) | 3d saliency map | |
CN107424166B (en) | Point cloud segmentation method and device | |
Aytekin et al. | Visual saliency by extended quantum cuts | |
US20230281350A1 (en) | A Computer Implemented Method of Generating a Parametric Structural Design Model | |
Oesau et al. | Indoor scene reconstruction using primitive-driven space partitioning and graph-cut | |
CN107742113A (en) | One kind is based on the posterior SAR image complex target detection method of destination number | |
CN114359437A (en) | Building structure two-dimensional plane map reconstruction method based on point cloud | |
Eickeler et al. | Adaptive feature-conserving compression for large scale point clouds | |
CN114973057B (en) | Video image detection method and related equipment based on artificial intelligence | |
CN113724267A (en) | Breast ultrasound image tumor segmentation method and device | |
US11004206B2 (en) | Three-dimensional shape expression method and device thereof | |
Quan et al. | Segmentation of tumor ultrasound image via region-based Ncut method | |
Weibel et al. | Robust Sim2Real 3D object classification using graph representations and a deep center voting scheme | |
Iturburu et al. | Towards rapid and automated vulnerability classification of concrete buildings | |
CN118711204B (en) | Building model construction method and system based on AI drawing recognition | |
Túñez-Alcalde et al. | A Top-Down Hierarchical Approach for Automatic Indoor Segmentation and Connectivity Detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||