CN115033967A - Building template real-time modeling method based on point cloud data - Google Patents

Building template real-time modeling method based on point cloud data

Info

Publication number
CN115033967A
CN115033967A (application CN202210739217.9A)
Authority
CN
China
Prior art keywords
building
point cloud
image
model
components
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210739217.9A
Other languages
Chinese (zh)
Inventor
龚光红
王丹
李妮
戚咏劼
李莹
赵耀普
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University
Priority to CN202210739217.9A
Publication of CN115033967A
Legal status: Pending


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/13 — Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/04 — Texture mapping
    • G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/26 — Segmentation of patterns in the image field; detection of occlusion
    • G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; connectivity analysis
    • G06V10/751 — Comparing pixel values or feature values having positional relevance, e.g. template matching
    • G06V10/82 — Recognition or understanding using neural networks
    • G06V20/176 — Urban or other man-made structures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Hardware Design (AREA)
  • Artificial Intelligence (AREA)
  • Computer Graphics (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Civil Engineering (AREA)
  • Architecture (AREA)
  • Structural Engineering (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a building template real-time modeling method based on point cloud data, belonging to the technical field of landscape modeling, surveying and mapping. The method establishes a database of standard building model templates from three-dimensional point clouds. Single-building point clouds are obtained through point-cloud semantic segmentation, filtering and a connected-domain algorithm; components are disassembled and identified; the overall and characteristic parameters of the building are read and matched against the template library; the matched template files are retrieved for instantiation editing and, after model checking, integrated into a building instance model; texture images are generated from the original ground-view images and point-cloud projection images; finally, the textured three-dimensional model of the building instance is output. Compared with conventional point-cloud building modeling, the method yields high model completeness and high modeling speed, preserves similarity to the real scene, repairs the holes and deformation typical of conventional modeling, and achieves a real-time modeling effect.

Description

Building template real-time modeling method based on point cloud data
Technical Field
The invention belongs to the technical field of landscape modeling and surveying and mapping, and relates to a building template real-time modeling method based on point cloud data.
Background
Three-dimensional building model construction refers to the technology of photographing and scanning a real building from multiple angles with equipment such as cameras and laser scanners, acquiring information such as images and point clouds of the building, and processing that information to generate a building model that can be displayed in a three-dimensional virtual environment. Modeling based on parameterized templates refers to reading scene parameters and editing existing model templates according to the acquired parameters, forming a three-dimensional model as close to the real scene as possible. Specifically, in three-dimensional building model construction, a template is a standardized, fixed file formed by extracting the common features of buildings.
A point cloud is a massive set of points describing the surface characteristics of objects in a scene; depending on how it is generated and collected, it carries attribute information such as three-dimensional coordinates (XYZ), laser reflection intensity and RGB color values. Traditional building modeling methods generate surface patches from the point cloud through solving processes such as meshing and triangulation, and thereby produce a three-dimensional building model file.
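For contrast, the traditional meshing pipeline described above can be illustrated with a minimal 2.5D triangulation — a sketch only, using `scipy` over synthetic points; real surface reconstruction from oblique-photography point clouds is considerably more involved:

```python
import numpy as np
from scipy.spatial import Delaunay

# Synthetic point cloud: XYZ coordinates, as a laser scan might deliver.
rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 10.0, size=(50, 3))

# "Meshing / triangulation": triangulate the XY footprint and lift the
# triangles onto the measured Z values, yielding surface patches.
tri = Delaunay(pts[:, :2])
patches = pts[tri.simplices]        # (n_triangles, 3 vertices, xyz)
```

Holes and distortion in such a mesh arise directly from gaps and noise in the scanned points, which is the weakness the template approach addresses.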
Disclosure of Invention
Aiming at the problem of constructing three-dimensional models of buildings in a target area, the invention provides a template instantiation construction method based on point cloud data.
The invention provides a building template real-time modeling method based on point cloud data, which comprises the following steps:
s1: designing a standard template-file structure and storage format for universal components, disassembling the building model into those components, modeling the common types of each component, designing the database storage structure of the model library, and storing the template files and attribute information of the building components;
s2: performing image-based indirect segmentation on the point cloud of the target scene to separate out the building point cloud, and filtering the resulting building point cloud to eliminate mis-segmented points and outliers;
s3: performing a singulation operation on the building point cloud cluster with a connected-domain algorithm, segmenting out the individual buildings it contains;
s4: extracting characteristic parameters of each single-building point cloud: segmenting the point cloud with a region-growing algorithm to obtain the planar point cloud of each component, and extracting the overall parameters of the building point cloud together with the common and individual parameters of each component, collectively called the first characteristic parameters;
s5: matching the first characteristic parameters against the template files in the model library to obtain component templates meeting preset conditions, and instantiation-editing the obtained component templates according to the first characteristic parameters;
s6: assembling the instantiation-edited components, checking position and size relations and combination rules, and generating the building instance model after checking;
s7: obtaining, via the correspondence between each component's planar point cloud and the original oblique photographic images, the two-dimensional original ground-scene image and point-cloud projection image of each component, and generating a map for each surface of each component through image fusion, called the generated map;
s8: performing uv unwrapping on the building instance model, generating the texture map of the whole building from the correspondence between the surfaces of the building model and the surfaces of the generated maps, establishing the map-to-model correspondence, and outputting the final textured three-dimensional building instance model.
Further, the process of constructing the ground-feature model template database in step S1 is as follows:
s1-1: the standard structure of a template file is set to comprise four parts — a template identifier, an attribute set, an object set and a rule set; the storage formats are OSGB, OBJ and FBX;
s1-2: the building is decomposed into five categories of components — roof, building body, column-beam, foundation and others; a corresponding template model is established and stored for each, and the features of each component category are divided into universal, distinctive primary features and secondary features, which serve as stepwise retrieval keys for storing and reading the template files;
s1-3: after the modeling of each building component is complete, the overall storage structure of the template library is designed. Template files and extension data are stored hierarchically, and the different categories of ground-feature model template files are indexed and retrieved by extension-type parameters, shape parameters and the various feature data, improving search efficiency. The building model template database comprises five sub-tables — roof, building body, foundation, column-beam and others; each table stores, in layers, the characteristic parameters of the component category and the storage paths of its template files, so that a target template can be obtained quickly by retrieval on the selected characteristic parameters.
Further, in step S2, the image-based indirect segmentation of the target scene point cloud, separation of the building point cloud, and filtering of the resulting building point cloud to remove mis-segmented points and outliers proceed as follows:
s2-1: build a neural network to perform semantic segmentation on the ground-scene images, identifying the building regions they contain;
s2-2: ignoring the Z axis of the three-dimensional scene point cloud, project the point cloud onto the XOY coordinate plane to obtain its two-dimensional projection image;
s2-3: segment the point-cloud projection image with the trained semantic segmentation network; after the building portion is identified, project the two-dimensional recognition result back onto the three-dimensional point cloud via the geometric correspondence between the two, segmenting out the three-dimensional building point cloud;
s2-4: filter the building point cloud with an outlier filter: for each point of the input cloud, compute the average distance d to the points in its k-neighborhood, yielding an array of neighborhood average distances; assuming this array follows a Gaussian distribution, set a threshold range; points whose d value falls outside the range are judged to belong to the outlier set and are deleted.
The building point cloud cluster is thus obtained for the subsequent feature extraction and building model instantiation.
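The statistical outlier filter of s2-4 can be sketched with a k-d tree over the cloud; the neighborhood size and the ±1σ threshold below are illustrative defaults, not values fixed by the patent:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, n_std=1.0):
    """s2-4 sketch: drop points whose mean distance to their k nearest
    neighbours falls outside mean +/- n_std * std of all such distances
    (the distance array is assumed roughly Gaussian)."""
    tree = cKDTree(points)
    # query k+1 neighbours because each point's nearest neighbour is itself
    dists, _ = tree.query(points, k=k + 1)
    d = dists[:, 1:].mean(axis=1)          # mean neighbourhood distance per point
    lo, hi = d.mean() - n_std * d.std(), d.mean() + n_std * d.std()
    keep = (d >= lo) & (d <= hi)
    return points[keep]

# A dense cluster of 200 points plus one far-away outlier.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0, 0.1, size=(200, 3)), [[10.0, 10.0, 10.0]]])
filtered = remove_outliers(cloud)
```

The single distant point dominates the spread of the distance array, lands outside the threshold range, and is removed.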
Further, in step S3, the singulation of the building point cloud cluster with a connected-domain algorithm, segmenting out the individual buildings it contains, proceeds as follows:
s3-1: binarize the semantic segmentation image, marking the portion identified as building, and detect connected domains. Using the two-pass scanning method, the image is traversed twice. In the first pass, each non-zero pixel is given a numeric label: if the pixels in the left and upper neighborhood of the current pixel already carry label values, the smaller of those labels becomes the label of the current pixel; if they carry none, a new label value is assigned. Since the first pass can leave the same connected domain with several different numeric labels, the second pass merges the labels belonging to the same connected domain so that all its pixels share one label. Each point in the image is examined taking its eight neighbors in the horizontal, vertical and diagonal directions as the neighborhood for connected-domain detection, and the different connected domains identified are marked with different pixel values;
s3-2: segment the buildings according to the screened image: establish the coordinate correspondence between the two-dimensional label image and the three-dimensional building point cloud, traverse the distinct pixel values of the label image, judge and classify the point cloud point by point, and assign point-cloud clusters belonging to the same connected domain to the same building, thereby singulating the building point cloud and obtaining relatively independent single-building point cloud data.
In this way, multiple buildings within the same building point cloud cluster are separated, facilitating the subsequent component segmentation, recognition and feature extraction for each building.
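The two-pass labelling of s3-1 is exactly what `scipy.ndimage.label` implements; a sketch with an 8-connected structuring element covering the horizontal, vertical and diagonal neighbours named in the text:

```python
import numpy as np
from scipy import ndimage

# Binarized semantic-segmentation mask with two separate "building" blobs.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[1:3, 1:3] = 1          # building A
mask[5:7, 4:7] = 1          # building B

# np.ones((3, 3)) = 8-connectivity (horizontal, vertical, diagonal).
# ndimage.label performs the two-pass scan-and-merge described in s3-1.
labels, n = ndimage.label(mask, structure=np.ones((3, 3)))
# `labels` is the label image of s3-2: each connected domain carries one
# distinct value, which is then mapped back onto the 3D points that
# project onto those pixels.
```

Traversing the distinct label values and collecting the points that project into each one yields one point-cloud cluster per building.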
Further, the specific algorithm of step S4 — extracting the characteristic parameters of each single-building point cloud, segmenting the point cloud with a region-growing algorithm to obtain the planar point cloud of each component, and extracting the overall parameters of the building point cloud and the common and individual parameters of each component — is as follows:
s4-1: first acquire the overall parameters of the building, which allow the subsequent building model to be placed correctly in the three-dimensional landscape scene: read the coordinates of the three-dimensional building point cloud and the extrema xmax, xmin, ymax, ymin, zmax and zmin of its outer bounding box along the X, Y and Z coordinate axes;
s4-2: disassemble the components of the building with a region-growing algorithm: points with similar properties are grouped through neighborhood information so that the point cloud is divided into regions that differ in some respect, with curvature as the main criterion for distinguishing the point clouds of different building components, thereby obtaining the surface of each component;
s4-3: fit planes to the segmented components and compute aspect ratio, curvature and normal direction to identify the component type, classifying the segmented components into roof, foundation, building body, column, beam and so on;
s4-4: after the disassembly and identification of the building components, extract the characteristic information of the different components by category, starting with the common information — point-cloud size, position and dimensions — used to screen valid components;
s4-5: project the disassembled planar point cloud of each building component from the world coordinate system of the three-dimensional scene into the pixel coordinate system of the two-dimensional image to locate the region of the original ground-scene image corresponding to that component: obtain the actual boundary points by computing the convex hull of the point cloud, compute the corresponding minimum rectangular bounding box in the image, crop that minimum rectangular region and rotate it parallel to the X and Y coordinate axes, thereby obtaining the original ground-scene image corresponding to each point-cloud plane;
s4-6: process each component according to its type, obtaining the individual characteristic information, including the component type, by combining the component point cloud with its corresponding original landscape image.
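The plane fitting and type identification of s4-3 can be sketched with a least-squares (SVD) fit; the near-vertical-normal classification rule below is an illustrative assumption for two component types, not the patent's exact criterion:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns centroid, unit normal and the mean
    absolute distance of the points to the plane (a flatness proxy for the
    curvature criterion of s4-2/s4-3)."""
    c = points.mean(axis=0)
    # The smallest right-singular vector of the centred cloud is the normal.
    _, _, vt = np.linalg.svd(points - c)
    normal = vt[-1]
    flatness = np.abs((points - c) @ normal).mean()
    return c, normal, flatness

def classify(points, horiz_thresh=0.9):
    """Toy rule (an assumption): a planar patch with a near-vertical normal
    is a horizontal slab (roof/foundation), otherwise wall-like."""
    _, normal, _ = fit_plane(points)
    return "slab" if abs(normal[2]) > horiz_thresh else "wall"

rng = np.random.default_rng(1)
xy = rng.uniform(0, 5, size=(100, 2))
slab = np.column_stack([xy, np.full(100, 3.0)])                   # z = 3 plane
wall = np.column_stack([xy[:, 0], np.full(100, 0.0), xy[:, 1]])   # y = 0 plane
```

Aspect ratio and the other cues named in s4-3 would refine this into the full roof / foundation / body / column / beam decision.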
Further, in step S5, the specific algorithm for matching the first characteristic parameters against the template files in the model library to obtain component templates meeting the preset conditions, and instantiation-editing the obtained templates according to the first characteristic parameters, is as follows:
s5-1: for each component category, search stepwise by primary and then secondary features, matching the corresponding template file from the building template database;
s5-2: instantiation-edit the template file according to the acquired position, size and inclination-angle parameters to obtain each building-component instance model.
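In the simplest case, the instantiation editing of s5-2 amounts to scaling a unit-sized template to the extracted bounding-box size and translating it to the extracted position — a sketch with vertices only (inclination and faces omitted):

```python
import numpy as np

def instantiate(template_vertices, size, position):
    """s5-2 sketch: scale a unit template to the extracted size, then
    translate it to the extracted position."""
    return template_vertices * np.asarray(size, float) + np.asarray(position, float)

# A unit-cube building-body template (8 corner vertices).
unit_cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                     dtype=float)

# Size and position as they would come out of step S4 (illustrative values).
inst = instantiate(unit_cube, size=(4.0, 6.0, 3.0), position=(10.0, 20.0, 0.0))
```

The instance then occupies exactly the bounding box measured from the point cloud.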
Further, in step S6, the specific algorithm for assembling the instantiation-edited components, checking position and size relations and combination rules, and generating the building instance model after checking, is as follows:
s6-1: assemble the instantiated template files according to the coordinates and sizes of the actual building components to obtain an initial building instance model;
s6-2: calibrate the size and position of the building model: taking the three-dimensional coordinates and geometric size of the foundation as the datum, calibrate the geometric position and size of the building components from bottom to top so that they connect seamlessly along the X, Y and Z directions; if a size conflict occurs, adjust according to bottom-to-top priority along the Z axis;
s6-3: calibrate the combination and nesting relations of the building model's components, checking the modeling rules as each ground-feature component is invoked to confirm that the combination and nesting rules between components are satisfied;
s6-4: after the size and position adjustment and the combination-relation check are complete, merge the building components and output the instantiated model.
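The bottom-up Z-axis calibration of s6-2 can be sketched on components reduced to their vertical extents; representing each component as a (z_min, z_max) interval is an assumption made for illustration:

```python
def snap_stack(components):
    """s6-2 sketch: with the lowest component (the foundation) as the datum,
    shift each component so its bottom meets the top of the one below it,
    resolving gaps and overlaps bottom-up along the Z axis."""
    components = sorted(components, key=lambda c: c[0])
    snapped, top = [], components[0][0]
    for z0, z1 in components:
        h = z1 - z0                  # height is preserved; only position moves
        snapped.append((top, top + h))
        top += h
    return snapped

# Foundation, body (with a 0.2 gap below it), roof (overlapping the body).
parts = [(0.0, 0.5), (0.7, 3.5), (3.3, 4.5)]
stack = snap_stack(parts)
```

After snapping, consecutive intervals share their boundary, i.e. the components connect seamlessly in Z.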
Further, in step S7, the specific algorithm for obtaining the two-dimensional original ground-scene image and point-cloud projection image of each component through the correspondence between its planar point cloud and the original oblique photographic images, and for generating the surface maps of the components through image fusion, is as follows:
s7-1: from the original oblique photographic images corresponding to each component-plane point cloud obtained in step S4, screen out the images whose size and rotation angle lie within given thresholds;
s7-2: compute a rotation matrix from the normal vector and transform the point cloud parallel to the XOY coordinate plane;
s7-3: solve for the longest edge of the fitted surface of the component point cloud, and rotate the point cloud parallel to the Y axis with that edge as reference;
s7-4: sample the Z values to generate the projection image of the transformed point cloud on the XOY plane;
s7-5: perform feature-point matching and image fusion between the original ground-scene image corresponding to the point cloud and the projection image, generating surface texture images of each component with more complete coverage.
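The normal-based rotation of s7-2 is a standard construction: Rodrigues' formula gives the matrix that takes the fitted plane normal onto the Z axis, after which the patch lies parallel to the XOY plane:

```python
import numpy as np

def rotation_to_z(normal):
    """s7-2 sketch: rotation matrix taking a unit plane normal onto the
    Z axis (Rodrigues' rotation formula)."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)               # rotation axis (scaled by sin of angle)
    c = float(n @ z)                 # cosine of the rotation angle
    if np.linalg.norm(v) < 1e-12:    # normal already (anti)parallel to Z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

R = rotation_to_z([1.0, 0.0, 0.0])           # e.g. a wall facing +X
rotated = R @ np.array([1.0, 0.0, 0.0])      # the normal after rotation
```

Applying `R` to the patch points flattens them into a constant-Z plane, so s7-4 can sample the Z values into a projection image.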
Further, in step S8, the specific algorithm for performing uv unwrapping on the building instance model, generating the texture map of the whole building from the correspondence between the surfaces of the building model and those of the generated maps, establishing the map-to-model correspondence, and outputting the final textured three-dimensional building instance model, is as follows:
s8-1: perform uv unwrapping on the building instance model to generate the corresponding uv unwrap diagram;
s8-2: using the three-dimensional building point cloud as intermediate data, establish the correspondence between each surface of the uv image and the generated maps, and merge the textures of the building's surfaces into a complete building uv image with the OpenCV image-processing library;
s8-3: edit the building instance model, establish the correspondence between the uv image and the model, generate the building model containing the texture image, and output the complete building instance model.
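The texture merging of s8-2 can be sketched as pasting the per-face textures from step S7 side by side into one uv atlas and recording the uv rectangle assigned to each face; the patent names OpenCV for this, but plain NumPy pasting suffices for a sketch:

```python
import numpy as np

def build_atlas(face_textures, tile=64):
    """s8-2 sketch: paste per-face textures into one uv atlas image and
    return the (u0, v0, u1, v1) rectangle assigned to each face, so the
    model's uv coordinates can reference the merged texture."""
    n = len(face_textures)
    atlas = np.zeros((tile, tile * n, 3), dtype=np.uint8)
    uv_rects = []
    for i, tex in enumerate(face_textures):
        atlas[:tex.shape[0], i * tile : i * tile + tex.shape[1]] = tex
        uv_rects.append((i / n, 0.0, (i + 1) / n, 1.0))
    return atlas, uv_rects

# Two illustrative face textures (uniform grey levels stand in for the
# fused images of step S7).
faces = [np.full((64, 64, 3), 50, np.uint8), np.full((64, 64, 3), 200, np.uint8)]
atlas, rects = build_atlas(faces)
```

Editing the model's uv coordinates to point into these rectangles (s8-3) then binds the merged texture to the instance model.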
Compared with traditional point-cloud-based building model construction methods, the invention has the following advantages:
(1) automation: the algorithm automatically performs point-cloud feature extraction, computation and model-template matching, and finally integrates the results into a building instance model and completes the checking work without manual operation;
(2) completeness: instantiating template models for modeling repairs the holes, damage, deformation and information loss of traditional modeling, so a complete building model is obtained while maintaining high similarity;
(3) rapidity: template instantiation modeling is faster than the traditional method of generating a three-dimensional model from point cloud data through meshing, triangulation and other solving processes; it can achieve a real-time modeling effect and quickly construct building-group models of large-scale scenes.
Drawings
In order to illustrate embodiments of the present invention or technical solutions in the prior art more clearly, the drawings which are needed in the embodiments will be briefly described below, so that the features and advantages of the present invention can be understood more clearly by referring to the drawings, which are schematic and should not be construed as limiting the present invention in any way, and for a person skilled in the art, other drawings can be obtained on the basis of these drawings without any inventive effort. Wherein:
FIG. 1 is a flow chart of an exemplary modeling method for a building template based on point cloud data according to an embodiment of the present invention;
FIG. 2 is a model template architecture of a building component;
FIG. 3 is a roof component library configuration;
FIG. 4 is an example of a three-dimensional point cloud with a complete scene;
FIG. 5 is an example of a building point cloud cluster obtained by semantically segmenting a scene point cloud;
FIG. 6 is an example of the effect of filtering the building point cloud;
FIG. 7 is an effect diagram of connected domain detection of building region semantic tags;
FIG. 8 is a schematic diagram of the result of the building point cloud cluster after being singulated;
FIG. 9 is a basic flow of determining component type;
FIG. 10 is a schematic diagram of the geometric relationship of the minimum rectangular bounding box of a point cloud plane;
FIG. 11 is a model instance generation method;
FIG. 12(a) is an original three-dimensional building point cloud and (b) is a schematic representation of a generated building instance model;
FIG. 13 is a flow chart of an algorithm for generating a texture map of a building model;
fig. 14(a) is a uv expanded image of the building model, and (b) is a texture image generated by texture matching the uv expanded image;
FIG. 15 is a schematic representation of an example model of a building.
Detailed Description
In order to make the technical scheme of the invention clearer, the invention is further explained with reference to the attached drawings. The overall flow chart is shown in fig. 1.
In the process of constructing the three-dimensional model of the building in the target scene, the method extracts the characteristics of the generated three-dimensional point cloud, and performs instantiation editing on the constructed building template file according to the acquired characteristic information to generate the three-dimensional model of the building.
The method provided by the invention accelerates model construction while ensuring high similarity to the buildings in the real scene, alleviates the building-model damage, holes and distortion caused by point-cloud information loss, and helps construct building models of large-scale scenes quickly and completely.
Specifically, the building template instantiation modeling method based on point cloud data comprises steps S1 to S8 as set forth above.
Further, the process of constructing the ground object model template database in step S1 is as follows:
s1-1: setting a basic structure of a template file to comprise four parts, namely a template identifier, an attribute set, an object set and a rule set, wherein the storage format is OSGB, OBJ and FBX;
s1-2: the integral building member model template structure is shown in fig. 2, a building is decomposed into a roof, a floor, a column beam, a foundation and other five parts of members, corresponding template models are respectively established for storage, each part of members are divided into main characteristics and secondary characteristics with universality and distinctiveness, and the template files are stored and read by taking the characteristics as step-by-step retrieval information;
specifically, for each type of component, the vertical surface shape is taken as a main characteristic of the roof, including three types of pitched roofs, flat roofs and curved roofs, the overlooking shape of the roof is taken as a secondary characteristic, including overlooking rectangles, triangles, circles and I-shaped, so that twelve types of roof component models are formed, nine types which are most commonly used in the actual environment are reserved, the overall structure of the roof template library is shown in figure 3, and the complex roof model in the scene is formed by splicing and combining the basic models;
the template model of the building body mainly comprises a cuboid building-body template; the building-body model of a building instance is completed by adjusting the size and orientation of the template and by nesting and combining different building bodies;
the column/beam category comprises columns and beams; the columns comprise the common cylindrical and prismatic shapes, and more complex L-shaped and compound columns can be formed by combination and nesting; the beams mainly comprise rectangular, arched, arc and special-shaped beams, which serve as templates from which more varied beam structures are composed;
the foundation comprises single-shape and combined-shape foundations; a single-shape foundation is formed from one basic shape, for which the most common rectangular and circular foundations are selected, while a combined-shape foundation is formed by splicing two or more basic shapes, comprising I-shaped, arbitrary-shaped and combined foundations;
in addition, other components such as stairs and fences also exist in the building, and corresponding templates are stored for the components;
s1-3: after modeling of each building component is completed, an overall storage structure for the template library is designed, and the template files and extended data are stored hierarchically; different types of ground-object model template files are retrieved and read using extended type parameters, shape parameters and the various feature data as indexes, which improves lookup efficiency; the building model template database is established with five sub-tables, namely roof, building body, foundation, column/beam and others, each table storing, layer by layer, the feature parameters of the corresponding components and the storage paths of the template files, so that a target template can be quickly obtained by retrieval with the selected feature parameters.
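As an illustration of the layered retrieval described in s1-3, the sub-table structure can be sketched as a nested lookup keyed by component category and by main/secondary features. All type names, feature names and file paths below are invented for the example, not the patent's actual schema:

```python
# Hypothetical layered template index: category -> (main, secondary) -> path.
TEMPLATE_DB = {
    "roof": {
        ("pitched", "rectangle"): "templates/roof_pitched_rect.obj",
        ("flat", "circle"): "templates/roof_flat_circle.obj",
    },
    "foundation": {
        ("monolithic", "rectangle"): "templates/foundation_rect.obj",
    },
}

def find_template(component_type, main_feature, secondary_feature):
    """Retrieve a template file path by stepwise feature lookup:
    first the category sub-table, then the (main, secondary) key."""
    sub_table = TEMPLATE_DB.get(component_type, {})
    return sub_table.get((main_feature, secondary_feature))
```

A miss at either level returns `None`, which mirrors the "no matching template" case the retrieval step must handle.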
In step S2, the image-based indirect segmentation is performed on the target scene point cloud, the building point cloud is separated, the generated building point cloud is filtered, and the algorithm flow for removing the erroneous segmentation points and outliers is as follows:
s2-1: building a neural network to perform semantic segmentation on the ground scene image, and identifying a building area contained in the image;
s2-2: neglecting the influence of a Z axis in the point cloud of the three-dimensional scene, projecting the point cloud to an XOY coordinate plane, and acquiring a two-dimensional projection image of the point cloud;
s2-3: segmenting the point cloud projection image by using the trained semantic segmentation network, projecting the two-dimensional recognition result back onto the three-dimensional point cloud after the building part is identified, and segmenting the three-dimensional building point cloud with the aid of the geometric correspondence between the two; fig. 4 shows a complete three-dimensional scene point cloud example, and fig. 5 shows the building point cloud obtained after segmentation;
s2-4: filtering the building point cloud with an outlier filter: for each point in the input point cloud, the average distance d to all points in its k-neighborhood is computed, yielding an array of neighborhood average distances; assuming this array follows a Gaussian distribution, a threshold range is preset, d values outside the range are judged to belong to the outlier set, and the points in the outlier set are deleted; the threshold is 1.5 times the mode of the average distances d; fig. 6 shows the building point cloud after filtering.
So far, the building point cloud cluster is obtained through calculation and is used for the subsequent characteristic extraction and building model instantiation process.
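The statistical outlier filter of s2-4 can be sketched in a few lines of numpy. The patent thresholds at 1.5 times the mode of the neighborhood average distance d; this sketch approximates the mode with the median, and the brute-force distance matrix stands in for the k-d tree a real implementation (e.g. PCL's StatisticalOutlierRemoval) would use:

```python
import numpy as np

def remove_outliers(points, k=8, factor=1.5):
    """Drop points whose mean distance to their k nearest neighbours
    exceeds factor * (a robust central value of d). The median is used
    here as a stand-in for the mode assumed in the text."""
    # pairwise distances: O(n^2), fine for a sketch
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    dist_sorted = np.sort(dist, axis=1)
    d = dist_sorted[:, 1:k + 1].mean(axis=1)  # skip self-distance 0
    threshold = factor * np.median(d)
    return points[d <= threshold]
```

On a dense cluster plus one far-away point, the far point's neighborhood distance dwarfs the median and it is removed while the cluster survives.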
Further, in the step S3, a connected domain algorithm is used to perform a singulation operation on the building point cloud cluster, and an algorithm flow for segmenting a single building included in the point cloud cluster is as follows:
s3-1: binarizing the semantic segmentation image and marking the parts identified as building; detecting connected domains with the two-pass scanning method, which reads through the image twice: in the first pass, every non-zero pixel is given a numeric label — if the pixels in its left and upper neighborhoods already carry label values, the smaller of those labels is taken as the label of the current pixel, and if none of them carries a label, a new label value is assigned; since the first pass can leave the same connected domain carrying several different numeric labels, a second pass merges the labels belonging to the same connected domain so that all its pixels share one label; each point in the image is examined using the eight neighbors in the horizontal, vertical and diagonal directions as its neighborhood, the connected domains are detected and judged, and the different connected domains identified are marked with different pixel values;
s3-2: segmenting the buildings according to the screened image: the coordinate correspondence between the two-dimensional label image and the three-dimensional building point cloud is established, the different pixel values in the label image are traversed, the point cloud is judged and classified point by point, and point cloud clusters belonging to the same connected domain are assigned to the same building, thereby singulating the building point cloud and obtaining relatively independent single-building point cloud data.
Therefore, a plurality of buildings in the same building point cloud cluster are separated, and member segmentation, identification and characteristic information extraction are conveniently carried out on each building in the follow-up process.
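A minimal pure-Python version of the two-pass, 8-connectivity labeling described in s3-1 (OpenCV's `cv2.connectedComponents` does the same job in one call); union-find records the label equivalences found in the first pass and the second pass resolves them:

```python
import numpy as np

def label_components(mask):
    """Two-pass 8-connectivity labeling of a binary mask (1 = building)."""
    labels = np.zeros(mask.shape, dtype=int)
    parent = {}  # union-find over provisional labels

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    next_label = 1
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if not mask[i, j]:
                continue
            # provisional labels among the left/upper 8-connected neighbours
            neigh = [labels[a, b] for a, b in
                     ((i, j - 1), (i - 1, j - 1), (i - 1, j), (i - 1, j + 1))
                     if 0 <= a < h and 0 <= b < w and labels[a, b] > 0]
            if neigh:
                m = min(find(n) for n in neigh)
                labels[i, j] = m
                for n in neigh:           # record equivalences for pass two
                    parent[find(n)] = m
            else:
                parent[next_label] = next_label
                labels[i, j] = next_label
                next_label += 1
    # second pass: replace every provisional label by its root
    for i in range(h):
        for j in range(w):
            if labels[i, j]:
                labels[i, j] = find(labels[i, j])
    return labels
```

The diagonal-staircase pattern below is the classic case where the first pass assigns two labels that the second pass must merge.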
The specific algorithm flow for extracting the characteristic parameters of the single building point cloud in the step S4, obtaining the plane point cloud of each component by dividing the point cloud through a region growing algorithm, and extracting the overall parameters of the building point cloud and the parameters of each component respectively is as follows:
s4-1: first acquiring the overall parameters of the building so that the subsequent building model can be placed correctly in the three-dimensional ground scene: the coordinates of the three-dimensional building point cloud are read, together with the maximum and minimum values xmax, xmin, ymax, ymin, zmax and zmin of the point cloud's outer bounding box along the X, Y and Z coordinate axes;
s4-2: using an algorithm based on region growing to disassemble each part of components contained in a building, classifying points with similar properties in the point cloud through neighborhood information so as to divide the point cloud into different regions with certain difference in some aspect, and using curvature as a main basis for distinguishing different building point cloud components to divide the building point cloud to obtain the surface of each component;
s4-3: performing plane fitting on the segmented components and computing aspect ratio, curvature and normal information to identify the component types, further classifying the segmented components into roof, foundation, building body, column, beam and the like; the specific judgment process is shown in fig. 9:
The normal vector of a segmented building component is computed with principal component analysis (PCA). First, normal vectors are estimated for the point cloud surface using a surface-fitting method; since the point clouds produced by region growing are essentially smooth, plane-like point clouds, an approximate plane is easy to fit. Thus, for each point p in the point cloud, its K nearest neighboring points are found, and a local plane P is fitted to them by least squares. The normal vector of the plane fitted to the K nearest neighbors is taken as the normal vector of the current point; the normal of the whole plane is then obtained by principal component analysis: for each point in the plane, the eigenvalues of the covariance matrix M below are computed, and the eigenvector corresponding to the smallest eigenvalue is the normal vector of the fitted plane P.
$$M=\frac{1}{K}\sum_{i=1}^{K}\left(p_i-\bar{p}\right)\left(p_i-\bar{p}\right)^{\mathrm{T}},\qquad \bar{p}=\frac{1}{K}\sum_{i=1}^{K}p_i$$
The normal vector computed in this way only fixes the line on which the normal lies, and its direction must still be determined by an orientation calculation. For the normal vectors n_i and n_j of any two points on the fitting plane, if the two computed directions are consistent then the inner product n_i · n_j is close to 1; if the inner product is negative, the normal of one of the two points must be inverted. Therefore, the normal direction of one point in the fitting plane is fixed first, the normals of its neighboring points are then computed, and whenever the inner product with a neighbor is negative that neighbor's normal is flipped; otherwise it is left unchanged.
After the plane normal vector is obtained, it is used for further calculation and judgment to obtain the curvature of the building component and the angles between the component plane and the coordinate planes. The eigenvalues of the matrix M above are computed; if they satisfy λ₀ ≤ λ₁ ≤ λ₂, the surface curvature of the component plane is:
$$\sigma=\frac{\lambda_0}{\lambda_0+\lambda_1+\lambda_2}$$
The angle between the computed component normal vector (x, y, z) and the unit normal of the XOY plane gives the angle between the component plane and the horizontal plane; the angles with the other coordinate planes are computed in the same way, from which the actual orientation of the plane in space is obtained.
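The PCA normal, surface curvature and plane-to-XOY angle described above all derive from the same eigendecomposition of the neighborhood covariance matrix, so they can be sketched together (a generic illustration, not the patent's code):

```python
import numpy as np

def plane_features(points):
    """PCA over a planar point patch: returns the unit normal, the surface
    curvature lambda0/(lambda0+lambda1+lambda2), and the angle (degrees)
    between the patch and the XOY plane."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    normal = eigvecs[:, 0]                   # eigenvector of the smallest one
    if normal[2] < 0:                        # simple orientation convention
        normal = -normal
    curvature = eigvals[0] / eigvals.sum()
    # angle between plane and XOY = angle between normal and the Z axis
    cos_a = abs(normal[2]) / np.linalg.norm(normal)
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return normal, curvature, angle
```

For a horizontal patch the normal comes out along Z with zero curvature and zero angle; for a vertical patch the angle is 90 degrees.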
After the information is obtained, the foundation, the roof and the building body vertical face can be further judged, and the judgment process is as follows:
(1) the angle between the building component plane and the XOY coordinate plane is computed, and with a reasonable threshold the segmented components are divided into horizontal planes, inclined planes and vertical planes;
(2) the foundation is easy to identify because it is the lowest part: among the planar point clouds whose normal vectors point upward, the plane whose elevation differs least from the ground elevation of the scene is the foundation;
(3) a plane that is essentially perpendicular to the XOY plane (its angle within an accepted deviation of a right angle) is judged to be a facade forming the building body;
(4) a plane forming an inclined included angle with the XOY plane is judged as a component of the pitched roof;
(5) a plane whose normal vector points upward (essentially parallel to the XOY plane) and which is not the foundation is judged to be a flat-roof component.
In this way the roof, building body and foundation planes are further identified;
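A schematic version of judgments (1)–(5), with illustrative thresholds — the patent does not fix numeric values, so the 10-degree angle tolerance and 0.5-unit elevation tolerance below are assumptions:

```python
def classify_plane(angle_to_xoy, normal_z, min_z, ground_z, tol=10.0):
    """Rule-of-thumb classifier for a segmented planar component, following
    the decision order above. angle_to_xoy is in degrees; normal_z is the
    Z component of the (oriented) plane normal."""
    if angle_to_xoy <= tol and normal_z > 0:
        # horizontal plane: foundation if it sits at ground elevation,
        # otherwise a flat-roof component
        return "foundation" if abs(min_z - ground_z) <= 0.5 else "flat_roof"
    if abs(angle_to_xoy - 90.0) <= tol:
        return "facade"
    # remaining inclined planes are pitched-roof components
    return "pitched_roof"
```

The foundation test runs first because, as noted in (2), it is the unambiguous anchor: an upward-facing plane at ground elevation.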
s4-4: after the building components have been disassembled and identified, the feature information corresponding to the different components is extracted by category; the common information of the components is extracted first, including the point cloud size, position and dimensions used to screen valid components;
s4-5: projecting the planar point cloud of each decomposed building component from the world coordinate system of the three-dimensional scene to the pixel coordinate system of the two-dimensional image using the point cloud convex hull concept, acquiring the region of the original ground image corresponding to the building component point cloud: the actual boundary points are obtained by computing the point cloud convex hull, the corresponding minimum rectangular bounding box of the point cloud in the image is computed, the corresponding minimum rectangular image region is obtained, and the region is cropped and rotated until parallel to the X and Y coordinate axes, thereby obtaining the original ground image corresponding to each point cloud plane;
to compute the original ground scene image region corresponding to the three-dimensional point cloud, the camera pose parameters of each original image are estimated from the associated key points between images and the focal length parameter of the camera; using the association between images, the world coordinate system is transformed into the camera coordinate system and the image coordinate system, and finally projected into the pixel coordinate system used for image computation, with the following transformation:
$$Z_c\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}f_x & 0 & c_x\\ 0 & f_y & c_y\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}R & t\end{bmatrix}\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}$$
substituting the actual parameters of each image and the camera into the above formula, the actual regions of the point cloud data in the different images are obtained; using the getMinMax3D function of the PCL point cloud library, the extreme values of the building component point cloud along each coordinate axis are determined, giving the maximum and minimum values in the X and Y directions; these are mapped onto the original image to obtain the four boundary values xmax, xmin, ymax and ymin, (xmin, ymin) is taken as the starting point of image cropping, the differences Δx and Δy between the maximum and minimum values on the X and Y axes are taken as the cropping extents in the x and y directions, and the original image is cropped with the OpenCV vision library, yielding the original oblique photographic images corresponding to the three-dimensional point cloud.
Introducing the convex hull concept, convex hull detection is first performed on the decomposed planar point cloud of each building component, and the actual boundary points of the point cloud are obtained from the hull; the boundary points are mapped back onto the original ground scene image with the above formula, and the computation proceeds as in fig. 10: the quadrilateral bounding box of the component point cloud in the original image is obtained from the boundary points, its minimum rectangular bounding box is computed, the original ground scene image is cropped by this bounding box to obtain the image region corresponding to the point cloud, and the rotation center and angle are computed so that the bounding box is rotated until parallel to the coordinate axes; the cropped image is then rotated accordingly, yielding original ground images parallel to the X and Y coordinate axes. Compared with the traditional extreme-value cropping method, this corresponds to the original image region more accurately and reduces the influence of irrelevant areas in the image;
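For illustration, the convex hull and minimum-area bounding rectangle used here can be computed as follows (in OpenCV, `cv2.convexHull` and `cv2.minAreaRect` provide the same results); this is a generic sketch, not the patent's code. It relies on the fact that the optimal rectangle shares an edge with the hull:

```python
import numpy as np

def convex_hull(pts):
    """Andrew's monotone-chain convex hull of 2-D points (CCW order)."""
    pts = sorted(map(tuple, pts))
    if len(pts) <= 2:
        return np.array(pts)

    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h

    lower = half(pts)
    upper = half(pts[::-1])
    return np.array(lower[:-1] + upper[:-1])

def min_area_rect(pts):
    """Minimum-area bounding rectangle: try each hull edge as the
    rectangle orientation and keep the smallest area."""
    hull = convex_hull(pts)
    best = None
    for i in range(len(hull)):
        edge = hull[(i + 1) % len(hull)] - hull[i]
        theta = np.arctan2(edge[1], edge[0])
        c, s = np.cos(-theta), np.sin(-theta)
        rot = hull @ np.array([[c, -s], [s, c]]).T  # rotate by -theta
        w = rot[:, 0].max() - rot[:, 0].min()
        h = rot[:, 1].max() - rot[:, 1].min()
        if best is None or w * h < best[0]:
            best = (w * h, theta, w, h)
    return best  # (area, angle, width, height)
```

A diamond (unit square rotated 45°) shows the gain over the extreme-value method: its axis-aligned box has area 4, while the minimum-area rectangle has area 2.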
s4-6: processing according to the types of the components, and acquiring individual characteristic information including the types of the components by combining the point clouds of the components and the corresponding original landscape images;
image classification and target recognition networks of different types are constructed on the original images corresponding to the point clouds to further identify the parameters of each component, including the top-view type of the roof (rectangular, triangular, circular or I-shaped), the cross-section shapes of the columns and beams, and the category parameters of the other components.
Further, in step S5, a component template meeting the preset conditions is obtained by matching the first characteristic parameters with the template files in the model library, and the obtained component template is instantiated and edited according to the first characteristic parameters; the specific steps are as follows:
s5-1: for each category of components, searching step by step according to the main features and the secondary features, and matching corresponding template files from a building template database;
s5-2: and instantiating and editing the template file according to the acquired position, size and inclination angle parameters to obtain each building component instance model.
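Instantiation editing of a matched template (s5-2) amounts to an affine transform of the template geometry. A sketch under two stated assumptions — the template mesh is unit-sized, and orientation is a single yaw angle about Z:

```python
import numpy as np

def instantiate_template(vertices, size, position, yaw_deg=0.0):
    """Fit a unit template mesh (vertices in [0,1]^3) to the measured
    component: scale to `size`, rotate about Z by `yaw_deg`, then
    translate to `position`."""
    v = np.asarray(vertices, dtype=float) * np.asarray(size, dtype=float)
    a = np.radians(yaw_deg)
    rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
    return v @ rz.T + np.asarray(position, dtype=float)
```

The size comes from the component bounding box extracted in S4, the yaw from the fitted plane orientations, and the position from the point cloud coordinates.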
Further, in the step S6, the instantiated and edited members are assembled, and position and size relationship checking and combination rule checking are performed, and a specific algorithm flow for generating the instantiation model of the building after checking is as follows:
s6-1: assembling template files after instantiation is completed according to the coordinates and the size of the actual building components to obtain an initial building instance model;
s6-2: calibrating the size and position of the building model: taking the three-dimensional coordinates and geometric size of the foundation as the reference, the geometric positions and sizes of the building components are calibrated from bottom to top so that they join seamlessly in the X, Y and Z directions; if sizes conflict, adjustment follows the bottom-to-top priority of the Z axis — with the foundation as the base, the position and size of the component on the lower (negative-Z) side are guaranteed correct first, and the component on the upper (positive-Z) side is adjusted accordingly to eliminate the geometric position and size conflicts between components;
for example, if the actual size of the roof conflicts with that of the building body so that the two have a non-negligible intersection that degrades the modeling result, the roof is translated appropriately in the vertical (Z-axis) direction while its own size is kept unchanged; the size and position conflicts of the other components are handled in the same way, so that this calibration corrects the initially assembled building model into a more reasonable building;
s6-3: calibrating the combination and nesting relation of each component of the building model, and checking the modeling rule when each ground feature component is called to confirm that the combination and nesting rules among the components are met;
for example, a roof generally connects to a building body or to other roofs, a building body generally sits on the foundation, a column's vertical extent is far larger than its horizontal extent, and a beam is parallel to the XOY plane with its long side horizontal; whenever a ground-feature component is called, the modeling rules are checked to confirm that the combination and nesting rules between components are satisfied, which avoids constructing invalid or erroneous models and improves modeling efficiency;
s6-4: after the size and position adjustment and the combination-relationship check are completed, the building components are merged and output as an instantiated model; in fig. 12(a) the original three-dimensional building point cloud suffers from damage, holes, deformation and the like, while in fig. 12(b) the building model obtained through template-instantiation modeling remains highly similar to the building point cloud yet is free of such damage and holes.
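The bottom-to-top Z-axis priority rule of s6-2 can be illustrated in one dimension — the component nearer the foundation keeps its extent, and the one above it is translated up until the overlap disappears (a simplification of the full 3-D check):

```python
def resolve_z_conflict(lower, upper):
    """Components are (zmin, zmax) intervals along the Z axis. Following
    the bottom-to-top priority, the lower component is kept fixed and the
    upper one is shifted upward to remove any intersection."""
    lo_min, lo_max = lower
    up_min, up_max = upper
    if up_min < lo_max:                  # non-negligible intersection
        shift = lo_max - up_min
        up_min, up_max = up_min + shift, up_max + shift
    return (lo_min, lo_max), (up_min, up_max)
```

Applied to a building body spanning z = 0..10 and a roof erroneously placed at z = 9..12, the roof is pushed up to 10..13 while keeping its own height, exactly as in the roof example above.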
Further, in step S7, the two-dimensional original ground scene image and the point cloud projection image corresponding to each component are acquired according to the correspondence between the planar point cloud of each component and the original oblique photographic images, and the surface maps of the components are generated through image fusion; the specific algorithm flow is shown in fig. 13 and comprises:
s7-1: the original oblique photographic images corresponding to each component plane point cloud were obtained in step S4; these images are screened to retain those whose size and rotation angle meet the given thresholds;
s7-2: resolving a rotation matrix through a normal vector, and transforming the point cloud to be parallel to an XOY coordinate plane;
s7-3: resolving the longest edge in the fitting surface of the point cloud component, and rotationally transforming the point cloud to be parallel to the Y axis by taking the longest edge as a reference;
s7-4: generating a projection image of the transformed point cloud on an XOY plane;
s7-5: and performing feature point matching and image fusion on the original ground scene image corresponding to the point cloud and the projection image to generate surface texture images of all components with more complete coverage.
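Step s7-2 — rotating a component plane until it is parallel to the XOY plane — reduces to constructing the rotation that carries the plane normal onto the Z axis. A standard Rodrigues-formula sketch (generic linear algebra, not the patent's code):

```python
import numpy as np

def rotation_to_xoy(normal):
    """Rodrigues rotation taking a unit plane normal onto +Z, so the
    rotated component plane lies parallel to the XOY coordinate plane."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)                     # rotation axis (unnormalized)
    s, c = np.linalg.norm(v), np.dot(n, z)
    if s < 1e-12:                          # already (anti)parallel to Z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    k = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])     # skew-symmetric cross matrix
    return np.eye(3) + k + k @ k * ((1 - c) / s**2)
```

Applying the returned matrix to every point of the component point cloud flattens it onto a constant-Z plane, after which the XOY projection image of s7-4 is a straightforward 2-D rasterization.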
Further, in step S8, uv expansion is performed on the building instance model, a texture map of the whole building is generated according to the correspondence between each face of the building model and each face of the generated map, the correspondence between the map and the model is realized, and a specific algorithm flow for outputting the final three-dimensional building instance model containing the texture is as follows:
s8-1: performing uv expansion on the building instance model to generate the corresponding uv map; fig. 14(a) shows an example uv expansion map;
s8-2: the three-dimensional building point cloud is used as intermediate data to establish the correspondence between the uv map and the generated maps, and the maps of the individual building surfaces are merged into one complete building uv map with the OpenCV image processing library: first the position and size of each surface of the building model within the uv expansion map are determined, then the building component point clouds serve as intermediate data to fix the correspondence between the model surfaces and the texture images, the generated surface texture images are combined and spliced into the uv map of the whole building model according to this correspondence, the size and arrangement of each texture image are adjusted to the structure and size of the uv map, and material information is assigned; fig. 14(b) shows the uv expansion map with the texture images added;
s8-3: editing the building instance model to realize the correspondence between the uv map and the model, generating the building model containing the texture images, and outputting the complete building instance model; fig. 15 shows an output textured building model, which is highly similar to the three-dimensional point cloud data while also being complete and fast to produce.
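The splicing of per-surface maps into one building uv map (s8-2) is essentially texture-atlas packing. A naive row-packing sketch — real uv layout, including the v-axis flip most tools apply, is more involved, and the atlas size here is an arbitrary choice:

```python
import numpy as np

def pack_uv_atlas(face_textures, atlas_size=256):
    """Naive row packing of per-face texture images (H x W x 3 uint8
    arrays) into one square uv atlas; returns the atlas and, per face,
    the normalized uv rectangle (u0, v0, u1, v1) it occupies."""
    atlas = np.zeros((atlas_size, atlas_size, 3), dtype=np.uint8)
    x = y = row_h = 0
    uv_rects = []
    for tex in face_textures:
        h, w = tex.shape[:2]
        if x + w > atlas_size:          # row full: start a new row
            x, y, row_h = 0, y + row_h, 0
        atlas[y:y + h, x:x + w] = tex
        uv_rects.append((x / atlas_size, y / atlas_size,
                         (x + w) / atlas_size, (y + h) / atlas_size))
        x += w
        row_h = max(row_h, h)
    return atlas, uv_rects
```

Each returned rectangle is what the model-editing step would write into the mesh's uv coordinates so that every model surface samples its own generated map.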

Claims (9)

1. A building template instantiated modeling method based on point cloud data is characterized by comprising the following steps:
s1: designing a template file standard structure and a storage format of a universal component, disassembling a building model according to the universal component, respectively modeling the common types of the components, designing a database storage structure of a model library, and storing template files and attribute information of the components of the building;
s2: performing image-based indirect segmentation on the point cloud of the target scene, separating to obtain the point cloud of the building of the target scene, and performing filtering processing on the generated point cloud of the building to eliminate error segmentation points and outliers;
s3: carrying out monomer operation on the building point cloud cluster by using a connected domain algorithm, and segmenting a single building contained in the point cloud cluster;
s4: extracting the characteristic parameters of a single building point cloud: the point cloud is segmented by a region growing algorithm to obtain the planar point cloud of each component, and the overall parameters of the building point cloud together with the common and individual parameters of each component, called the first characteristic parameters, are extracted;
s5: matching the first characteristic parameters with template files in the model library to obtain a component template meeting preset conditions, and instantiating and editing the obtained component template according to the obtained first characteristic parameters;
s6: assembling each member after the instantiation edition, checking the position and size relation and the combination rule, and generating a building instantiation model after checking;
s7: acquiring a two-dimensional original ground scene image and a point cloud projection image corresponding to each component according to the corresponding relation between the planar point cloud of each component and the original oblique photographic image obtained by shooting, and generating a mapping of each surface of each component through image fusion, wherein the mapping is called as a generated mapping;
s8: and carrying out uv expansion on the building example model, generating a texture mapping of the whole building according to the corresponding relation between each surface of the building model and each surface of the generated mapping, realizing the correspondence between the mapping and the model, and outputting the final three-dimensional building example model containing the texture.
2. The method according to claim 1, wherein the specific process of S1 is as follows:
s1-1: setting a standard structure of a template file to comprise four parts of a template identifier, an attribute set, an object set and a rule set, wherein the storage format is OSGB, OBJ and FBX;
s1-2: decomposing a building into a roof, a floor, a column beam, a foundation and other five parts of components, respectively establishing corresponding template models for storage, dividing each part of components into main features and secondary features with universality and distinctiveness, and storing and reading template files by taking the features as step-by-step retrieval information;
s1-3: after modeling of each building component is completed, an overall storage structure for the template library is designed, and the template files and extended data are stored hierarchically; different types of ground-object model template files are retrieved and read using extended type parameters, shape parameters and the various feature data as indexes, which improves lookup efficiency; the building model template database is established with five sub-tables, namely roof, building body, foundation, column/beam and others, each table storing, layer by layer, the feature parameters of the corresponding components and the storage paths of the template files, so that a target template can be quickly obtained by retrieval with the selected feature parameters.
3. The method according to claim 1, wherein the specific process of S2 is as follows:
s2-1: building a neural network to carry out semantic segmentation on the ground scene image, and identifying a building area contained in the image;
s2-2: neglecting the influence of a Z axis in the point cloud of the three-dimensional scene, projecting the point cloud to an XOY coordinate plane, and acquiring a two-dimensional projection image of the point cloud;
s2-3: segmenting the point cloud projection image by using the trained semantic segmentation network, projecting a two-dimensional identification result to the three-dimensional point cloud after identifying a building part, and segmenting the three-dimensional building point cloud by the aid of corresponding geometric dimensions of the two parts;
s2-4: filtering the building point cloud with an outlier filter: for each point in the input point cloud, the average distance d to all points in its k-neighborhood is computed, yielding an array of neighborhood average distances; assuming this array follows a Gaussian distribution, a threshold range is preset, d values outside the range are judged to belong to the outlier set, and the points in the outlier set are deleted.
4. The method according to claim 1, wherein the specific process of S3 is as follows:
s3-1: binarizing the semantic segmentation image and marking the parts identified as building; detecting connected domains with the two-pass scanning method, which reads through the image twice: in the first pass, every non-zero pixel is given a numeric label — if the pixels in its left and upper neighborhoods already carry label values, the smaller of those labels is taken as the label of the current pixel, and if none of them carries a label, a new label value is assigned; since the first pass can leave the same connected domain carrying several different numeric labels, a second pass merges the labels belonging to the same connected domain so that all its pixels share one label; each point in the image is examined using the eight neighbors in the horizontal, vertical and diagonal directions as its neighborhood, the connected domains are detected and judged, and the different connected domains identified are marked with different pixel values;
s3-2: segmenting the buildings according to the screened image: the coordinate correspondence between the two-dimensional label image and the three-dimensional building point cloud is established, the different pixel values in the label image are traversed, the point cloud is judged and classified point by point, and point cloud clusters belonging to the same connected domain are assigned to the same building, thereby singulating the building point cloud and obtaining relatively independent single-building point cloud data.
5. The method according to claim 1, wherein the specific process of S4 is as follows:
s4-1: first acquiring the overall parameters of the building so that the subsequent building model can be placed correctly in the three-dimensional ground scene: the coordinates of the three-dimensional building point cloud are read, together with the maximum and minimum values xmax, xmin, ymax, ymin, zmax and zmin of the point cloud's outer bounding box along the X, Y and Z coordinate axes;
s4-2: using an algorithm based on region growing to disassemble each part of components contained in a building, classifying points with similar properties in the point cloud through neighborhood information so as to divide the point cloud into different regions with certain difference in some aspect, and using curvature as a main basis for distinguishing different building point cloud components to divide the building point cloud to obtain the surface of each component;
S4-3: performing plane fitting on the segmented components, calculating aspect ratio, curvature and normal information to identify the component types, and further classifying the segmented components into roofs, foundations, walls, columns, beams and the like;
S4-4: after decomposing and identifying the building components, extracting the feature information corresponding to the different component categories; first extracting the common information of the components, including the point cloud size, position and dimensions, which is used to screen valid components;
S4-5: projecting the decomposed point clouds of the building component planes from the world coordinate system of the three-dimensional scene into the pixel coordinate system of the two-dimensional image, obtaining the regions corresponding to the building component point clouds in the original ground scene image; obtaining the actual boundary points by computing the convex hull of the point cloud, computing the minimum rectangular bounding box of the point cloud in the image, obtaining the corresponding minimum rectangular region of the image, cropping this region and rotating it to be parallel to the X and Y coordinate axes, thereby obtaining the original ground scene image corresponding to each point cloud plane;
S4-6: processing according to the component types, and acquiring the individual feature information, including the component type, by combining each component point cloud with its corresponding original landscape image.
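The curvature-driven region growing of S4-2 can be sketched as follows (an illustrative sketch, not the patent's implementation; it assumes per-point normals and curvatures have already been estimated, e.g. by PCA over local neighborhoods, and the thresholds are hypothetical):

```python
import numpy as np
from scipy.spatial import cKDTree

def region_grow(points, normals, curvatures,
                angle_thresh_deg=15.0, curv_thresh=0.05, k=8):
    """Minimal region growing over a point cloud.

    Seeds start at the lowest-curvature points; a neighbor joins the
    current region when its normal deviates from the seed's normal by
    less than angle_thresh_deg, and itself becomes a new seed when its
    curvature is below curv_thresh (i.e. it lies on a smooth surface).
    """
    tree = cKDTree(points)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    unassigned = set(range(len(points)))
    regions = []
    for start in np.argsort(curvatures):     # lowest curvature first
        start = int(start)
        if start not in unassigned:
            continue
        region, seeds = {start}, [start]
        unassigned.discard(start)
        while seeds:
            s = seeds.pop()
            _, idx = tree.query(points[s], k=k)
            for j in np.atleast_1d(idx):
                j = int(j)
                if j in unassigned and \
                        abs(np.dot(normals[s], normals[j])) >= cos_thresh:
                    region.add(j)
                    unassigned.discard(j)
                    if curvatures[j] < curv_thresh:
                        seeds.append(j)
        regions.append(sorted(region))
    return regions
```

Two perpendicular planar patches (e.g. a wall meeting a roof) separate cleanly because their normals differ by far more than the angle threshold.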
6. The method according to claim 1, wherein the specific process of S5 is as follows:
S5-1: for each category of components, searching step by step, first by the primary features and then by the secondary features, and matching the corresponding template file from the building template database;
S5-2: instantiating and editing the template file according to the acquired position, size and inclination-angle parameters to obtain each building component instance model.
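The hierarchical template lookup and instantiation of S5-1/S5-2 can be sketched as follows (an illustrative sketch; the dict-based database layout and field names are hypothetical stand-ins for the patent's template file format):

```python
def match_template(db, primary, secondary=None):
    """Step-by-step template search: filter the database by the
    primary feature (component category) first, then narrow the
    candidates by each secondary feature in turn; if a secondary
    feature matches nothing, keep the broader candidate set."""
    candidates = [t for t in db if t.get("type") == primary]
    if secondary:
        for key, value in secondary.items():
            narrowed = [t for t in candidates if t.get(key) == value]
            if narrowed:
                candidates = narrowed
    return candidates[0] if candidates else None

def instantiate(template, position, size, tilt_deg=0.0):
    """Bind the acquired position, size and inclination-angle
    parameters onto the matched template to form an instance model."""
    inst = dict(template)                     # leave the template intact
    inst.update(position=tuple(position), size=tuple(size), tilt=tilt_deg)
    return inst
```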
7. The method according to claim 1, wherein the specific process of S6 is as follows:
S6-1: assembling the instantiated template files according to the coordinates and sizes of the actual building components to obtain an initial building instance model;
S6-2: calibrating the size and position of the building model: taking the three-dimensional coordinates and geometric size of the foundation as the reference, calibrating the geometric positions and sizes of the building components from bottom to top so that the building components connect seamlessly in the X, Y and Z directions; if a size conflict occurs, adjusting according to the bottom-to-top priority along the Z axis;
S6-3: calibrating the combination and nesting relations of the components of the building model, checking the modeling rules when each ground feature component is invoked to confirm that the combination and nesting rules among the components are satisfied;
S6-4: after completing the size and position adjustment and the combination-relation check, merging the building components and outputting them as an instantiated model.
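The bottom-to-top Z-axis calibration of S6-2 can be sketched as follows (an illustrative sketch for a simple vertical stack such as foundation–wall–roof; the dict representation with 'zmin'/'zmax' bounds is a hypothetical simplification of the instance models):

```python
def calibrate_z(components):
    """Bottom-up Z calibration: sort components by their lower Z
    bound and, starting from the foundation, shift each component so
    its bottom sits exactly on the top of the component below it,
    preserving each component's own height. On a size conflict the
    lower component wins, matching the bottom-to-top Z priority."""
    comps = sorted(components, key=lambda c: c["zmin"])
    for lower, upper in zip(comps, comps[1:]):
        height = upper["zmax"] - upper["zmin"]
        upper["zmin"] = lower["zmax"]        # snap onto the part below
        upper["zmax"] = upper["zmin"] + height
    return comps
```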
8. The method according to claim 1, wherein the specific process of S7 is as follows:
S7-1: obtaining from step S4 the original oblique-photography image corresponding to the point cloud of each component plane, and screening the images to retain those whose size and rotation angle are within given thresholds;
S7-2: calculating a rotation matrix from the normal vector and transforming the point cloud to be parallel to the XOY coordinate plane;
S7-3: solving for the longest edge of the fitted surface of the point cloud component and, taking the longest edge as the reference, rotating the point cloud to be parallel to the Y axis;
S7-4: sampling the Z values to generate a projection image of the transformed point cloud on the XOY plane;
S7-5: performing feature point matching and image fusion between the original ground scene image corresponding to the point cloud and the projection image to generate a surface texture image of each component with more complete coverage.
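The rotation of S7-2 — aligning a plane's normal vector with the Z axis so the point cloud lies parallel to the XOY plane — can be sketched with Rodrigues' rotation formula (an illustrative sketch, not necessarily the patent's construction):

```python
import numpy as np

def rotation_to_xoy(normal):
    """Rotation matrix mapping the given plane normal onto the +Z
    axis. Built from Rodrigues' formula: with v = n x z and
    c = n . z, R = I + [v]x + [v]x^2 / (1 + c)."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)
    c = float(np.dot(n, z))
    if np.linalg.norm(v) < 1e-12:            # already (anti)parallel to Z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    k = np.array([[0.0, -v[2], v[1]],        # skew (cross-product) matrix
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + k + k @ k / (1.0 + c)

# usage: points_flat = points @ rotation_to_xoy(plane_normal).T
```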
9. The method according to claim 1, wherein the specific process of S8 is as follows:
S8-1: performing UV unwrapping on the building instance model to generate a corresponding UV map;
S8-2: using the three-dimensional building point cloud as intermediate data to establish the correspondence between each face in the UV map and the generated textures, and merging the textures of the building faces into a complete building UV image through the OpenCV image processing library;
S8-3: editing the building instance model to establish the correspondence between the UV image and the building instance model, generating a building model containing the texture image, and outputting the complete building instance model.
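The per-face texture merge of S8-2 can be sketched as follows (an illustrative sketch: the rectangular UV layout is a hypothetical simplification, and a dependency-free nearest-neighbour resize stands in for OpenCV's `cv2.resize`, which the patent's pipeline would use):

```python
import numpy as np

def nn_resize(img, out_h, out_w):
    """Nearest-neighbour resize (stand-in for cv2.resize)."""
    ys = np.arange(out_h) * img.shape[0] // out_h
    xs = np.arange(out_w) * img.shape[1] // out_w
    return img[ys][:, xs]

def merge_face_textures(face_textures, face_uv_rects, atlas_size=(256, 256)):
    """Compose per-face texture images into one UV atlas.

    face_uv_rects gives each face's normalized UV rectangle
    (u0, v0, u1, v1); each face texture is resized into its
    rectangle of the atlas image.
    """
    H, W = atlas_size
    atlas = np.zeros((H, W, 3), dtype=np.uint8)
    for img, (u0, v0, u1, v1) in zip(face_textures, face_uv_rects):
        x0, x1 = int(u0 * W), int(u1 * W)
        y0, y1 = int(v0 * H), int(v1 * H)
        atlas[y0:y1, x0:x1] = nn_resize(img, y1 - y0, x1 - x0)
    return atlas
```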
CN202210739217.9A 2022-06-27 2022-06-27 Building template real-time modeling method based on point cloud data Pending CN115033967A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210739217.9A CN115033967A (en) 2022-06-27 2022-06-27 Building template real-time modeling method based on point cloud data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210739217.9A CN115033967A (en) 2022-06-27 2022-06-27 Building template real-time modeling method based on point cloud data

Publications (1)

Publication Number Publication Date
CN115033967A true CN115033967A (en) 2022-09-09

Family

ID=83127913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210739217.9A Pending CN115033967A (en) 2022-06-27 2022-06-27 Building template real-time modeling method based on point cloud data

Country Status (1)

Country Link
CN (1) CN115033967A (en)

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115661378A (en) * 2022-12-28 2023-01-31 北京道仪数慧科技有限公司 Building model reconstruction method and system
CN115661378B (en) * 2022-12-28 2023-03-21 北京道仪数慧科技有限公司 Building model reconstruction method and system
CN115965759B (en) * 2023-01-04 2023-06-16 浙江柒和环境艺术设计有限公司 Method for building digital modeling by using laser radar
CN115965759A (en) * 2023-01-04 2023-04-14 浙江柒和环境艺术设计有限公司 Method for building digital modeling by using laser radar
CN116341050B (en) * 2023-02-07 2024-01-30 浙江大学 Robot intelligent construction method based on point cloud data
CN116246069B (en) * 2023-02-07 2024-01-16 北京四维远见信息技术有限公司 Method and device for self-adaptive terrain point cloud filtering, intelligent terminal and storage medium
CN116246069A (en) * 2023-02-07 2023-06-09 北京四维远见信息技术有限公司 Method and device for self-adaptive terrain point cloud filtering, intelligent terminal and storage medium
CN116341050A (en) * 2023-02-07 2023-06-27 浙江大学 Robot intelligent construction method based on point cloud data
CN115830262A (en) * 2023-02-14 2023-03-21 济南市勘察测绘研究院 Real scene three-dimensional model establishing method and device based on object segmentation
CN116258820B (en) * 2023-05-15 2023-09-08 深圳大学 Large-scale urban point cloud data set and building individuation construction method and related device
CN116258820A (en) * 2023-05-15 2023-06-13 深圳大学 Large-scale urban point cloud data set and building individuation construction method and related device
CN116485586B (en) * 2023-06-26 2023-12-26 厦门泛卓信息科技有限公司 Intelligent building management method and system based on comprehensive digital platform
CN116485586A (en) * 2023-06-26 2023-07-25 厦门泛卓信息科技有限公司 Intelligent building management method and system based on comprehensive digital platform
CN116935250B (en) * 2023-07-28 2024-02-13 广州葛洲坝建设工程有限公司 Building template size estimation method based on unmanned aerial vehicle shooting
CN116935250A (en) * 2023-07-28 2023-10-24 广州葛洲坝建设工程有限公司 Building template size estimation method based on unmanned aerial vehicle shooting
CN116895022A (en) * 2023-09-11 2023-10-17 广州蓝图地理信息技术有限公司 Building boundary extraction method based on point cloud data processing
CN116895022B (en) * 2023-09-11 2023-12-01 广州蓝图地理信息技术有限公司 Building boundary extraction method based on point cloud data processing
CN117011413A (en) * 2023-09-28 2023-11-07 腾讯科技(深圳)有限公司 Road image reconstruction method, device, computer equipment and storage medium
CN117011413B (en) * 2023-09-28 2024-01-09 腾讯科技(深圳)有限公司 Road image reconstruction method, device, computer equipment and storage medium
CN117095143A (en) * 2023-10-19 2023-11-21 腾讯科技(深圳)有限公司 Virtual building construction method, device, electronic equipment and storage medium
CN117095143B (en) * 2023-10-19 2024-03-01 腾讯科技(深圳)有限公司 Virtual building construction method, device, electronic equipment and storage medium
CN117495932A (en) * 2023-12-25 2024-02-02 国网山东省电力公司滨州供电公司 Power equipment heterologous point cloud registration method and system
CN117495932B (en) * 2023-12-25 2024-04-16 国网山东省电力公司滨州供电公司 Power equipment heterologous point cloud registration method and system
CN117830646A (en) * 2024-03-06 2024-04-05 陕西天润科技股份有限公司 Method for rapidly extracting building top elevation based on stereoscopic image

Similar Documents

Publication Publication Date Title
CN115033967A (en) Building template real-time modeling method based on point cloud data
Liow et al. Use of shadows for extracting buildings in aerial images
Qin et al. Automated reconstruction of parametric bim for bridge based on terrestrial laser scanning data
CN103839286B (en) The true orthophoto of a kind of Object Semanteme constraint optimizes the method for sampling
Bassier et al. Comparison of Wall Reconstruction algorithms from Point Cloud Data for as-built BIM
CN117315146B (en) Reconstruction method and storage method of three-dimensional model based on trans-scale multi-source data
CN111027538A (en) Container detection method based on instance segmentation model
CN115272204A (en) Bearing surface scratch detection method based on machine vision
JPH07220090A (en) Object recognition method
CN112396701A (en) Satellite image processing method and device, electronic equipment and computer storage medium
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN113344782B (en) Image stitching method and device, storage medium and electronic device
CN114494385A (en) Visual early warning method for water delivery tunnel diseases
Pellis et al. Assembling an image and point cloud dataset for heritage building semantic segmentation
CN116091706B (en) Three-dimensional reconstruction method for multi-mode remote sensing image deep learning matching
JP2003141567A (en) Three-dimensional city model generating device and method of generating three-dimensional city model
CN111242939A (en) Method for identifying state of pressing plate
CN116051771A (en) Automatic photovoltaic BIM roof modeling method based on unmanned aerial vehicle oblique photography model
CN115685237A (en) Multi-mode three-dimensional target detection method and system combining viewing cones and geometric constraints
Dekeyser et al. Cultural heritage recording with laser scanning, computer vision and exploitation of architectural rules
CN114782357A (en) Self-adaptive segmentation system and method for transformer substation scene
Ahmed et al. High-quality building information models (BIMs) using geospatial datasets
Xue et al. Rough registration of BIM element projection for construction progress tracking
CN117079157B (en) Mountain area photovoltaic panel monomer extraction method
CN117953164B (en) Method and system for improving drawing measurement quality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination