CN115661398A - Building extraction method, device and equipment for live-action three-dimensional model - Google Patents

Building extraction method, device and equipment for live-action three-dimensional model

Info

Publication number
CN115661398A
CN115661398A
Authority
CN
China
Prior art keywords
building
dimensional model
plane
primitive
live
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211223721.XA
Other languages
Chinese (zh)
Inventor
乐鹏 (Yue Peng)
于大宇 (Yu Dayu)
梁哲恒 (Liang Zheheng)
庞亚菲 (Pang Yafei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South Digital Technology Co ltd
Wuhan University WHU
Original Assignee
South Digital Technology Co ltd
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South Digital Technology Co ltd, Wuhan University WHU filed Critical South Digital Technology Co ltd
Priority to CN202211223721.XA priority Critical patent/CN115661398A/en
Publication of CN115661398A publication Critical patent/CN115661398A/en
Pending legal-status Critical Current

Abstract

The invention discloses a building extraction method, device and equipment for a live-action three-dimensional model. Starting from an incomplete building body structure, building primitives that were mistakenly eliminated are greedily recovered using the topological adjacency among primitives. The invention automatically extracts structurally complete buildings without malformed primitives from a live-action three-dimensional model, achieves an extremely high recall rate and a high degree of automation, and greatly reduces the workload and cost of objectifying and monomerizing (singulating) live-action three-dimensional models.

Description

Building extraction method, device and equipment for live-action three-dimensional model
Technical Field
The invention belongs to the field of surveying and mapping data processing, and particularly relates to a building extraction method, device and equipment for a live-action three-dimensional model.
Background
With the rapid development of unmanned aerial vehicle (UAV) technology and optical sensor technology, airborne LiDAR and oblique photogrammetry have enabled low-cost, rapid and accurate acquisition of wide-range three-dimensional surface information. In particular, UAV oblique photogrammetry can effectively acquire the coordinates and texture of both the roof and the side facades of a building, and is increasingly important in the construction of three-dimensional digital cities and smart cities. The collected three-dimensional surface information exists in the form of point clouds after processing, from which a Digital Orthophoto Map (DOM), a Digital Surface Model (DSM) and a live-action three-dimensional model can be further generated.
Compared with two-dimensional products such as the DSM and the DOM, the live-action three-dimensional model contains detailed three-dimensional geometric and texture features. Compared with a three-dimensional point cloud, the live-action three-dimensional model has advantages such as spatial continuity and explicit adjacency, occupies less disk space and memory, and filters out some geometrically irrelevant points during reconstruction from the point cloud. Compared with three-dimensional models in the field of computer graphics, the live-action three-dimensional model represents wide-range, fine three-dimensional surface information and contains far more primitives. Live-action three-dimensional models are therefore widely used in various types of 3D geographic applications and spatial analysis.
A great deal of research has been carried out on ground filtering, feature extraction, scene segmentation and spatial clustering of DSM, DOM and point cloud data; however, research on the segmentation and feature extraction of live-action three-dimensional model data, which matter most in three-dimensional digital cities, remains rare. In addition, the fully automatic mechanism of live-action modeling software such as PhotoMesh and ContextCapture constructs a continuous, integral mesh model, so the generated live-action three-dimensional model exhibits a "one-skin" phenomenon: all ground features are represented by a single three-dimensional mesh, making semantic query and analysis difficult and failing to meet diversified application requirements. Objectification and monomerization (singulation) of the live-action three-dimensional model are therefore an urgent requirement in the construction of three-dimensional digital cities, and no mature building extraction method for live-action three-dimensional models currently exists at home or abroad.
Building extraction, which has been applied to DOM and point clouds, can be classified into supervised and unsupervised types. The extraction precision of supervised methods (such as convolutional neural networks and graph neural networks) is usually much better than that of unsupervised methods, but supervised methods need a large number of samples, and labeling those samples is extremely time-consuming and labor-intensive. Neither supervised nor unsupervised methods extract buildings completely: the extracted buildings are often incomplete or their boundaries irregular. The invention therefore provides a building extraction method for live-action three-dimensional models that achieves high-precision extraction of buildings from wide-range live-action three-dimensional models, guarantees the integrity and regular boundaries of the extracted buildings, and requires no manual labeling.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides a building extraction method, a building extraction device and building extraction equipment for a live-action three-dimensional model, which can realize high-precision extraction of buildings of large-range live-action three-dimensional models, ensure the integrity and the regular boundary of the extracted buildings and do not need manual labeling. The invention is realized by the following technical scheme:
a building extraction method for a live-action three-dimensional model comprises the following steps:
step 1, preprocessing and analyzing the live-action three-dimensional model to splice all tiles of the live-action three-dimensional model into a complete three-dimensional model and analyze the live-action three-dimensional model into a geometric primitive set
Figure BDA0003878095040000021
Step 2, separating the live-action three-dimensional modelGround and non-ground primitives of (2), assembling the primitives
Figure BDA0003878095040000022
Separation into ground primitive sets
Figure BDA0003878095040000023
And non-ground primitive set
Figure BDA0003878095040000024
Step 3, non-ground primitive set
Figure BDA0003878095040000025
Performing over-segmentation to cluster a plurality of primitives into a set of k clusters with uniform properties and regular boundaries
Figure BDA0003878095040000026
Step 4, clustering the clusters
Figure BDA0003878095040000027
Performing plane feature detection to quickly generate a set of l planes with regular boundaries
Figure BDA0003878095040000028
Step 5, the plane set is aligned
Figure BDA0003878095040000029
And performing greedy elimination on non-building planes, namely eliminating the non-building planes such as vegetation, urban furniture, trees, vehicles and the like to the maximum extent. In the removing process, on the basis of ensuring the integrity of the main structure of the building, the plane of a part of the building is allowed to be removed by mistake. Getting the plane set of the main structure of the building after eliminating
Figure BDA00038780950400000210
And non-building plane sets
Figure BDA00038780950400000211
Step 6, obtaining a building body structure plane set after greedy elimination
Figure BDA00038780950400000212
All primitives contained
Figure BDA00038780950400000213
Based on the above, greedy recovery is performed by using topological adjacency relation between primitives, so as to recover the primitive set contained in the building plane that is mistakenly eliminated in the step 5
Figure BDA00038780950400000214
Thereby obtaining a final set of building elements with integrity taken into account
Figure BDA00038780950400000215
Figure BDA00038780950400000216
Step 7, output or save
Figure BDA00038780950400000217
The three-dimensional model of the building scene formed by
Figure BDA00038780950400000218
And forming a non-building real-scene three-dimensional model.
Further, in step 2, the ground primitives and non-ground primitives of the live-action three-dimensional model are separated by a primitive-oriented cloth simulation method.
Further, the specific implementation of step 2 is as follows:
s201: vertically turning the live-action three-dimensional model along the Z coordinate direction, i.e. collecting
Figure BDA00038780950400000219
The Z coordinates of the vertices of all the primitives in (1) take opposite values;
s202: simulating the falling of a cloth material consisting of particles above the turned real-scene three-dimensional model, wherein the initial heights of all the particles constituting the cloth material are the highest points of the turned primitives, the initial horizontal position is determined by the cloth material resolution and an outer surrounding box of the real-scene three-dimensional model, and the cloth material slowly falls under the action of gravity;
s203: gradually stopping moving after the particles of the cloth contact the live-action three-dimensional model, realizing the contact of the cloth and the live-action three-dimensional model through collision detection and judgment based on ray intersection, and setting the cloth particles as immovable if the current vertical height of the cloth particles is lower than the collision point of the cloth particles and the live-action three-dimensional model;
s204: finally, the shape of the static cloth is similar to the terrain, then the Euclidean space distance from each element to the cloth is calculated, and if the Euclidean space distance exceeds a set threshold value, the Euclidean space distance is added to the cloth
Figure BDA0003878095040000031
If it is within the threshold value, add it to
Figure BDA0003878095040000032
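As a concrete illustration, steps S201-S204 can be sketched as follows. This is a minimal grid-based sketch that assumes primitive centroids as input and a spring-smoothed cloth; the function name, the gravity step size and the smoothing weights are illustrative choices, not the patent's implementation:

```python
import numpy as np

def cloth_filter(points, cell=1.0, threshold=0.5, iters=50):
    """Simplified primitive-oriented cloth-simulation ground filter.

    points: (N, 3) array of primitive centroids.
    Returns a boolean mask, True where a primitive is classified as ground.
    """
    # S201: invert the scene so the cloth can drape over it from above.
    inv_z = -points[:, 2]
    xmin, ymin = points[:, 0].min(), points[:, 1].min()
    nx = int(np.ceil((points[:, 0].max() - xmin) / cell)) + 1
    ny = int(np.ceil((points[:, 1].max() - ymin) / cell)) + 1
    ix = ((points[:, 0] - xmin) / cell).astype(int)
    iy = ((points[:, 1] - ymin) / cell).astype(int)
    # Collision height under each cloth particle: the highest inverted point.
    surf = np.full((nx, ny), -np.inf)
    np.maximum.at(surf, (ix, iy), inv_z)
    surf[~np.isfinite(surf)] = inv_z.min()
    # S202/S203: the cloth starts above the highest point and falls under
    # gravity; neighbour averaging acts as internal springs, and a particle
    # can never sink below its collision height.
    cloth = np.full((nx, ny), inv_z.max() + 1.0)
    for _ in range(iters):
        cloth -= 0.3                      # gravity step (illustrative size)
        pad = np.pad(cloth, 1, mode="edge")
        cloth = 0.5 * cloth + 0.125 * (pad[:-2, 1:-1] + pad[2:, 1:-1]
                                       + pad[1:-1, :-2] + pad[1:-1, 2:])
        cloth = np.maximum(cloth, surf)   # collision constraint
    # S204: classify by the distance from each (inverted) point to the cloth.
    return np.abs(cloth[ix, iy] - inv_z) <= threshold
```

On a flat scene with one raised block, the cloth settles on the inverted ground while bridging the inverted building "pit", so roof primitives end up far from the cloth and are classified as non-ground.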
Further, the non-ground primitive set P_ng in step 3 is over-segmented, for which a primitive-based over-segmentation method can be selected to cluster the primitives into a set C = {c_1, ..., c_k} of k clusters with uniform properties and regular boundaries, comprising the following steps:
s301, comprehensively considering the space proximity characteristic, the surface characteristic and the color characteristic of the primitive, constructing a primitive heterogeneity distance formula D (p) i ,p j )=μ 1 D s (p i ,p j )+μ 2 D e (p i ,p j )+μ 3 D c (p i ,p j ). In the formula (I), the compound is shown in the specification,
Figure BDA0003878095040000035
Figure BDA0003878095040000036
and
Figure BDA0003878095040000037
respectively normalized spatial proximity distance, surface feature difference distance and color difference distance, mu, between two elements 1 、μ 2 And mu 3 Respectively are the weight factors corresponding to the three,
Figure BDA0003878095040000038
is p i The (q) th vertex of (a),
Figure BDA0003878095040000039
and
Figure BDA00038780950400000310
are each p i And p j Normal vector of (i), i.e.
Figure BDA00038780950400000311
p i And p j Color difference distance D between two primitives c (p i ,p j ) Is calculated in the CIE Lab linear color space,
Figure BDA00038780950400000312
as element p i Average texture color values in CIE Lab space;
s302, constructing a heterogeneous cost function of the cluster based on the primitive heterogeneous distance formula D (·)
Figure BDA00038780950400000313
Figure BDA00038780950400000314
And determining constraint conditions thereof for judging the sum of heterogeneous costs of all clusters, wherein
Figure BDA00038780950400000315
If r ij =1 representing primitive p i The center primitive of a cluster can be represented and this cluster contains all the satisfied r ij Non-center primitive of =0; j (r) ij ) With the constraint of
Figure BDA00038780950400000316
Wherein I (-) is an exponential function, k represents the number of expected clusters;
s303, constructing and solving an energy optimization function based on the heterogeneous cost function J (-) and the constraint conditions thereof
Figure BDA00038780950400000317
Figure BDA00038780950400000318
Thereby over-dividing the live-action three-dimensional model into a set of k clusters with uniform properties and regular boundaries
Figure BDA00038780950400000319
A bottom-up merging-based energy minimization method may be selected for the solution of the energy equations to
Figure BDA00038780950400000320
Center primitive set
Figure BDA00038780950400000321
Each primitive outside according to the mapping function
Figure BDA00038780950400000322
It is allocated to each cluster, wherein
Figure BDA00038780950400000323
D(p j ,cp i ) As element p j And primitive cp i Heterogeneous distance between each other, and making each primitive
Figure BDA0003878095040000041
Cluster-centric primitives classified therewith
Figure BDA0003878095040000042
The sum of the heterogeneity distance of (a) is minimal.
Further, in the bottom-up merging-based energy minimization method, a regularization term weighted by a parameter λ is first added to the energy optimization function E(r_ij); the initial value of the regularization parameter λ is set to the median of the minimum heterogeneity distance between each primitive and its adjacent primitives, and λ is doubled at each iteration. Initially, every primitive is set as the center primitive of its own cluster, and the center primitives are merged bottom-up continuously until the number of clusters falls to k.
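The merging schedule can be sketched as a union-find over the primitive adjacency graph. This is a simplified, Kruskal-style stand-in that keeps only the cheapest-first merging order (the doubling λ schedule chiefly controls how aggressively merges are accepted), not the patent's exact energy update:

```python
import heapq

def merge_to_k(n, edges, k):
    """Bottom-up merging sketch: every primitive starts as its own cluster
    centre, and the cheapest adjacent pair (by heterogeneity distance) is
    merged repeatedly until only k clusters remain.

    n: number of primitives; edges: iterable of (distance, i, j); k: target
    cluster count. Returns a list mapping each primitive to a cluster id.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    heap = list(edges)
    heapq.heapify(heap)
    clusters = n
    while clusters > k and heap:
        d, i, j = heapq.heappop(heap)
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj  # merge the two clusters
            clusters -= 1
    return [find(i) for i in range(n)]
```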
Further, the normal vector of a triangle primitive is computed from the spatial coordinates of its three vertices v_1, v_2 and v_3 as the normalized cross product n = (v_2 − v_1) × (v_3 − v_1) / ‖(v_2 − v_1) × (v_3 − v_1)‖.
The average texture color value is calculated as follows: in the Adobe RGB color space, compute the spatial range of primitive p_i in the y direction and the number of scan lines; from top to bottom, for each scan line, intersect all edges of the primitive with the scan line and sort the resulting abscissas from left to right, the odd-numbered intersected edges being incoming edges and the even-numbered ones outgoing edges; then interpolate the spatial coordinates of the pixels on the scan line between an incoming edge and an outgoing edge, and compute the UV coordinates of all pixel points inside the primitive by the barycentric coordinate method, the U and V coordinates being U = (S_c·U_1 + S_b·U_2 + S_a·U_3)/S_t and V = (S_c·V_1 + S_b·V_2 + S_a·V_3)/S_t, where S_a is the area of the triangle formed by the pixel point and vertices v_1 and v_2 of the primitive, S_b the area of the triangle formed by the pixel point and v_1 and v_3, S_c the area of the triangle formed by the pixel point and v_2 and v_3, U_1, U_2, U_3, V_1, V_2 and V_3 are the UV coordinates of v_1, v_2 and v_3, and S_t = S_a + S_b + S_c. After all UV coordinates are solved, texture values are sampled from the texture image of the primitive at those UV coordinates, the average of all texture values is taken as the texture value of the primitive, and the Adobe RGB color space is then converted to the CIE Lab color space.
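The normal computation and the area-ratio (barycentric) UV interpolation can be sketched as follows; the sub-triangle areas S_a, S_b and S_c follow the definitions above, and the weight of each vertex is the area of the sub-triangle opposite to it over the total area:

```python
import numpy as np

def triangle_normal(v1, v2, v3):
    """Unit normal of a triangle primitive: (v2 - v1) x (v3 - v1), normalised."""
    n = np.cross(v2 - v1, v3 - v1)
    return n / np.linalg.norm(n)

def barycentric_uv(p, v1, v2, v3, uv1, uv2, uv3):
    """Interpolate a pixel point's UV coordinate by the area-ratio method."""
    def area(a, b, c):
        return 0.5 * np.linalg.norm(np.cross(b - a, c - a))
    s_a = area(p, v1, v2)   # triangle (p, v1, v2), opposite v3
    s_b = area(p, v1, v3)   # triangle (p, v1, v3), opposite v2
    s_c = area(p, v2, v3)   # triangle (p, v2, v3), opposite v1
    s_t = s_a + s_b + s_c
    return (s_c * uv1 + s_b * uv2 + s_a * uv3) / s_t
```

At a vertex the interpolation returns that vertex's own UV, and at the centroid it returns the mean of the three UVs, which is the expected barycentric behaviour.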
Further, plane feature detection on the cluster set C can use a cluster-based plane feature detection method, thereby quickly generating a set S = {S_1, ..., S_l} of l planes with regular boundaries, comprising the following steps:
s401: for the cluster set
Figure BDA0003878095040000049
Selecting a set
Figure BDA00038780950400000410
As a plane
Figure BDA00038780950400000411
And clustering the seed cluster from the set
Figure BDA00038780950400000412
In the middle of removingWherein the plane S m Is made up of a subset of the cluster psi;
s402: computing k around the seed cluster 1 A set of adjacent clusters
Figure BDA00038780950400000413
For
Figure BDA00038780950400000414
According to the similarity criterion, judging whether the adjacent clusters have the same property with the seed cluster, and if the similarity criterion meets the plane similarity criterion, merging the adjacent clusters into a plane S where the seed cluster is located m From the set at the same time
Figure BDA00038780950400000415
Removing the adjacent cluster, and if the plane similarity criterion is not met, not performing any operation on the cluster;
s403: newly incorporating the S402 process into the plane S m As a plane S one by one m Iteratively performing the step S402 until none of the clusters in ψ satisfy the plane similarity criterion;
s404: the S401-S403 processes are executed iteratively until
Figure BDA00038780950400000416
For null, the plane S detected in each iteration is saved m To form a candidate plane feature set S;
s405: post-processing the candidate plane feature set S, wherein the number of clusters in the removed S is less than k 2 The candidate plane of (1).
Further, for the greedy elimination of non-building planes on the plane set S, the following greedy elimination methods can be selected:
s501, green vegetation primitive elimination based on color features: calculating the average value of the texture values of all the primitives in each plane as the texture value of the plane, and calculating the over-green and over-red index of each plane: exG-ExR =3g-2.4r-b, where r, g and b are the color components of the plane, respectively. Then, automatically calculating an optimal elimination threshold t of the ExG-ExR by using a maximum inter-class variance method (OTSU), and if the exG-ExR of the plane is greater than the threshold t, regarding the plane as a green vegetation plane and eliminating the green vegetation plane;
s502, filtering out short terrain primitives based on relative ground elevation: a centroid is calculated for each ground primitive in the set of ground primitives. Calculating relative ground elevation of each element in each plane, i.e. the elevation of centroid of element and its nearest k 3 Difference between average elevation values of centroids of individual ground primitives. Taking the maximum value of the relative ground elevations of all elements in a plane as the relative ground elevation value of the plane, and if the relative ground elevation value of the plane is less than a threshold value k 4 Then it is considered as a low object plane and rejected. At this time, the rest of the elements are almost all buildings;
further, in the greedy restoration process by using the topological adjacency relation among the primitives, a stack-based depth-first search algorithm can be adopted to search all the topologically reachable primitives of each primitive. In addition, to prevent
Figure BDA0003878095040000052
The non-building elements are recovered by error in a large range due to the existence of a very small part of non-building planes, and can be corrected in advance
Figure BDA0003878095040000053
And (3) uniformly partitioning the topological relation in space, and traversing all the topologically reachable primitives of the primitives on the partitions where the primitives are located so as to prevent the non-building primitives from being recovered by excessive errors.
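The stack-based depth-first recovery can be sketched as follows (the function and parameter names are illustrative):

```python
def recover_building_primitives(body, adjacency):
    """Greedy recovery sketch: starting from the primitives of the building
    body structure, a stack-based depth-first search follows the topological
    adjacency between primitives and recovers every reachable primitive
    (e.g. roof parts mistakenly eliminated as vegetation).

    body: iterable of primitive ids in the building body structure.
    adjacency: dict mapping a primitive id to its topologically adjacent ids.
    Returns the final building primitive set, integrity taken into account.
    """
    building = set(body)
    stack = list(building)
    while stack:
        cur = stack.pop()
        for nb in adjacency.get(cur, ()):
            if nb not in building:
                building.add(nb)    # recover a mistakenly eliminated primitive
                stack.append(nb)
    return building
```

Restricting `adjacency` to a single spatial partition, as the text describes, bounds how far an erroneous recovery can spread.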
The invention also provides a building extraction device for the live-action three-dimensional model, which comprises the following modules:
the live-action three-dimensional model analysis module: the system is used for inputting a real three-dimensional model, splicing all tiles of the obtained real three-dimensional model into a complete three-dimensional model, and analyzing the real three-dimensional model into a geometric primitive set;
a ground filtering module: the method comprises the steps of inputting parameters of ground filtering and a primitive set, and separating the input primitive set into a ground primitive set and a non-ground primitive set;
an over-segmentation module: the parameter and element set used for inputting over-segmentation, cluster the element set input into the set of the cluster with uniform property and regular boundary;
a plane feature detection module: inputting parameters and cluster or element sets for plane feature detection by a user, and clustering the input cluster sets or element sets into plane sets with regular boundaries;
non-building plane greedy rejection module: the system is used for inputting greedy eliminating parameters and non-ground element sets, and eliminating non-building elements such as green vegetation and urban furniture greedy to obtain a set representing the main structure of a building and a non-building plane set;
building element greedy restoration module: the building element selection module is used for acquiring parameters for greedy recovery, representing a set of building body structures and a non-building plane set, performing greedy recovery on the building body structure set based on topological adjacency relations among elements, recovering building elements mistakenly deleted in the non-building plane greedy removing module, and obtaining a final building element set considering integrity;
an output module: and the path used for inputting model storage and outputting the building realistic three-dimensional model and the non-building realistic three-dimensional model.
An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the building extraction method for live-action three-dimensional models described above when executing the computer program.
A computer-readable storage medium storing computer software instructions for implementing the steps of the building extraction method for live-action three-dimensional models described above.
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) Aiming at the current lack of an effective method for building extraction from live-action three-dimensional models, the invention provides a building extraction method for live-action three-dimensional models that achieves fast, high-precision extraction of buildings.
(2) The method comprehensively utilizes the spatial proximity, surface and color features of the live-action three-dimensional model to extract the building model, adopting a bidirectional greedy strategy: after filtering out the ground primitives, non-building primitives such as green vegetation, urban furniture, trees and vehicles are greedily eliminated from the remaining non-ground primitives to obtain a set representing the building body structure; then, on the basis of the incomplete building body structure, the mistakenly deleted building primitives are greedily recovered to obtain a final building model that takes integrity into account. Complete building extraction is thus achieved with a building recall rate above 97%, the poor visualization and analysis effects caused by incomplete buildings are avoided, and the extracted building boundaries are regular.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of a building extraction method for a live-action three-dimensional model according to an embodiment of the present invention.
Fig. 2 is a live-action three-dimensional model used in an embodiment of the invention.
FIG. 3 is a set of non-ground primitives after filtering the ground for a live-action three-dimensional model in an embodiment of the present invention.
Fig. 4 is a series of primitive clusters with uniform internal properties generated after local clustering in the embodiment of the present invention.
FIG. 5 is a plane obtained from cluster-based plane feature detection in an embodiment of the present invention.
Fig. 6 is an incomplete building main body structure obtained after green vegetation primitive elimination and low feature primitive removal are carried out on the planes in the embodiment of the invention.
Fig. 7 is a structural three-dimensional model of a building extracted by using an automatic extraction method of a complete building for a live-action three-dimensional model according to an embodiment of the present invention.
Fig. 8 is a flow chart of a primitive-oriented cloth simulation method according to another embodiment of the present invention.
FIG. 9 is a flow chart of a primitive-based over-segmentation method provided in yet another embodiment of the present invention.
Fig. 10 is a flowchart of a cluster-based planar feature detection method according to another embodiment of the present invention.
Fig. 11 is a block diagram of a building extraction apparatus for a live-action three-dimensional model according to still another embodiment of the present invention.
Fig. 12 is a block diagram showing the construction of a building extracting apparatus for a live-action three-dimensional model according to still another embodiment of the present invention.
Fig. 13 is a schematic data processing diagram of a building extraction apparatus for a live-action three-dimensional model according to still another embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples for the purpose of facilitating understanding and practice of the invention by those of ordinary skill in the art, and it is to be understood that the present invention has been described in the illustrative embodiments and is not to be construed as limited thereto.
It will be understood that various modifications may be made to the embodiments disclosed herein. The following description is, therefore, not to be taken in a limiting sense, but is made merely as an exemplification of embodiments. Other modifications will occur to those skilled in the art within the scope and spirit of the disclosure.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and, together with a general description of the disclosure given above, and the detailed description of the embodiments given below, serve to explain the principles of the disclosure.
These and other characteristics of the invention will become apparent from the following description of a preferred form of embodiment, given as a non-limiting example, with reference to the accompanying drawings.
It should also be understood that, although the invention has been described with reference to some specific examples, a person of skill in the art shall certainly be able to achieve many other equivalent forms of the invention, having the characteristics as set forth in the claims and hence all coming within the field of protection defined thereby.
However, it is to be understood that the disclosed embodiments are merely examples of the disclosure that may be embodied in various forms. Well-known and/or repeated functions and constructions are not described in detail to avoid obscuring the disclosure with unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the disclosure.
The embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present invention provides a building extraction method for a live-action three-dimensional model, including:
s1, model preprocessing and analysis: reading the live-action three-dimensional model in the osgb or 3D Tiles format as shown in FIG. 2, splicing all Tiles with original resolution of the live-action three-dimensional model into a complete three-dimensional model, analyzing the live-action three-dimensional model, and analyzing the live-action three-dimensional model into a geometric primitive set
Figure BDA0003878095040000071
n represents the number of primitives read; each element stores the index, vertex coordinate and texture mapping coordinate information;
s2, real-scene three-dimensional model ground filtering: separating ground elements and non-ground elements of the real-scene three-dimensional model by adopting a material distribution simulation method based on elements, and collecting the elements
Figure BDA0003878095040000081
Separation into sets of ground primitives
Figure BDA0003878095040000082
And non-ground primitive set
Figure BDA0003878095040000083
The resulting set of non-ground primitives is shown in fig. 3. Finally, the shape of the static cloth is similar to the terrain, the space distance from each element to the cloth is calculated, and if the space distance exceeds a set threshold value, the space distance is added to the cloth
Figure BDA0003878095040000084
If it is within the threshold value, add it to
Figure BDA0003878095040000085
S3, over-segmentation: the non-ground primitive set P_ng is over-segmented with a primitive-based over-segmentation method, clustering the primitives into a set C = {c_1, c_2, …, c_k} of k clusters with uniform properties and regular boundaries; the over-segmentation result is shown in fig. 4. The number of clusters k is derived from a cluster resolution R = 1 meter, which is more convenient to set, i.e.,

k = ((X_max − X_min) / R) × ((Y_max − Y_min) / R)

where X_max, X_min, Y_max and Y_min are respectively the maximum and minimum X coordinates and the maximum and minimum Y coordinates of the primitive set P_ng;
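The conversion from cluster resolution R to cluster count k can be sketched as follows (a minimal illustration; the function name, the use of a ceiling for partial cells, and the sample coordinates are assumptions, not part of the embodiment):

```python
import math

def cluster_count(xs, ys, resolution=1.0):
    """Derive the expected cluster count k from the horizontal extent
    of a primitive set and a cluster resolution R (in meters)."""
    k_x = math.ceil((max(xs) - min(xs)) / resolution)
    k_y = math.ceil((max(ys) - min(ys)) / resolution)
    return k_x * k_y

# primitive vertices spanning a 10 m x 5 m footprint at R = 1 m
xs = [0.0, 10.0, 3.2, 7.9]
ys = [0.0, 5.0, 1.1, 4.4]
k = cluster_count(xs, ys, resolution=1.0)
```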
s4, cluster-based plane feature detection: plane feature detection is performed on the cluster set C to quickly generate a set S = {S_1, …, S_l} of l candidate planes with regular boundaries. Candidate planes without planar features are removed first: a candidate plane containing fewer than k_2 = 7 clusters is considered to be composed of clusters without planar features and is removed. Next, candidate planes whose area is smaller than an expected value (20 square meters) are removed, and the remaining candidate planes are taken as the final plane detection result; the plane feature detection result is shown in fig. 5;
s5, removal of green vegetation based on color features: besides buildings, the planes obtained through the above steps still carry interfering objects such as low vegetation, automobiles and trees; the main interference, green vegetation, is removed first based on color features. The texture values of all primitives in each plane are computed and averaged to give the plane's texture value, and the excess-green minus excess-red index of each plane is computed: ExG − ExR = 3g − 2.4r − b, where r, g and b are the color components of the plane. Then the optimal culling threshold t on ExG − ExR is computed automatically with the maximum between-class variance method (Otsu's method); a plane whose ExG − ExR exceeds t is regarded as a green vegetation plane and culled;
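The vegetation-culling step can be illustrated with a small sketch. Note that ExG = 2g − r − b and ExR = 1.4r − g give ExG − ExR = 3g − 2.4r − b, matching the index above; the histogram-based Otsu implementation and the sample plane colors are assumptions, since the embodiment works on each plane's averaged texture color:

```python
def exg_exr(r, g, b):
    """Excess-green minus excess-red index of a plane's mean color."""
    return 3.0 * g - 2.4 * r - b

def otsu_threshold(values, bins=64):
    """Otsu's method: pick the cut maximizing between-class variance."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(bins - 1, int((v - lo) / width))] += 1
    total = len(values)
    sum_all = sum((lo + (i + 0.5) * width) * h for i, h in enumerate(hist))
    best_t, best_var = lo, -1.0
    w0 = sum0 = 0.0
    for i, h in enumerate(hist):
        w0 += h
        sum0 += (lo + (i + 0.5) * width) * h
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0                       # mean of the lower class
        m1 = (sum_all - sum0) / (total - w0) # mean of the upper class
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, lo + (i + 1) * width
    return best_t

# mean colors of candidate planes: vegetation is green-dominant
planes = [(0.2, 0.7, 0.2), (0.6, 0.5, 0.4), (0.1, 0.8, 0.1), (0.7, 0.6, 0.5)]
scores = [exg_exr(*p) for p in planes]
t = otsu_threshold(scores)
kept = [p for p, s in zip(planes, scores) if s <= t]  # planes above t culled
```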
s6, removal of low objects based on the relative ground elevation: after the above steps the main interference, green vegetation, has been removed, but a small number of low objects such as automobiles and street furniture remain; these are filtered out using the relative ground elevation. The centroid of each primitive in the ground primitive set separated in step S2 is computed and a KDTree spatial index structure is built over them. The relative ground elevation of each primitive in a plane is then computed as the difference between the elevation of the primitive's centroid and the average elevation of its k_3 = 10 nearest ground-primitive centroids, the k_3 neighboring ground-primitive centroids being retrieved through the KDTree spatial index. The maximum relative ground elevation over all primitives in a plane is taken as the plane's relative ground elevation value; if it is below the threshold k_4, set to 2 meters, the plane is regarded as a low-object plane and culled. After culling, a building main-structure plane set S_b and a non-building plane set S_nb are obtained; the resulting incomplete building body structure is shown in fig. 6;
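A minimal sketch of the relative-ground-elevation filter, using a brute-force nearest-neighbor search as a stand-in for the KDTree index of the embodiment (function names and the sample centroids are assumptions):

```python
def relative_ground_elevation(plane_centroids, ground_centroids, k3=10):
    """Relative ground elevation of a plane: for each primitive centroid,
    take its elevation minus the mean elevation of the k3 nearest ground
    centroids (brute force here; the embodiment uses a KD-tree index),
    then return the maximum over the plane's primitives."""
    def rel(c):
        cx, cy, cz = c
        nearest = sorted(ground_centroids,
                         key=lambda g: (g[0] - cx) ** 2 + (g[1] - cy) ** 2)[:k3]
        return cz - sum(g[2] for g in nearest) / len(nearest)
    return max(rel(c) for c in plane_centroids)

# a 5 x 5 grid of ground centroids at 0.1 m elevation
ground = [(float(i), float(j), 0.1) for i in range(5) for j in range(5)]
roof = [(2.0, 2.0, 6.1), (2.5, 2.0, 6.3)]   # tall plane -> kept
car = [(1.0, 1.0, 1.4)]                     # low object -> culled at k4 = 2 m
keep_roof = relative_ground_elevation(roof, ground) >= 2.0
keep_car = relative_ground_elevation(car, ground) >= 2.0
```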
s7, topology-based greedy recovery of mistakenly deleted building primitives: after the above steps the remaining planes are almost exclusively buildings and represent the building main structure, but inevitably some building primitives were deleted by mistake during the preceding processing, so the deleted building primitives are recovered using the topological adjacency relation, as follows. Let P_b be the set of all primitives contained in the greedily culled building main-structure plane set S_b, and P_r the set of primitives contained in the mistakenly culled building planes; the final, integrity-aware building primitive set is then P_f = P_b ∪ P_r. For each primitive p_i ∈ P_b, the set N(p_i) of primitives topologically adjacent to it is searched, and its members are marked as building primitives, i.e. added to P_r. Likewise, for each primitive in N(p_i), its topologically adjacent primitives are searched and marked as building primitives; the search continues recursively until every primitive topologically reachable from any primitive in P_b has been found. A primitive topologically adjacent to p_i may be taken to be a primitive sharing a vertex with p_i, or a primitive sharing an edge with p_i. In addition, to prevent the tiny fraction of non-building primitives remaining in P_b from causing non-building primitives to be recovered over a wide area, the topological relation is uniformly partitioned in space, and the traversal of the primitives topologically reachable from a primitive is confined to the partition in which that primitive lies, preventing excessive erroneous recovery of non-building primitives;
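The recursive recovery can be sketched as a breadth-first traversal over shared-vertex adjacency (the data layout and identifiers are assumptions; the embodiment may equally use shared-edge adjacency, and additionally restricts the traversal to spatial partitions):

```python
from collections import deque

def recover_building_primitives(primitives, building_seeds):
    """Greedy recovery: starting from primitives already marked as
    building, mark every primitive reachable through shared-vertex
    adjacency. `primitives` maps primitive id -> tuple of vertex ids."""
    # invert the mesh: vertex id -> primitives using it
    by_vertex = {}
    for pid, verts in primitives.items():
        for v in verts:
            by_vertex.setdefault(v, set()).add(pid)
    marked = set(building_seeds)
    queue = deque(building_seeds)
    while queue:
        pid = queue.popleft()
        for v in primitives[pid]:
            for nb in by_vertex[v]:
                if nb not in marked:
                    marked.add(nb)
                    queue.append(nb)
    return marked

# triangles 0-1-2 form a chain sharing vertices; 3 is detached terrain
prims = {0: (0, 1, 2), 1: (2, 3, 4), 2: (4, 5, 6), 3: (10, 11, 12)}
recovered = recover_building_primitives(prims, {0})
```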
s8, model output: output or store the building live-action three-dimensional model formed by the final building primitive set P_f, and the non-building live-action three-dimensional model formed by the remaining primitives. The final output building model is shown in fig. 7.
As shown in fig. 8, another embodiment of the present invention provides a primitive-oriented cloth simulation method, including:
s201: the live-action three-dimensional model is flipped vertically along the Z coordinate direction, i.e. the Z coordinates of the vertices of all primitives in the set P are negated;
s202: a cloth composed of particles is dropped above the flipped live-action three-dimensional model; the initial height of all particles constituting the cloth is the highest point of the flipped primitives, and their initial horizontal positions are determined by the cloth resolution (set to 0.5 m) and the outer bounding box of the live-action three-dimensional model. The cloth falls slowly under gravity;
s203: when particles of the cloth contact the live-action three-dimensional model, the cloth gradually stops moving; contact between the cloth and the live-action three-dimensional model is determined by collision detection based on ray intersection, whose efficiency can be improved with a bounding volume hierarchy (BVH) tree structure. If the current vertical height of a cloth particle is lower than its collision point with the live-action three-dimensional model, the particle is set as immovable;
s204: the shape of the finally static cloth approximates the terrain; the Euclidean distance from each primitive to the cloth is then computed, and a primitive whose distance exceeds a set threshold (set to 0.5 m) is added to the non-ground primitive set P_ng, while one within the threshold is added to the ground primitive set P_g.
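Steps S201–S204 can be illustrated on a 1-D height profile (a deliberately simplified sketch: one cloth particle per column, an explicit gravity step, and a neighbor-averaging rigidity constraint standing in for the full internal cloth forces; all names and values are assumptions). In the flipped model a building becomes a pit, which the stiff cloth bridges over, so building primitives end up far from the settled cloth:

```python
def simulate_cloth_1d(surface, dz=0.1, rigidity_iters=2, steps=200):
    """1-D sketch of cloth simulation filtering: cloth particles fall
    under gravity onto the flipped model and stop at the surface; an
    internal constraint keeps the cloth stiff so it bridges over pits
    (buildings in flipped space) instead of draping into them."""
    n = len(surface)
    cloth = [max(surface) + 1.0] * n      # start above the highest point
    pinned = [False] * n
    for _ in range(steps):
        for i in range(n):
            if not pinned[i]:
                cloth[i] -= dz                     # gravity step
                if cloth[i] <= surface[i]:         # collision detection
                    cloth[i] = surface[i]
                    pinned[i] = True
        for _ in range(rigidity_iters):            # internal constraint
            for i in range(1, n - 1):
                if not pinned[i]:
                    cloth[i] = max(cloth[i], 0.5 * (cloth[i - 1] + cloth[i + 1]))
    return cloth

surface = [0.0] * 4 + [-5.0] * 3 + [0.0] * 4   # flipped model: building = pit
cloth = simulate_cloth_1d(surface)
# classify: primitives far from the settled cloth are non-ground
non_ground = [i for i in range(len(surface)) if cloth[i] - surface[i] > 0.5]
```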
As shown in fig. 9, another embodiment of the present invention also provides a primitive-based over-segmentation method, including:
s301, considering jointly the spatial proximity, surface and color features of the primitives, construct the primitive heterogeneity distance formula D(p_i, p_j) = μ_1 D_s(p_i, p_j) + μ_2 D_e(p_i, p_j) + μ_3 D_c(p_i, p_j), where D_s, D_e and D_c are respectively the normalized surface-feature difference distance, spatial proximity distance and color difference distance between the two primitives, and μ_1, μ_2 and μ_3 are their respective weight factors, each in the range [0, 1] and specifiable by the user as needed; in this embodiment they take the values 0.5, 0.2 and 1 respectively. The spatial proximity distance D_e is measured between the centroids of p_i and p_j, the centroid of a primitive being the mean of its q vertices (for a triangle primitive, q is at most 3); the surface-feature difference distance D_s is measured from the normal vectors n_i and n_j of p_i and p_j; the color difference distance D_c(p_i, p_j) between the two primitives is computed in the CIE Lab linear color space from the average texture color values of p_i and p_j in CIE Lab space;
The normal vector of each primitive in P is computed as follows: taking a triangle primitive as an example, the normal vector is computed from the spatial coordinates of its three vertices v_1, v_2 and v_3 as the (normalized) cross product n = (v_2 − v_1) × (v_3 − v_1).
The average color texture value of a primitive p_i may be computed with a scan-line algorithm, specifically (taking color texture values in the Adobe RGB color space as an example): compute the spatial extent of p_i in the y direction and the number of scan lines. From top to bottom, intersect all edges of the primitive with each scan line and sort the resulting abscissas from left to right; an odd-numbered crossing is an entering edge and an even-numbered crossing an exiting edge. Then interpolate the spatial coordinates of the pixels on the scan line between each entering and exiting edge. Compute the UV coordinates of all pixel points inside the primitive by the barycentric coordinate method, the U and V coordinates being respectively U = (S_c·U_1 + S_b·U_2 + S_a·U_3) / S_t and V = (S_c·V_1 + S_b·V_2 + S_a·V_3) / S_t, where S_a is the area of the triangle formed by the pixel point and the primitive's vertices v_1 and v_2, S_b the area of the triangle formed with v_1 and v_3, S_c the area of the triangle formed with v_2 and v_3, U_1, U_2, U_3, V_1, V_2 and V_3 are the UV coordinates of v_1, v_2 and v_3, and S_t = S_a + S_b + S_c. After all UV coordinates are solved, texture values are fetched from the texture image corresponding to the primitive at those UV coordinates; the average of all texture values is taken as the texture value of the primitive, which is then converted from the Adobe RGB color space to the CIE Lab color space;
s302, based on the primitive heterogeneity distance formula D(·), construct the heterogeneity cost function of the clusters, J(r_ij) = Σ_i Σ_j r_ij · D(p_i, p_j), which measures the sum of the heterogeneity costs of all clusters, and determine its constraint. Here r_ij ∈ {0, 1}; r_ii = 1 indicates that primitive p_i is the center primitive of a cluster, and that cluster contains the non-center primitives p_j for which r_ij = 1. J(r_ij) is subject to the constraint Σ_i I(r_ii = 1) = k, where I(·) is the indicator function: taking x as an example, if x = 1 then I(x) = 1, and conversely I(x) = 0; k represents the expected number of clusters;
s303, based on the heterogeneity cost function J(·) and its constraint, construct and solve the energy optimization function E(r_ij) = J(r_ij) subject to Σ_i I(r_ii = 1) = k, thereby over-segmenting the live-action three-dimensional model into a set C = {c_1, …, c_k} of k clusters with uniform properties and regular boundaries. A bottom-up, merging-based energy minimization method may be selected to solve the energy equation. Each primitive p_j of P_ng outside the center primitive set CP = {cp_1, …, cp_k} is assigned to a cluster according to the mapping function f(p_j) = argmin_{cp_i ∈ CP} D(p_j, cp_i), where D(p_j, cp_i) is the heterogeneity distance between primitive p_j and center primitive cp_i, so that each primitive is assigned to the cluster whose center primitive has the minimum heterogeneity distance to it;
In the bottom-up, merging-based energy minimization method, a regularization term proportional to the number of cluster centers, weighted by a regularization parameter λ, is first added to the energy function E(r_ij); a larger value of λ yields a smaller deviation of the final cluster number from k but reduces the weight of the heterogeneity distance metric. The initial value of λ is set to the median of the lowest heterogeneity distance values between each primitive and its neighbors, and it doubles after each iteration. Initially, every primitive is set as the center primitive of its own cluster, and center primitives are merged bottom-up until the number of clusters is reduced to k.
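The bottom-up merging can be illustrated with a deliberately simplified sketch that merges the closest pair of clusters until k remain, using plain Euclidean distance between mean descriptors as a stand-in for the heterogeneity distance D(·) and omitting the λ schedule (all names are assumptions):

```python
def oversegment(primitives, k, dist):
    """Bottom-up sketch of the energy minimization: every primitive
    starts as its own cluster center; repeatedly merge the pair of
    clusters with the smallest distance between their averaged
    descriptors until only k clusters remain."""
    clusters = [[i] for i in range(len(primitives))]

    def descriptor(cluster):  # mean of member descriptors
        dims = len(primitives[0])
        return [sum(primitives[i][d] for i in cluster) / len(cluster)
                for d in range(dims)]

    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = dist(descriptor(clusters[a]), descriptor(clusters[b]))
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)   # merge b into a
    return clusters

# descriptors: primitive centroids; Euclidean distance stands in for D(.)
prims = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 5.0, 0.0), (5.1, 5.0, 0.0)]
euclid = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
clusters = oversegment(prims, k=2, dist=euclid)
```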
As shown in fig. 10, another embodiment of the present invention also provides a cluster-based planar feature detection method, including:
s401: select from the cluster set C a cluster as the seed cluster of plane S_m, and remove it from the remaining cluster set C_r, where C_r is initialized as the subset of C not yet assigned to any plane. In this embodiment, the centroid of every cluster in C is computed, the curvature at all cluster centroids is computed, C_r is sorted by centroid curvature from small to large, and seed clusters are selected from C_r in that order;
s402: compute the set N_8 of the 8 neighboring clusters around the seed cluster. For each cluster in N_8, judge by the cosine similarity measure criterion whether it has the same properties as the seed cluster: if the criterion is satisfied, merge the neighboring cluster into the plane S_m containing the seed cluster and at the same time remove it from the set C_r; if the criterion is not satisfied, perform no operation on that cluster. Here θ is an angle threshold. The cosine similarity measure criterion is |n_a · n_b| / (‖n_a‖ · ‖n_b‖) ≥ cos θ, where n_a and n_b are respectively the normal vectors of the neighboring cluster and the seed cluster, the normal vector of a cluster being computed as the average of the normal vectors of all primitives in the cluster;
s403: take the clusters newly merged into the plane S_m during S402 one by one as new seed clusters of S_m, and execute step S402 iteratively until no cluster in C_r satisfies the cosine similarity measure criterion;
s404: execute S401–S403 iteratively until C_r is empty, saving the plane S_m detected in each iteration to form a candidate plane feature set S. Post-process the candidate plane feature set S by removing the candidate planes whose number of clusters is less than 3; the remaining candidate planes are taken as the final plane detection result.
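Steps S401–S404 can be sketched as region growing over cluster normals (a simplified illustration: seeds are taken in index order rather than by centroid curvature, and the neighborhood graph is given explicitly; all names and thresholds are assumptions):

```python
from collections import deque

def grow_planes(normals, neighbors, cos_theta=0.95, min_clusters=3):
    """Grow planes over clusters by normal-vector cosine similarity,
    then drop candidate planes with fewer than min_clusters clusters."""
    def cos_sim(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return abs(dot) / (na * nb)

    unused = set(range(len(normals)))
    planes = []
    while unused:
        seed = min(unused)              # stand-in for curvature ordering
        unused.discard(seed)
        plane, queue = [seed], deque([seed])
        while queue:
            c = queue.popleft()
            for nb in neighbors[c]:
                if nb in unused and cos_sim(normals[c], normals[nb]) >= cos_theta:
                    unused.discard(nb)
                    plane.append(nb)
                    queue.append(nb)
        planes.append(plane)
    return [p for p in planes if len(p) >= min_clusters]

# a 4-cluster roof with near-parallel normals and one tilted outlier
normals = [(0, 0, 1), (0, 0.05, 1), (0.03, 0, 1), (0, 0, 1), (1, 0, 0.2)]
neighbors = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2], 4: [0]}
planes = grow_planes(normals, neighbors)
```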
As shown in fig. 11, another embodiment of the present invention further provides a building extraction apparatus for a live-action three-dimensional model, including:
the live-action three-dimensional model parsing module: used for inputting the live-action three-dimensional model, splicing all tiles of the obtained live-action three-dimensional model into a complete three-dimensional model, and parsing the live-action three-dimensional model into a geometric primitive set;
a ground filtering module: the parameters and primitive set for ground filtering are input, and the input primitive set is separated into a ground primitive set and a non-ground primitive set;
an over-segmentation module: the parameters and the primitive set for inputting over-segmentation are used for clustering the input primitive set into a cluster set with uniform properties and regular boundaries;
the plane feature detection module: used for inputting the parameters and the cluster or primitive set for plane feature detection, and grouping the input cluster set or primitive set into a plane set with regular boundaries;
non-building plane greedy culling module: used for inputting the greedy culling parameters and the non-ground primitive set, and greedily culling non-building primitives such as green vegetation and street furniture to obtain a set representing the building main structure and a non-building plane set;
building primitive greedy recovery module: used for inputting the greedy recovery parameters, the set representing the building main structure and the non-building plane set, performing greedy recovery on the building main-structure set based on the topological adjacency relation between primitives, and recovering the building primitives mistakenly deleted in the non-building plane greedy culling module, so as to obtain the final, integrity-aware building primitive set;
an output module: used for inputting the model storage path and outputting the building live-action three-dimensional model and the non-building live-action three-dimensional model.
Wherein, the real-scene three-dimensional model analysis module includes:
an input unit: according to a file path of the live-action three-dimensional model input by the user, if the model is stored as paged LOD (level of detail) tiles, recursively traverse to obtain all leaf-node tiles of the model, i.e. the three-dimensional model tiles with the highest resolution, and splice the tiles into a complete live-action three-dimensional model;
a model analysis unit: analyzing the spliced live-action three-dimensional model into a geometric primitive set, and storing information such as indexes, vertex coordinates and texture mapping coordinates of each primitive;
an output unit: and outputting the analyzed primitive set.
A floor filtration module comprising:
an input unit: used for inputting the ground filtering parameters and the geometric primitive set of the live-action three-dimensional model;
a calculation unit: according to the input parameters and primitive set, executing ground filtering calculation;
an output unit: and outputting the calculated ground primitive set and the non-ground primitive set.
An over-segmentation module comprising:
an input unit: used for inputting the over-segmentation parameters and the non-ground geometric primitive set;
a calculation unit: performing over-segmentation calculation according to the input parameters and the primitive set;
an output unit: and outputting a cluster set with uniform calculated properties and regular boundaries.
A planar feature detection module comprising:
an input unit: used for inputting the parameters and the cluster set for the plane feature detection module;
a calculation unit: according to the input parameters and the primitive set, performing plane feature detection calculation;
an output unit: and outputting the calculated plane feature set with regular boundaries.
A non-building planar greedy culling module, comprising:
an input unit: used for inputting the greedy culling parameters, the plane set and the ground primitive set;
a calculation unit: according to the input parameters and the primitive set, executing green vegetation elimination based on color features and low land feature elimination calculation based on relative ground elevation to obtain a primitive set representing the main structure of the building;
an output unit: and outputting the calculated primitive set representing the building main body structure.
A building cell greedy restoration module, comprising:
an input unit: used for inputting the greedy recovery parameters, the primitive set representing the building main structure and the non-building primitive set;
a calculation unit: executing building element greedy restoration calculation based on topological adjacency relation according to the input parameters and element set to obtain a final building geometric element with a complete structure;
an output unit: and outputting the building geometric primitive with complete structure.
As shown in fig. 12 and 13, another embodiment of the present invention further provides a building extraction apparatus for a live-action three-dimensional model; the apparatus includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when executing the computer program the processor implements the steps of the building extraction method for a live-action three-dimensional model, including:
step 1, preprocessing and analyzing the live-action three-dimensional model to splice all tiles of the live-action three-dimensional model into a complete three-dimensional model, and analyzing the live-action three-dimensional model into a geometric primitive set
Figure BDA0003878095040000131
Step 2, separate the ground primitives and non-ground primitives of the live-action three-dimensional model, splitting the primitive set P into a ground primitive set P_g and a non-ground primitive set P_ng;
Step 3, over-segment the non-ground primitive set P_ng, clustering the large number of primitives into a set C = {c_1, …, c_k} of k clusters with uniform properties and regular boundaries;
Step 4, perform plane feature detection on the cluster set C to quickly generate a set S = {S_1, …, S_l} of l planes with regular boundaries;
Step 5, perform greedy culling of non-building planes on the plane set S, i.e. cull non-building planes such as vegetation, street furniture, trees and vehicles to the greatest extent possible. During culling, some building planes are allowed to be culled by mistake, provided the integrity of the building main structure is preserved. After culling, a building main-structure plane set S_b and a non-building plane set S_nb are obtained;
Step 6, on the basis of the set P_b of all primitives contained in the greedily culled building main-structure plane set S_b, perform greedy recovery using the topological adjacency relation between primitives, so as to recover the primitive set P_r contained in the building planes mistakenly culled in step 5, thereby obtaining the final, integrity-aware building primitive set P_f = P_b ∪ P_r;
Step 7, output or save the building live-action three-dimensional model formed by P_f, and the non-building live-action three-dimensional model formed by the remaining primitives.
It should be noted that: the building extraction apparatus for a realistic three-dimensional model provided in the above embodiments is only exemplified by the division of the above program modules when performing building extraction for a realistic three-dimensional model, and in practical applications, the above processing allocation may be completed by different program modules according to needs, that is, the internal structure of the device is divided into different program modules to complete all or part of the above-described processing. In addition, the building extraction device for the live-action three-dimensional model and the building extraction method for the live-action three-dimensional model provided in the above embodiments belong to the same concept, and the specific implementation process thereof is detailed in the method embodiment and will not be described herein again.
The memory in embodiments of the present invention is used to store various types of data to support the operation of the building extraction apparatus for live-action three-dimensional models. Examples of such data include: any computer program for operating on a building extraction electronic device for a live-action three-dimensional model.
The building extraction method for the live-action three-dimensional model disclosed by the embodiment of the invention can be applied to a processor or realized by the processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the building extraction method for the live-action three-dimensional model may be implemented by instructions in the form of integrated logic circuits of hardware or software in the processor. The processor may be a general purpose processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. The processor may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed by the embodiment of the invention can be directly implemented by a hardware decoding processor, or can be implemented by combining hardware and software modules in the decoding processor. The software module may be located in a storage medium located in the memory, and the processor reads the information in the memory, and completes the steps of the building extraction method for the live-action three-dimensional model provided in the embodiment of the present invention in combination with hardware thereof.
The building extraction device for the live-action three-dimensional model in the exemplary embodiment may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), FPGAs, general-purpose processors, controllers, Micro Controller Units (MCUs), microprocessors, or other electronic components for performing the aforementioned methods.
It will be appreciated that the memory can be either volatile or nonvolatile memory, and can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory described in the embodiments of the present invention is intended to comprise, without being limited to, these and any other suitable types of memory.
In an exemplary embodiment, an embodiment of the present invention further provides a storage medium, specifically a computer storage medium, which may be a computer-readable storage medium, for example a memory storing a computer program; the computer program is executable by a processor of the building extraction apparatus for a live-action three-dimensional model to complete the steps of the building extraction method for a live-action three-dimensional model. The computer-readable storage medium may be a ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disc, or CD-ROM, among others.
In conclusion, the building extraction method realizes the automatic extraction of buildings from the live-action three-dimensional model; the extracted buildings have complete structures and no primitive defects, the method has an extremely high recall rate and a high degree of automation, and it greatly reduces the manual workload and cost of the objectification and monomerization of the live-action three-dimensional model.
It should be understood that parts of the specification not set forth in detail are of the prior art.
It should be understood that the above description is illustrative of embodiments and is not to be construed as limiting the scope of the invention, which is defined by the appended claims. Any modification, equivalent replacement, improvement and the like made without departing from the scope of the invention as defined in the claims falls within the protection scope of the invention.

Claims (12)

1. A building extraction method for a live-action three-dimensional model is characterized by comprising the following steps:
step 1, preprocess and parse the live-action three-dimensional model, splicing all tiles of the live-action three-dimensional model into a three-dimensional model and parsing it into a geometric primitive set P = {p_1, …, p_n};
Step 2, separate the ground primitives and non-ground primitives of the live-action three-dimensional model, splitting the primitive set P into a ground primitive set P_g and a non-ground primitive set P_ng;
Step 3, over-segment the non-ground primitive set P_ng, clustering the primitives into a set C = {c_1, …, c_k} of k clusters with uniform properties and regular boundaries;
Step 4, perform plane feature detection on the cluster set C to quickly generate a set S = {S_1, …, S_l} of l planes with regular boundaries;
Step 5, perform greedy culling of non-building planes on the plane set S to obtain a building main-structure plane set S_b and a non-building plane set S_nb;
Step 6, obtaining a building body structure plane set after greedy elimination
Figure FDA00038780950300000112
All primitives contained
Figure FDA00038780950300000113
Based on the above, greedy recovery is performed by using topological adjacency relation between primitives, so as to recover the primitive set contained in the building plane that is mistakenly eliminated in the step 5
Figure FDA00038780950300000114
Thereby obtaining a final set of building elements with integrity taken into account
Figure FDA00038780950300000115
Figure FDA00038780950300000116
Step 7, output or save
Figure FDA00038780950300000117
The three-dimensional model of the building scene formed by
Figure FDA00038780950300000118
And forming a non-building real-scene three-dimensional model.
2. The building extraction method for the live-action three-dimensional model according to claim 1, characterized in that: in step 2, the ground primitives and non-ground primitives of the live-action three-dimensional model are separated by a primitive-oriented cloth simulation method, implemented as follows:
S201: vertically flip the live-action three-dimensional model along the Z coordinate direction, i.e., negate the Z coordinates of the vertices of all primitives in the geometric primitive set;
S202: simulate a cloth formed of particles falling above the flipped live-action three-dimensional model, where the initial height of all particles forming the cloth is the highest point of the flipped primitives, the initial horizontal positions are determined by the cloth resolution and the bounding box of the live-action three-dimensional model, and the cloth falls slowly under gravity;
S203: the particles of the cloth gradually stop moving after contacting the live-action three-dimensional model; contact between the cloth and the model is realized through ray-intersection-based collision detection, and a cloth particle is set as immovable if its current vertical height is lower than its collision point with the live-action three-dimensional model;
S204: finally, the shape of the static cloth approximates the terrain; the Euclidean distance from each primitive to the cloth is then computed, and if it exceeds a set threshold the primitive is added to the non-ground primitive set, otherwise it is added to the ground primitive set.
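The cloth-simulation filtering of steps S201–S204 can be sketched compactly. The sketch below is a simplification under stated assumptions: each primitive is reduced to its centroid, and the ray-intersection collision detection is replaced by a per-grid-cell maximum, so the names and parameters (`cloth_filter`, `resolution`, `threshold`) are illustrative, not from the patent.

```python
# Minimal cloth-simulation-filter (CSF-style) sketch over primitive centroids.
def cloth_filter(centroids, resolution=1.0, threshold=0.5):
    # S201: flip the scene vertically (negate Z) so the cloth drapes over it.
    flipped = [(x, y, -z) for (x, y, z) in centroids]

    # S202/S203: one cloth particle per grid cell; it settles at the highest
    # flipped point in its cell (a stand-in for ray-intersection collision).
    cloth = {}
    for (x, y, z) in flipped:
        cell = (int(x // resolution), int(y // resolution))
        cloth[cell] = max(cloth.get(cell, float("-inf")), z)

    # S204: the settled cloth approximates the terrain; classify primitives
    # by their vertical distance to the cloth surface.
    ground, non_ground = [], []
    for (x, y, z), orig in zip(flipped, centroids):
        cell = (int(x // resolution), int(y // resolution))
        if abs(cloth[cell] - z) <= threshold:
            ground.append(orig)
        else:
            non_ground.append(orig)
    return ground, non_ground
```

Two coplanar points and one elevated point in the same cell illustrate the split: the elevated point ends up far from the settled cloth and is classified as non-ground.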
3. The building extraction method for the live-action three-dimensional model according to claim 1, characterized in that: the specific implementation of step 3 is as follows:
S301, comprehensively considering the spatial proximity features, surface features and color features of the primitives, construct a primitive heterogeneity distance formula D(·);
S302, based on the primitive heterogeneity distance formula D(·), construct the heterogeneity cost function of the clusters, J(r_ij) = Σ_{i=1..n} Σ_{j=1..n} r_ij · D(p_i, p_j), used to evaluate the sum of the heterogeneity costs of all clusters, and determine its constraint conditions, where r_ij = 1 represents that primitive p_i can serve as the center primitive of a cluster, and that cluster contains all non-center primitives satisfying r_ij = 0; J(r_ij) is constrained so that exactly k cluster centers are selected, where I(·) is an indicator function, k represents the expected number of clusters, D(p_i, p_j) represents the heterogeneity distance between the i-th primitive p_i and the j-th primitive p_j, and n is the number of primitives;
S303, based on the heterogeneity cost function J(·) and its constraint conditions, construct and solve an energy optimization function E(r_ij), thereby over-segmenting the live-action three-dimensional model into a set of k clusters with uniform properties and regular boundaries.
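As a toy illustration of the heterogeneity cost in S302 — the sum, over all clusters, of the heterogeneity distances from each member primitive to its cluster center — one can evaluate a given assignment as below. The names `heterogeneity_cost` and `assign` are ours, not the patent's.

```python
# Evaluate the cluster heterogeneity cost for a given primitive-to-center
# assignment. `assign` maps primitive index -> index of its center primitive;
# `dist` is any heterogeneity distance D(., .).
def heterogeneity_cost(points, assign, dist):
    return sum(dist(points[j], points[assign[j]]) for j in assign)
```

With three 1-D "primitives" at 0, 1 and 5, assigning the first two to center 0 and the third to itself gives a total cost of 1.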
4. The building extraction method for the live-action three-dimensional model according to claim 3, characterized in that: the primitive heterogeneity distance formula D(·) is calculated as follows:
D(p_i, p_j) = μ_1·D_s(p_i, p_j) + μ_2·D_e(p_i, p_j) + μ_3·D_c(p_i, p_j)
where D_s(p_i, p_j), D_e(p_i, p_j) and D_c(p_i, p_j) are respectively the normalized surface feature difference distance, spatial proximity distance and color difference distance between the two primitives, and μ_1, μ_2 and μ_3 are their corresponding weight factors, each taking values in the interval [0, 1]; v_q is the q-th vertex of p_i, and if the primitive type is a triangle primitive, the maximum value of q is 3; n_i and n_j are the normal vectors of p_i and p_j respectively; the color difference distance D_c(p_i, p_j) between the two primitives p_i and p_j is calculated in the CIE Lab linear color space, with c̄_i denoting the mean texture color value of primitive p_i in CIE Lab space.
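The weighted three-term distance above can be sketched as follows. The component formulas here are plausible stand-ins (normal deviation for D_s, centroid distance for D_e, Lab distance for D_c) rather than the patent's exact normalized definitions, and the primitive representation as a dict is our assumption.

```python
import math

def heterogeneity_distance(p_i, p_j, mu=(0.4, 0.3, 0.3)):
    # Each primitive: dict with 'normal' (unit vector), 'centroid' (x, y, z),
    # and 'lab' (mean CIE Lab color). mu holds the weights (μ1, μ2, μ3).
    d_s = 1.0 - abs(sum(a * b for a, b in zip(p_i['normal'], p_j['normal'])))
    d_e = math.dist(p_i['centroid'], p_j['centroid'])   # spatial proximity
    d_c = math.dist(p_i['lab'], p_j['lab'])             # color difference
    return mu[0] * d_s + mu[1] * d_e + mu[2] * d_c
```

Two identical primitives have distance 0; two primitives differing only by a perpendicular normal differ by exactly μ1.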
5. The building extraction method for the live-action three-dimensional model according to claim 4, characterized in that: the normal vector of a triangle primitive is calculated from the spatial coordinates of its three vertices v_1, v_2 and v_3:
n_i = (v_2 − v_1) × (v_3 − v_1) / ‖(v_2 − v_1) × (v_3 − v_1)‖
The mean texture color value c̄_i is calculated as follows: in the Adobe RGB color space, compute the spatial range of primitive p_i in the y direction and the number of scan lines; from top to bottom, for each scan line, intersect all edges of the primitive with the scan line and sort the intersection abscissas from left to right, where edges at odd-numbered intersections of the scan line are incoming edges and edges at even-numbered intersections are outgoing edges; then interpolate the spatial coordinates of the pixels on the scan line between the incoming and outgoing edges, and compute the UV coordinates of all pixel points inside the primitive by the barycentric coordinate method, the U and V coordinates being respectively
U = (S_c·U_1 + S_b·U_2 + S_a·U_3) / S_t,  V = (S_c·V_1 + S_b·V_2 + S_a·V_3) / S_t
where S_a is the area of the triangle formed by the pixel point and vertices v_1 and v_2 of the primitive, S_b is the area of the triangle formed by the pixel point and v_1 and v_3, S_c is the area of the triangle formed by the pixel point and v_2 and v_3, U_1, U_2, U_3, V_1, V_2 and V_3 are the UV coordinates of v_1, v_2 and v_3 respectively, and S_t = S_a + S_b + S_c; after all UV coordinates are solved, texture values are sampled from the texture image corresponding to the primitive using the UV coordinates, the average of all texture values is taken as the texture value of the primitive, and the result is then converted from the Adobe RGB color space to the CIE Lab color space.
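The normal-vector and barycentric-UV formulas above can be sketched directly. The 2-D projection of the pixel point and vertices for the area computation is our simplifying assumption; function names are illustrative.

```python
def triangle_normal(v1, v2, v3):
    # n = (v2 - v1) x (v3 - v1), normalized.
    ax, ay, az = (v2[i] - v1[i] for i in range(3))
    bx, by, bz = (v3[i] - v1[i] for i in range(3))
    n = (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)

def barycentric_uv(p, v1, v2, v3, uv1, uv2, uv3):
    # Sub-triangle areas act as barycentric weights: S_a (p, v1, v2) pairs
    # with v3's UV, S_b (p, v1, v3) with v2's, S_c (p, v2, v3) with v1's.
    def area2(a, b, c):  # twice the 2-D triangle area
        return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))
    s_a, s_b, s_c = area2(p, v1, v2), area2(p, v1, v3), area2(p, v2, v3)
    s_t = s_a + s_b + s_c
    u = (s_c * uv1[0] + s_b * uv2[0] + s_a * uv3[0]) / s_t
    v = (s_c * uv1[1] + s_b * uv2[1] + s_a * uv3[1]) / s_t
    return (u, v)
```

The centroid of a right triangle maps to the average of the three vertex UVs, which is a quick sanity check on the weights.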
6. The building extraction method for the live-action three-dimensional model according to claim 3, characterized in that: the energy optimization function E(r_ij) is solved by a bottom-up, merging-based energy minimization method; each primitive outside the set of center primitives is allocated to a cluster according to a mapping function that assigns primitive p_j to the center primitive cp_i minimizing the heterogeneity distance D(p_j, cp_i) between them, so that each primitive is grouped with the cluster center primitive to which its heterogeneity distance is minimal;
the bottom-up, merging-based energy minimization method first adds a regularization term to the energy optimization function, i.e., E(r_ij) = J(r_ij) + λ·N_c, where N_c denotes the current number of cluster centers and λ is a regularization parameter; the initial value of λ is set to the median of the minimum heterogeneity distances between each primitive and its neighboring primitives, and λ is doubled at each iteration; initially, every primitive is set as the center primitive of its own cluster, and center primitives are merged bottom-up continuously until the number of clusters is reduced to k.
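A bottom-up merging scheme of this kind can be sketched as below. This is a simplified greedy variant: the λ-doubling schedule and the restriction of merges to topological neighbors are omitted, and the merge cost is just the centroid distance, so treat it as an illustration of the merge-until-k structure rather than the patent's exact solver.

```python
# Bottom-up merging: every primitive starts as its own cluster center, and
# the cheapest pair of clusters (by centroid heterogeneity distance) is
# merged repeatedly until only k clusters remain.
def merge_clusters(points, k, dist):
    clusters = [[p] for p in points]  # initially each primitive is a center
    while len(clusters) > k:
        best, pair = float("inf"), None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                ci = [sum(c) / len(clusters[i]) for c in zip(*clusters[i])]
                cj = [sum(c) / len(clusters[j]) for c in zip(*clusters[j])]
                d = dist(ci, cj)
                if d < best:
                    best, pair = d, (i, j)
        i, j = pair
        clusters[i] += clusters.pop(j)  # merge the cheapest pair
    return clusters
```

Four 1-D points in two tight groups merge into the expected two clusters of two.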
7. The building extraction method for the live-action three-dimensional model according to claim 1, characterized in that: the specific implementation of step 4 is as follows:
S401: from the cluster set, select a cluster as the seed cluster of plane S_m and remove it from the cluster set, where plane S_m is a subset of the cluster set; in S401, the centroid of each cluster in the set is computed, the curvature at all cluster centroids is then computed, the clusters are sorted by centroid curvature from small to large, and seed clusters are selected sequentially in that order;
S402: compute the set of n1 neighboring clusters around the seed cluster; for each neighboring cluster, judge whether it has the same properties as the seed cluster according to the cosine similarity measure criterion; if the criterion is satisfied, merge the neighboring cluster into the plane S_m where the seed cluster lies and remove it from the cluster set; if the criterion is not satisfied, perform no operation on the cluster; here θ is an angle threshold and D_s denotes the cosine similarity measure;
S403: take the clusters newly merged into plane S_m during S402 one by one as new seed clusters of plane S_m, and iteratively execute step S402 until no remaining cluster satisfies the cosine similarity measure criterion;
S404: iteratively execute S401–S403 until the cluster set is empty; save the plane S_m detected in each iteration to form a candidate plane feature set S; post-process S by removing candidate planes containing fewer than n2 clusters, and take the remaining candidate planes as the final plane detection result.
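The region-growing loop of S401–S404 can be sketched as follows. The cluster representation (dicts with `normal`, `curvature` and precomputed `neighbors`) and the default threshold values are our assumptions for illustration.

```python
import math

# Region-growing plane detection: clusters sorted by curvature seed planes
# (S401); neighbors whose normals pass a cosine-similarity test are merged
# and become new seeds (S402/S403); small planes are filtered out (S404).
def detect_planes(clusters, cos_thresh=math.cos(math.radians(10)), n2=2):
    order = sorted(range(len(clusters)), key=lambda i: clusters[i]['curvature'])
    unused, planes = set(order), []
    for seed in order:                      # lowest-curvature seed first
        if seed not in unused:
            continue
        unused.discard(seed)
        plane, front = [seed], [seed]
        while front:                        # grow through neighboring clusters
            cur = front.pop()
            for nb in clusters[cur]['neighbors']:
                if nb in unused:
                    cos = abs(sum(a * b for a, b in zip(
                        clusters[cur]['normal'], clusters[nb]['normal'])))
                    if cos >= cos_thresh:   # same orientation: merge into plane
                        unused.discard(nb)
                        plane.append(nb)
                        front.append(nb)    # newly merged cluster seeds growth
        planes.append(plane)
    return [p for p in planes if len(p) >= n2]  # drop undersized planes
```

Two coplanar neighboring clusters grow into one plane, while a lone cluster with a different normal is discarded by the n2 filter.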
8. The building extraction method for the live-action three-dimensional model according to claim 1, characterized in that: the specific implementation of step 5 is as follows:
first, green vegetation is eliminated based on color features, specifically: compute the texture values of all primitives in each plane and take their average as the texture value of the plane, then compute the excess-green-minus-excess-red index of each plane: ExG−ExR = 3g − 2.4r − b, where r, g and b are the color components of the plane; then automatically compute the optimal elimination threshold t of ExG−ExR using the maximum between-class variance (Otsu) method, and if the ExG−ExR of a plane is greater than the threshold t, regard it as a green vegetation plane and eliminate it;
then, low objects are filtered out using the height relative to the ground, specifically: compute the centroid of each ground primitive in the ground primitive set separated in step 2 and build a KD-tree spatial index structure; compute the relative ground elevation of each primitive in each plane, namely the difference between the primitive centroid elevation and the average elevation of the centroids of its k_3 nearest ground primitives, where the k_3 neighboring ground primitive centroids are retrieved through the KD-tree spatial index structure; take the maximum relative ground elevation over all primitives in a plane as the relative ground elevation of the plane, and if the relative ground elevation of the plane is less than a threshold k_4, regard the plane as a low ground-object plane and eliminate it; after eliminating the low ground-object planes, the building body structure plane set and the non-building plane set are obtained.
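The vegetation-removal step can be sketched as below. The coarse linear scan in `otsu_threshold` is a simplified stand-in for a histogram-based maximum between-class variance search, and all function names and the `bins` parameter are illustrative.

```python
# Vegetation removal via the ExG-ExR index with an Otsu-style threshold.
def exg_exr(r, g, b):
    return 3 * g - 2.4 * r - b

def otsu_threshold(values, bins=64):
    # Scan candidate thresholds; keep the one maximizing between-class variance.
    lo, hi = min(values), max(values)
    best_t, best_var = lo, -1.0
    for i in range(1, bins):
        t = lo + (hi - lo) * i / bins
        left = [v for v in values if v <= t]
        right = [v for v in values if v > t]
        if not left or not right:
            continue
        w1, w2 = len(left) / len(values), len(right) / len(values)
        m1, m2 = sum(left) / len(left), sum(right) / len(right)
        var = w1 * w2 * (m1 - m2) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def remove_vegetation(plane_colors):
    # plane_colors: mean (r, g, b) per plane, normalized to [0, 1].
    idx = [exg_exr(*c) for c in plane_colors]
    t = otsu_threshold(idx)
    return [i for i, v in enumerate(idx) if v <= t]  # indices of kept planes
```

A saturated green plane scores far above two grayish planes, so Otsu's split keeps only the non-vegetation planes.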
9. The building extraction method for the live-action three-dimensional model according to claim 1, characterized in that: the specific implementation of step 6 is as follows:
for each plane in the building body structure plane set, search the set of primitives topologically adjacent to it and mark those primitives as building primitives; likewise, for each newly marked primitive, search its topologically adjacent primitives and mark them as building primitives, and continue the search recursively until all primitives topologically reachable from each primitive have been found; here, a primitive topologically adjacent to p_i is a primitive sharing a vertex with p_i, or a primitive sharing an edge with p_i; to prevent a very small number of non-building primitives present in the building set from causing large numbers of non-building primitives to be recovered by mistake, the topological relations are uniformly partitioned in space, and the topologically reachable primitives of a primitive are traversed only within the partition where that primitive is located.
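The greedy recovery step amounts to a graph traversal over topological adjacency, which can be sketched as below. The spatial partitioning used to contain mistaken recovery is omitted, and the `adjacency` map (primitive id to adjacent primitive ids) is our assumed input format.

```python
from collections import deque

# Breadth-first greedy recovery: starting from primitives already labeled as
# the building body structure, re-mark every primitive reachable through
# topological adjacency (shared vertex or edge) as a building primitive.
def greedy_recover(building_ids, adjacency):
    recovered = set(building_ids)
    queue = deque(building_ids)
    while queue:
        cur = queue.popleft()
        for nb in adjacency.get(cur, ()):
            if nb not in recovered:
                recovered.add(nb)   # a mistakenly eliminated building primitive
                queue.append(nb)
    return recovered
```

Primitives 1 and 2 are recovered through the chain from seed 0, while the disconnected component {3, 4} stays non-building.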
10. A building extraction device for a live-action three-dimensional model is characterized by comprising the following modules:
the live-action three-dimensional model analysis module: used for inputting the live-action three-dimensional model, splicing all tiles of the obtained live-action three-dimensional model into one three-dimensional model, and parsing the live-action three-dimensional model into a geometric primitive set;
a ground filtering module: used for inputting the ground filtering parameters and the primitive set, and separating the input primitive set into a ground primitive set and a non-ground primitive set;
an over-segmentation module: used for inputting the over-segmentation parameters and the primitive set, and clustering the input primitive set into a cluster set with uniform properties and regular boundaries;
a plane feature detection module: used for inputting the plane feature detection parameters and a cluster set or primitive set, and clustering the input cluster set or primitive set into a plane set with regular boundaries;
a non-building plane greedy elimination module: used for inputting the greedy elimination parameters and the non-ground primitive set, and greedily eliminating non-building planes to obtain a plane set representing the building body structure and a non-building plane set;
a building primitive greedy recovery module: used for inputting the greedy recovery parameters, the plane set representing the building body structure and the non-building plane set, performing greedy recovery on the building body structure set based on the topological adjacency relations between primitives, and recovering the building primitives mistakenly deleted by the non-building plane greedy elimination module, so as to obtain the final building primitive set with integrity taken into account;
an input module: used for inputting the model saving path, and outputting the building live-action three-dimensional model and the non-building live-action three-dimensional model.
11. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that: the processor, when executing the computer program, implements the steps of the building extraction method for live-action three-dimensional models according to any one of claims 1 to 9.
12. A computer-readable storage medium storing computer software instructions, characterized in that: the computer software instructions, when executed, implement the steps of the building extraction method for live-action three-dimensional models according to any one of claims 1 to 9.
CN202211223721.XA 2022-10-08 2022-10-08 Building extraction method, device and equipment for live-action three-dimensional model Pending CN115661398A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211223721.XA CN115661398A (en) 2022-10-08 2022-10-08 Building extraction method, device and equipment for live-action three-dimensional model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211223721.XA CN115661398A (en) 2022-10-08 2022-10-08 Building extraction method, device and equipment for live-action three-dimensional model

Publications (1)

Publication Number Publication Date
CN115661398A true CN115661398A (en) 2023-01-31

Family

ID=84985545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211223721.XA Pending CN115661398A (en) 2022-10-08 2022-10-08 Building extraction method, device and equipment for live-action three-dimensional model

Country Status (1)

Country Link
CN (1) CN115661398A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116109522A (en) * 2023-04-10 2023-05-12 北京飞渡科技股份有限公司 Contour correction method, device, medium and equipment based on graph neural network


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination