CN117036617B - Method, system and computer system for quickly constructing large-scene three-dimensional model - Google Patents


Info

Publication number
CN117036617B
CN117036617B (application CN202311074295.2A)
Authority
CN
China
Prior art keywords
target
data
dimensional
dimensional data
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311074295.2A
Other languages
Chinese (zh)
Other versions
CN117036617A (en)
Inventor
袁杰祺
龙霞
张精平
李林
陈晓龙
陈媚特
程宇翔
王海松
黄震
吴凤敏
张智棚
李静泽
彭杨钊
蒋雪
殷明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Geographic Information And Remote Sensing Application Center (chongqing Surveying And Mapping Product Quality Inspection And Testing Center)
Original Assignee
Chongqing Geographic Information And Remote Sensing Application Center (chongqing Surveying And Mapping Product Quality Inspection And Testing Center)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Geographic Information And Remote Sensing Application Center (chongqing Surveying And Mapping Product Quality Inspection And Testing Center) filed Critical Chongqing Geographic Information And Remote Sensing Application Center (chongqing Surveying And Mapping Product Quality Inspection And Testing Center)
Priority to CN202311074295.2A priority Critical patent/CN117036617B/en
Publication of CN117036617A publication Critical patent/CN117036617A/en
Application granted granted Critical
Publication of CN117036617B publication Critical patent/CN117036617B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/56 Extraction of image or video features relating to colour
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06T2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data


Abstract

The invention discloses a method, a system and a computer system for quickly constructing a large-scene three-dimensional model, comprising the following steps: S1, acquiring target three-dimensional data of a target three-dimensional scene, and storing the target three-dimensional data in a server; S2, determining a classification label of the target three-dimensional data based on the feature type of the three-dimensional data; S3, loading a white model corresponding to the classification label in the browser, wherein the white model is pre-established in a local memory; S4, acquiring geometric features and color information of the target three-dimensional data; S5, adjusting the white model based on the geometric features of the target three-dimensional data so that the geometric features of the adjusted white model are consistent with those of the target three-dimensional data; and S6, mapping the color information of the target three-dimensional data onto the white model adjusted in step S5 to obtain the target three-dimensional scene. According to the invention, only part of the target three-dimensional data needs to be loaded from the server, so that the data volume requested from the server is greatly reduced and the loading speed is improved.

Description

Method, system and computer system for quickly constructing large-scene three-dimensional model
Technical Field
The invention relates to city-level three-dimensional modeling, in particular to a method, a system and a computer system for quickly constructing a large-scene three-dimensional model.
Background
Existing three-dimensional mapping data come in many formats. Among them, NURBS surfaces constructed and fitted from laser point clouds are particularly suitable for modelling complex curved surfaces and meet the requirements that a three-dimensional scene places on a model.
A point cloud is obtained by using laser to acquire, under a common spatial reference system, the spatial coordinates of sampling points on an object's surface, yielding a massive set of points that expresses the spatial distribution and surface characteristics of the target; this point set is called a point cloud. The attributes of a point cloud include spatial resolution, point position accuracy, surface normal vectors, etc.
With the continuous development of large-scale three-dimensional data acquisition technology in digital city construction, three-dimensional laser scanning has been widely applied. It can acquire the three-dimensional coordinates of a target scene, scan automatically and with high precision in three-dimensional space, faithfully describe the overall structure and morphological characteristics of the target scene, and rapidly acquire its point cloud data.
However, if an existing three-dimensional model needs to be loaded and displayed in a local browser, a large amount of data must be fetched from the server, so loading is very slow.
Disclosure of Invention
The invention aims to provide a method, a system and a computer system for quickly constructing a large-scene three-dimensional model, so as to solve the technical problem that existing three-dimensional data loads too slowly in a local browser.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a method for quickly constructing a large-scene three-dimensional model specifically comprises the following steps:
step S1, acquiring target three-dimensional data of a target three-dimensional scene, and storing the target three-dimensional data in a server;
s2, determining a classification label of the target three-dimensional data based on the characteristic type of the three-dimensional data;
step S3, loading a white model corresponding to the classification label in a browser, wherein the white model is pre-established in a local memory;
s4, acquiring geometric features and color information of the target three-dimensional data;
step S5, adjusting the white model based on the geometric features of the target three-dimensional data to enable the geometric features of the adjusted white model to be consistent with the geometric features of the target three-dimensional data;
and S6, mapping the color information of the target three-dimensional data to the white model adjusted in the step S5 to obtain a target three-dimensional scene.
Further, step S2, determining a classification label of the target three-dimensional data based on the feature type of the three-dimensional data, specifically comprises the following steps:
step S201: the target three-dimensional data comprise point cloud data, and the point cloud data of the server are subjected to clustering segmentation to obtain a plurality of point cloud clusters;
step S202: and identifying the shapes of the plurality of point cloud clusters based on a pre-established identification model to obtain the labels of the point cloud clusters.
Further, step S3, loading a white model corresponding to the classification label in the browser, specifically includes:
step S301: extracting a horizontal reference plane in the target three-dimensional data, and loading the reference plane into a local browser;
step S302: extracting the pose information of the target three-dimensional data;
step S303: determining the position of the three-dimensional data with the classification label in the reference plane, and mapping the white model to the corresponding position in the reference plane based on the pose information.
Further, the step S302: extracting pose information of the target three-dimensional data, including:
step S3021: mapping the target point cloud to a horizontal plane to obtain a plane graph;
step S3022: extracting edge data points of the plane graph, and determining a first center point of the plane graph based on the edge data points;
step S3023: establishing a straight line based on the first center point, and rotating the straight line around the first center point to obtain a plurality of groups of intersection points of the straight line and the edge data points;
step S3024: taking the line segment constructed by the group of intersection points farthest apart as a first long axis, and constructing the pose information of the target three-dimensional data based on the first center point and the first long axis.
Further, the step S303, mapping the white model to a corresponding position in the reference plane based on the pose information, specifically comprises:
step S3031: extracting a second long axis and a second center point of the bottom surface of the white model;
step S3032: adjusting the white model such that the second center point coincides with the first center point and the second long axis coincides with the first long axis.
Further, the step S4, acquiring the geometric features and the color information of the target three-dimensional data, specifically comprises the following steps:
step S401, extracting key points of target point cloud data, and taking the key points of the target point cloud data as geometric features of the target three-dimensional data;
and step S402, extracting color information of target point cloud data, and taking the color information of the target point cloud data as the color information of the target three-dimensional data.
Further, step S401, extracting key points of the target point cloud data, includes:
step S4011, mapping the target point cloud data into a pre-established three-dimensional coordinate system to determine each data point A(x_i, y_i, z_i) and its elevation data H(x_i, y_i);
step S4012, scanning the data points line by line along the x-axis, and calculating the elevation difference ΔH_i of adjacent data points, where the elevation difference ΔH_i is:
ΔH_i = H(x_i, y_i) − H(x_{i−1}, y_i)
step S4013, determining whether data point A(x_i, y_i, z_i) is an inflection point; when the elevation difference changes sign, i.e. ΔH_i · ΔH_{i+1} < 0, point A(x_i, y_i, z_i) is determined to be an inflection point and is taken as a key point;
and step S4014, when the distance between adjacent inflection points is larger than N data points, extracting several data points between the adjacent inflection points as additional key points so that adjacent key points are at most M data points apart; the smaller M is, the more accurate the result.
Further, step S5, adjusting the white model based on the geometric features of the target three-dimensional data, comprises:
step S501, mapping a plurality of key points to corresponding positions above a reference surface;
and step S502, adjusting the surface of the white model based on a plurality of key points so that the key points are positioned on the surface of the white model, wherein the adjustment mode comprises stretching, compressing, cutting and filling.
To achieve the second purpose of the invention, the following technical scheme is adopted:
a system for quickly constructing a three-dimensional model of a large scene, comprising: a first information acquisition module, an information identification module, a local loading module, a second information acquisition module, an adjustment module and a mapping module,
the first information acquisition module is used for acquiring target three-dimensional data contained in a target three-dimensional scene, and the target three-dimensional data are stored in the server;
an information identification module for determining a classification tag of the target three-dimensional data based on a shape of the three-dimensional data;
the local loading module is used for loading a white model corresponding to the classification label in the browser, wherein the white model is pre-established in a local memory;
the second information acquisition module is used for acquiring geometric features and color information of the target three-dimensional data;
the adjusting module is used for adjusting the white model based on the geometric characteristics of the target three-dimensional data so that the geometric characteristics of the adjusted white model are consistent with those of the target three-dimensional data;
and the mapping module is used for mapping the color information of the target three-dimensional data to the adjusted white model to obtain a target three-dimensional scene.
To achieve the third purpose of the invention, the following technical scheme is adopted:
a computer system for quickly constructing a three-dimensional model of a large scene, comprising a computer readable storage medium executing a computer program of the method for quickly constructing a three-dimensional model of a large scene.
The beneficial effects of the invention are as follows:
according to the method, the classification label of the target three-dimensional data is determined by acquiring the target three-dimensional data contained in the target three-dimensional scene, and a white model corresponding to the classification label is loaded in a browser; obtaining geometric features and color information of target three-dimensional data; adjusting the white model based on the geometric features of the target three-dimensional data so that the geometric features of the adjusted white model are consistent with the geometric features of the target three-dimensional data; and mapping the color information of the target three-dimensional data to the adjusted white model to obtain the target three-dimensional scene.
The method builds a general white model locally; during loading, the white model corresponding to the target three-dimensional data is loaded, the shape features of the target three-dimensional data are extracted to adjust the white model, and finally the color information of the target three-dimensional data is added to finish loading. The application improves the loading speed of three-dimensional data by loading the white model locally: when the local browser loads three-dimensional data, the locally pre-established white model is loaded in the browser, and the shape features of the target three-dimensional data in the server are then fused with the white model so that the shape of the white model is consistent with the shape of the target three-dimensional data. Because the white model corresponds to the target three-dimensional data, the amount of calculation in the fusion adjustment process is small; finally, the color information is loaded. The method therefore simplifies loading the target three-dimensional data from the server, greatly reduces the data volume requested from the server, and effectively improves the loading speed.
Drawings
FIG. 1 is a schematic flow chart of the method in embodiment 1;
FIG. 2 is a flow chart of the fusion of shape features and color features of the three-dimensional data of a target in embodiment 1;
FIG. 3 is a schematic block diagram of the system of embodiment 2;
fig. 4 is a schematic structural diagram of a computer system in embodiment 3.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention.
Embodiment 1: referring to fig. 1 and 2, a method for quickly constructing a three-dimensional model of a large scene includes the following steps:
step S1: acquiring target three-dimensional data of a target three-dimensional scene, wherein the target three-dimensional scene comprises various three-dimensional data, such as: terrain models, building models, vegetation models, road models, river models, and the like. The target three-dimensional scene is a three-dimensional scene which needs to be built in a local browser, the target three-dimensional data is stored in a server, and the three-dimensional data in the server comprises point cloud data.
Step S2, determining a classification label of the target three-dimensional data based on the characteristic type of the three-dimensional data;
the method and the device identify the part of three-dimensional data with obvious characteristics in the target three-dimensional data through the identification model, and obtain the corresponding classification label. Parts of the three-dimensional data with more obvious characteristics, such as: labels are added to building models, vegetation models, and the like.
The specific method for classifying the labels by the three-dimensional data comprises the following steps:
step S201: clustering and dividing the point cloud data of the server to obtain a plurality of point cloud clusters;
step S202: identifying the shapes of the plurality of point cloud clusters based on a pre-established recognition model to obtain the labels of the point cloud clusters.
The method for establishing the identification model comprises the following steps:
step S2021, acquiring a plurality of sample data, wherein the sample data is a point cloud cluster for removing color information;
step S2022, adding a classification label to each sample data to obtain a training data set;
and step S2023, training the artificial neural network based on the training data set to obtain an identification model.
According to the invention, the recognition model is trained in a mode of acquiring sample data in advance, so that the recognition model can effectively recognize the point cloud cluster.
In this embodiment, a point cloud clustering algorithm is used to segment the point cloud data, for example, a DBSCAN algorithm is used to separate building models, vegetation models, and the like in the point cloud data. And then, identifying through the identification model, so as to determine the type of partial data with obvious characteristics in the target three-dimensional data.
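The clustering of step S201 can be sketched with the DBSCAN algorithm the embodiment names. A minimal sketch using scikit-learn; the `eps` and `min_samples` values are illustrative assumptions, not parameters from the patent:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def segment_point_cloud(points, eps=0.5, min_samples=10):
    """Cluster an (N, 3) point cloud into point-cloud clusters (step S201).

    Returns a dict mapping cluster label -> member points; DBSCAN noise
    points (label -1) are discarded.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return {lbl: points[labels == lbl] for lbl in set(labels) if lbl != -1}

# Two well-separated synthetic clusters standing in for, e.g., two buildings
rng = np.random.default_rng(0)
cloud = np.vstack([
    rng.normal(loc=(0.0, 0.0, 0.0), scale=0.1, size=(50, 3)),
    rng.normal(loc=(10.0, 10.0, 0.0), scale=0.1, size=(50, 3)),
])
clusters = segment_point_cloud(cloud)
```

Each resulting cluster would then be passed to the recognition model of step S202 to obtain its label.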
Step S3, loading a white model corresponding to the classification label in a browser, wherein the white model is pre-established in a local memory and is a general model of a building or terrain, such as: a house white model, a tree white model, a bridge white model.
Loading a white model corresponding to the classification label in a browser, wherein the method specifically comprises the following steps:
step S301: extracting a horizontal reference plane in the target three-dimensional data, and loading the reference plane into a local browser;
step S302: extracting the pose information of the target three-dimensional data;
the method specifically comprises the following steps:
step S3021: mapping the target point cloud to a horizontal plane to obtain a plane graph;
step S3022: extracting edge data points of the plane graph, and determining a first center point of the plane graph based on the edge data points; the first center point C_1 is (x_1, y_1), with x_1 = (1/n)·Σx_i and y_1 = (1/n)·Σy_i, where x_i is the abscissa of the i-th of the n edge data points and y_i is its ordinate;
step S3023: establishing a straight line based on the first center point, and rotating the straight line around the first center point to obtain a plurality of groups of intersection points of the straight line and the edge data points;
step S3024: taking the line segment constructed by the group of intersection points farthest apart as a first long axis, and constructing the pose information of the target three-dimensional data based on the first center point and the first long axis.
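The pose-extraction steps S3021 to S3024 can be sketched as follows. Instead of literally rotating a line around the first center point, this sketch finds the farthest pair of projected points directly, which yields the same first long axis; treating all projected points (rather than only edge points) as candidates is a further simplification:

```python
import numpy as np

def extract_pose(points):
    """Derive pose information from a target point cloud (steps S3021-S3024).

    Returns the first center point, the end points of the first long axis,
    and the axis orientation angle in radians.
    """
    xy = points[:, :2]            # S3021: project onto the horizontal plane
    center = xy.mean(axis=0)      # S3022: center of the projected points
    # S3023-S3024: the rotating-line search reduces to finding the pair of
    # projected points farthest apart (O(n^2), acceptable for a sketch)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    direction = xy[j] - xy[i]
    angle = np.arctan2(direction[1], direction[0])
    return center, (xy[i], xy[j]), angle
```

For a rectangular footprint this returns its centroid and the diagonal through it, matching the longest chord the rotating-line search would find.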
Step S303: determining the position of the three-dimensional data with the classification label in the reference plane, and mapping the white model to the corresponding position in the reference plane based on the pose information.
Mapping the white model to the corresponding position in the reference plane based on the pose information specifically comprises:
step S3031: extracting a second long axis and a second center point of the bottom surface of the white model;
step S3032: the white model is adjusted such that the second center point coincides with the first center point and the second long axis coincides with the first long axis.
The projected center point and long axis of both the white model and the target point cloud are acquired; when the white model is loaded, the center points and the long axes are made to coincide, so that the white model is loaded at the position corresponding to the target point cloud with a pose consistent with that of the target three-dimensional data.
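Steps S3031 and S3032 amount to a rigid 2-D transform: rotate the white model so its second long axis matches the first long axis, then translate its second center point onto the first center point. A minimal sketch, with the axes represented as orientation angles (an assumption of this sketch):

```python
import numpy as np

def align_white_model(model_xyz, c2, angle2, c1, angle1):
    """Place a white model on the reference plane (steps S3031-S3032).

    Rotates the model horizontally so its second long-axis angle `angle2`
    matches the first long-axis angle `angle1`, then translates its second
    center point `c2` onto the first center point `c1`. Heights (z) are
    left unchanged.
    """
    theta = angle1 - angle2
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    out = model_xyz.copy()
    # rotate about c2 in the horizontal plane, then translate c2 -> c1
    out[:, :2] = (out[:, :2] - c2) @ rot.T + c1
    return out
```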
In this embodiment, a horizontal reference plane is first extracted from the target three-dimensional data; there is a positional correspondence between the horizontal reference plane and the target three-dimensional data. The horizontal reference plane may be a reference plane in the three-dimensional data, or the plane of a particular piece of three-dimensional data. In order to make the pose and position of the locally loaded white model consistent with the target three-dimensional data, the invention extracts the position and pose information of the target relative to the horizontal reference plane, and then loads the white model into the local browser based on that position and pose information.
S4, obtaining geometric features and color information of the target three-dimensional data;
the method specifically comprises the following steps:
step S401: extracting key points of target point cloud data, and taking the key points of the target point cloud data as geometric features of the target three-dimensional data;
extracting key points of target point cloud data specifically comprises the following steps:
step S4011: mapping the target point cloud data to a pre-built oneIn a vertical three-dimensional coordinate system to determine each data point a (x i ,y i ,z i ) Elevation data H (x) i ,y i );
Step S4012: scanning the data points line by line along the x-axis, and calculating the elevation data difference delta H of adjacent data points i The elevation data difference delta H i The method comprises the following steps:
ΔH i =H(x i ,y i )-H(x i-1 ,y i )
step S4013: determining data point a (x i ,y i ,z i ) Whether it is an inflection point; if it isAt the time, the determination point a (x i ,y i ,z i ) The inflection point is taken as a key point;
step S4014: judging the distance between adjacent inflection points, and when the distance between the adjacent inflection points is larger than N data points, extracting a plurality of data points between the adjacent inflection points as key points so that the distance between the adjacent key points does not exceed M data points, wherein the number of N and M points is defined by a user, the number of the data points is conventionally 8-12, the smaller the number of defined data points is, the more accurate the final calculation result is, the larger the calculation amount is, and the calculation pressure on a system is increased; the greater the number of defined data points, the greater the distance between adjacent inflection points, which can result in distortion of the model data.
The method collects elevation data of a plurality of data points and judges whether each data point is an inflection point based on the elevation data; the inflection points are then taken as key points. Using inflection points as key points extracts the contour characteristics of the target three-dimensional data, which are used to verify and adjust the shape features of the white model so that they more closely match the original three-dimensional data. In addition, the application also extracts a number of intermediate data points as key points in order to preserve more shape information.
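Steps S4012 to S4014 for one scan line might be sketched as below. The inflection-point condition, which did not survive extraction in the original text, is assumed here to be a sign change of the elevation difference (ΔH_i · ΔH_{i+1} < 0); the defaults N=12 and M=8 merely illustrate the 8-to-12 range mentioned above, and keeping the line's two end points as key points is a further assumption:

```python
import numpy as np

def extract_keypoints(row_z, N=12, M=8):
    """Extract key-point indices from one x-axis scan line of elevations.

    S4012: dH_i = H_i - H_{i-1}.  S4013: an index is an inflection point
    when dH changes sign there (assumed condition dH_i * dH_{i+1} < 0).
    S4014: gaps longer than N samples are densified so that neighbouring
    key points are at most M samples apart.
    """
    dH = np.diff(row_z)
    keys = [0]                          # keep the first sample (assumption)
    for i in range(len(dH) - 1):
        if dH[i] * dH[i + 1] < 0:       # sign change -> inflection point
            keys.append(i + 1)
    keys.append(len(row_z) - 1)         # keep the last sample (assumption)
    dense = []
    for a, b in zip(keys, keys[1:]):
        dense.append(a)
        if b - a > N:                   # S4014: densify long gaps
            dense.extend(range(a + M, b, M))
    dense.append(keys[-1])
    return sorted(set(dense))
```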
Step S402: and extracting color information of target point cloud data, and taking the color information of the target point cloud data as the color information of the target three-dimensional data.
In this embodiment, lightweight loading is achieved by extracting key points from the point cloud data, while some shape information for verifying and adjusting the white model is also retained.
Step S5: adjusting the white model based on the geometric features of the target three-dimensional data so that the geometric features of the adjusted white model are consistent with the geometric features of the target three-dimensional data;
adjusting the white model based on the geometric characteristics of the target three-dimensional data specifically comprises the following steps:
step S501: mapping the plurality of key points to corresponding positions above the reference surface;
step S502: and adjusting the surface of the white model based on a plurality of key points so that the key points are all positioned on the surface of the white model, wherein the adjustment mode comprises stretching, compressing, cutting and filling.
Wherein, since some key points are extracted in the foregoing, these key points describe the external contour information of the three-dimensional model. The white model itself is a model built in advance close to the original three-dimensional model. Therefore, after the white model is adjusted in such ways as stretching, compressing, cutting and filling by using the key points, the shape characteristics of the white model can be more similar to those of the original model.
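A crude sketch of steps S501 and S502: each key point pulls the nearest white-model vertex onto itself, standing in for the stretching/compressing adjustment (cutting and filling change the mesh topology and are beyond this sketch; the `radius` threshold is an assumption):

```python
import numpy as np

def adjust_white_model(vertices, keypoints, radius=1.0):
    """Pull white-model vertices onto the key points (steps S501-S502).

    Any vertex within `radius` of a key point is snapped onto it, so the
    key points end up lying on the model surface. This approximates the
    stretch/compress adjustment only; topology changes are not handled.
    """
    out = vertices.copy()
    for k in keypoints:
        d = np.linalg.norm(out - k, axis=1)
        nearest = np.argmin(d)
        if d[nearest] <= radius:
            out[nearest] = k
    return out
```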
Step S6: and mapping the color information of the target three-dimensional data to the adjusted white model to obtain a target three-dimensional scene.
Each data point of the target three-dimensional data has coordinates, so the color information can be mapped into the white model by mapping it to the corresponding position above the horizontal reference plane based on those coordinates. In addition, for some original three-dimensional data loaded from the server, color information can be obtained by mapping, so as to obtain the final desired target three-dimensional scene.
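The color mapping of step S6 can be sketched as a nearest-neighbour transfer: each white-model vertex takes the color of the closest data point, using the data-point coordinates as described. A pure-NumPy sketch (at scale a k-d tree lookup would replace the linear scan):

```python
import numpy as np

def map_colors(model_vertices, data_points, data_colors):
    """Colour each white-model vertex from the target three-dimensional
    data: every vertex takes the colour of its nearest data point."""
    colors = np.empty((len(model_vertices), data_colors.shape[1]),
                      dtype=data_colors.dtype)
    for i, v in enumerate(model_vertices):
        nearest = np.argmin(np.linalg.norm(data_points - v, axis=1))
        colors[i] = data_colors[nearest]
    return colors
```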
The invention pre-establishes universal white models locally, such as house white models, tree white models and bridge white models, each carrying a local tag of its specific type. The classification label of the target three-dimensional data is determined by acquiring the target three-dimensional data contained in the target three-dimensional scene. The white model corresponding to the classification label is loaded in the browser: when the target three-dimensional scene model is loaded, the real model data is not requested directly; only the label and the spatial position coordinates of each model in the scene need to be obtained, and the local universal three-dimensional white model is loaded via the model's label. The geometric features and color information of the target three-dimensional data are then obtained; the white model is adjusted based on the geometric features of the target three-dimensional data so that the geometric features of the adjusted white model are consistent with those of the target three-dimensional data; and the color information of the target three-dimensional data is mapped onto the adjusted white model, dynamically changing the color of the white model according to the model and texture characteristics, to obtain the target three-dimensional scene. In short, a general white model is built locally, the white model corresponding to the target three-dimensional data is loaded at load time, the shape features of the target three-dimensional data are extracted to adjust the white model, and finally the color information of the target three-dimensional data is added to finish loading.
The method improves the loading speed of the three-dimensional data by locally loading the white model, and loads the locally pre-established white model in the browser when the local browser loads the three-dimensional data. And then fusing the shape characteristics of the target three-dimensional data in the server with the white model so that the shape of the white model is consistent with the shape of the target three-dimensional data. The white model corresponds to the target three-dimensional data, so that the calculated amount in the fusion adjustment process is less, and finally, the color information is loaded. Therefore, the method simplifies the loading of the target three-dimensional data from the server, greatly reduces the data volume required by the client from the server, and effectively improves the loading speed. According to the method and the device, the geometric shape and the color of the local white film are dynamically adjusted according to the server model label, so that the efficiency of local update or full update of the three-dimensional model is greatly improved. The system only needs to update the three-dimensional model label of the server, and the client-side white film data can be correspondingly adjusted.
Specific example 2:
as shown in fig. 3, a system for quickly constructing a three-dimensional model of a large scene includes: a first information acquisition module, an information identification module, a local loading module, a second information acquisition module, an adjustment module and a mapping module,
the first information acquisition module is configured to acquire target three-dimensional data included in a target three-dimensional scene, where the target three-dimensional scene includes multiple types of three-dimensional data, for example: terrain models, building models, vegetation models, road models, river models, and the like. The target three-dimensional scene is the three-dimensional scene to be built in the local browser; the target three-dimensional data is stored in a server, and the three-dimensional data in the server includes point cloud data.
An information identification module for determining a classification tag of the target three-dimensional data based on the shape of the three-dimensional data. The invention identifies, through an identification model, the parts of the target three-dimensional data with obvious characteristics and obtains the corresponding classification labels; labels are added to parts of the three-dimensional data with more obvious characteristics, such as building models and vegetation models.
The specific method for obtaining the classification labels of the three-dimensional data comprises the following steps:
step S201: clustering and segmenting the point cloud data of the server to obtain a plurality of point cloud clusters;
step S202: identifying the shapes of the plurality of point cloud clusters based on a pre-established identification model to obtain the labels of the point cloud clusters.
The method for establishing the identification model comprises the following steps:
step S2021, acquiring a plurality of sample data, wherein the sample data is a point cloud cluster for removing color information;
step S2022, adding a classification label to each sample data to obtain a training data set;
and step S2023, training the artificial neural network based on the training data set to obtain an identification model.
According to the invention, the recognition model is trained in a mode of acquiring sample data in advance, so that the recognition model can effectively recognize the point cloud cluster.
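Steps S2021–S2023 can be sketched as follows. The choice of bounding-box extents as the shape descriptor, the synthetic sample clusters, and the small scikit-learn MLP standing in for the artificial neural network are all illustrative assumptions; a production system would use real labelled point cloud clusters and a tuned network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def cluster_features(points):
    """Crude shape descriptor for a point cloud cluster (color removed, per
    S2021): bounding-box extents plus a height-to-footprint ratio."""
    ext = points.max(axis=0) - points.min(axis=0)
    return [ext[0], ext[1], ext[2], ext[2] / (ext[0] + ext[1] + 1e-9)]

def synthetic_cluster(kind):
    """Sample data: boxy 'building' clusters vs tall thin 'tree' clusters."""
    if kind == "building":
        return rng.uniform([0, 0, 0], [10, 8, 6], size=(200, 3))
    return rng.uniform([0, 0, 0], [2, 2, 9], size=(200, 3))

# Steps S2021/S2022: sample point cloud clusters with classification labels.
labels = ["building", "tree"] * 50
X = np.array([cluster_features(synthetic_cluster(k)) for k in labels])

# Step S2023: train the network on the labelled training data set.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, labels)
acc = model.score(X, labels)
```

Because the two synthetic shape classes are well separated in feature space, the small network fits the training set almost perfectly, which is all this sketch is meant to show.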
In this embodiment, a point cloud clustering algorithm is used to segment the point cloud data, for example, a DBSCAN algorithm is used to separate building models, vegetation models, and the like in the point cloud data. And then, identifying through the identification model, so as to determine the type of partial data with obvious characteristics in the target three-dimensional data.
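The clustering segmentation step can be illustrated with scikit-learn's DBSCAN on a toy point cloud; the `eps` and `min_samples` values here are illustrative assumptions that would be tuned to the point density of real survey data.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)

# Toy point cloud: two spatially separated objects (e.g. a building and a tree).
building = rng.normal(loc=[0.0, 0.0, 3.0], scale=0.5, size=(300, 3))
tree = rng.normal(loc=[20.0, 20.0, 4.0], scale=0.5, size=(300, 3))
points = np.vstack([building, tree])

# Cluster segmentation: points within eps of each other (with enough
# neighbours) form one point cloud cluster; label -1 marks noise.
clustering = DBSCAN(eps=1.5, min_samples=10).fit(points)
n_clusters = len(set(clustering.labels_)) - (1 if -1 in clustering.labels_ else 0)
```

Each resulting cluster would then be passed to the identification model to obtain its label.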
The local loading module is used for loading a white model corresponding to the classification label in the browser, wherein the white model is pre-established in a local memory and is a general model of a building or terrain. Loading a white model corresponding to the classification label in a browser specifically comprises the following steps:
step S301: extracting a horizontal reference plane in the target three-dimensional data, and loading the reference plane into a local browser;
step S302: extracting the posture information of the target three-dimensional data;
the method specifically comprises the following steps:
step S3021: mapping the target point cloud to a horizontal plane to obtain a plane graph;
step S3022: extracting edge data points of the plane graph, and determining a first center point of the plane graph based on the edge data points; wherein the first center point C_1 is (x_1, y_1), with x_1 = (1/n)·Σx_i and y_1 = (1/n)·Σy_i,
wherein x_i is the abscissa of the i-th edge data point, y_i is the ordinate of the i-th edge data point, and n is the number of edge data points;
step S3023: establishing a straight line based on the first center point, and rotating the straight line around the first center point to obtain a plurality of groups of intersection points of the straight line and the edge data points;
and taking a line segment constructed by the group of intersection points with the farthest distance as a first long axis, and constructing the posture information of the target three-dimensional data based on the first center point and the first long axis.
Step S303: determining the position of three-dimensional data with classification labels in the reference plane, and mapping a white model to a corresponding position in the reference plane based on the gesture information.
Mapping the white model to a corresponding position in the reference plane based on the gesture information specifically comprises:
step S3031: extracting a second long axis and a second center point of the bottom surface of the white model;
step S3032: the white model is adjusted such that the second center point coincides with the first center point and the second long axis coincides with the first long axis.
The projection center point and long axis of both the white model and the target point cloud are acquired; when the white model is loaded, the center points and the long axes are made to coincide, so that the white model is loaded at the position corresponding to the target point cloud and is consistent with the posture of the target three-dimensional data.
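Steps S3021–S3023 above can be sketched as follows. Rather than literally intersecting a rotating line with discrete edge points, this sketch sweeps candidate directions through the center point and takes the direction of maximum projected extent as the first long axis, which recovers the same axis; the angular resolution and the use of all projected points (not only edge points) are simplifying assumptions.

```python
import numpy as np

def extract_pose(points_3d):
    """Project a target point cloud to the horizontal plane (S3021), take the
    centroid as the first center point (S3022), then sweep directions through
    it and keep the one with the largest extent as the first long axis (S3023)."""
    plane = points_3d[:, :2]                      # S3021: map to horizontal plane
    center = plane.mean(axis=0)                   # S3022: first center point
    best_angle, best_span = 0.0, -1.0
    for angle in np.linspace(0.0, np.pi, 180, endpoint=False):  # S3023: rotate
        d = np.array([np.cos(angle), np.sin(angle)])
        proj = (plane - center) @ d
        span = proj.max() - proj.min()
        if span > best_span:
            best_span, best_angle = span, angle
    return center, best_angle, best_span

# Elongated footprint along the x-axis: the long axis should be near angle 0.
rng = np.random.default_rng(2)
pts = np.column_stack([rng.uniform(-10, 10, 500),
                       rng.uniform(-1, 1, 500),
                       rng.uniform(0, 5, 500)])
center, angle, span = extract_pose(pts)
```

Aligning the white model (steps S3031/S3032) then amounts to translating its second center point onto `center` and rotating its second long axis onto `angle`.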
The second information acquisition module is used for acquiring geometric features and color information of the target three-dimensional data; step S401: extracting key points of target point cloud data, and taking the key points of the target point cloud data as geometric features of the target three-dimensional data;
extracting key points of target point cloud data specifically comprises the following steps:
step S4011: mapping the target point cloud data into a pre-established three-dimensional coordinate system to determine, for each data point A(x_i, y_i, z_i), its elevation data H(x_i, y_i);
step S4012: scanning the data points line by line along the x-axis, and calculating the elevation data difference ΔH_i of adjacent data points, where the elevation data difference ΔH_i is:
ΔH_i = H(x_i, y_i) − H(x_{i−1}, y_i)
step S4013: determining whether data point A(x_i, y_i, z_i) is an inflection point; when ΔH_i·ΔH_{i+1} < 0, determining that point A(x_i, y_i, z_i) is an inflection point, and taking the inflection point as a key point;
step S4014: and judging the distance between adjacent inflection points, and extracting a plurality of data points between the adjacent inflection points as key points when the distance between the adjacent inflection points is larger than N data points, so that the distance between the adjacent key points is not larger than M data points.
The method collects elevation data of a plurality of data points and judges whether the data points are inflection points or not based on the elevation data. The inflection point is then taken as the key point. The inflection point is used as a key point, so that the outline characteristics of the target three-dimensional data can be extracted, and the method is used for verifying and adjusting the shape characteristics of the white model, so that the shape characteristics of the white model are more similar to the original three-dimensional data. In addition, the present application also extracts a large number of intermediate data points as key points in order to preserve more shape information.
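A one-row sketch of steps S4011–S4014, assuming the inflection test is a sign change of the adjacent elevation difference ΔH_i; the parameters `n_gap` and `m_step` stand in for the N and M of the text and are illustrative values.

```python
def extract_keypoints(heights, n_gap=4, m_step=2):
    """Scan one row of elevation values H, mark inflection points where the
    adjacent elevation difference changes sign (S4012/S4013), then densify
    long runs so adjacent key points are at most m_step apart (S4014)."""
    # S4012: elevation difference of adjacent data points.
    dh = [heights[i] - heights[i - 1] for i in range(1, len(heights))]
    # S4013: inflection where the difference changes sign.
    inflections = [i for i in range(1, len(dh)) if dh[i - 1] * dh[i] < 0]
    # S4014: insert intermediate key points when inflections are > n_gap apart.
    keypoints = list(inflections)
    bounds = [0] + inflections + [len(heights) - 1]
    for a, b in zip(bounds, bounds[1:]):
        if b - a > n_gap:
            keypoints.extend(range(a + m_step, b, m_step))
    return sorted(set(keypoints))

# A ridge profile: the peak at index 3 is an inflection, and the flat tail
# is densified so no two key points are more than m_step apart.
profile = [0, 1, 2, 3, 2, 1, 0, 0, 0, 0, 0, 0]
keys = extract_keypoints(profile)
```

The inflection points capture the contour of the data, while the inserted intermediate key points preserve shape information along long featureless stretches.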
Step S402: and extracting color information of target point cloud data, and taking the color information of the target point cloud data as the color information of the target three-dimensional data.
The adjusting module is used for adjusting the white model based on the geometric characteristics of the target three-dimensional data so that the geometric characteristics of the adjusted white model are consistent with those of the target three-dimensional data; the method specifically comprises the following steps:
step S501: mapping the plurality of key points to corresponding positions above the reference surface;
step S502: and adjusting the surface of the white model based on a plurality of key points so that the key points are all positioned on the surface of the white model, wherein the adjustment mode comprises stretching, compressing, cutting and filling.
And the mapping module is used for mapping the color information of the target three-dimensional data to the adjusted white model to obtain a target three-dimensional scene. Each data point of the target three-dimensional data has coordinates, and the color information can be mapped into the white model by mapping it, based on the data point coordinates, to the corresponding position above the horizontal reference plane. In addition, for some original three-dimensional data loaded from the server, color information can be obtained in the same mapping manner, so as to obtain the final desired target three-dimensional scene.
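The color-mapping step can be sketched as a nearest-neighbour transfer: each vertex of the adjusted white model takes the color of the closest target data point, assuming both share the same coordinate frame above the horizontal reference plane. The brute-force distance matrix is an implementation convenience for small inputs, not part of the patented method.

```python
import numpy as np

def map_colors(model_vertices, cloud_points, cloud_colors):
    """Assign each white-model vertex the color of the nearest target data
    point, matched through the shared coordinates above the reference plane."""
    # Pairwise distances between model vertices and target data points.
    dists = np.linalg.norm(
        model_vertices[:, None, :] - cloud_points[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    return cloud_colors[nearest]

# Toy data: a red point near the origin, a green point far away, and one
# model vertex close to each of them.
cloud = np.array([[0.0, 0.0, 0.0], [10.0, 10.0, 10.0]])
colors = np.array([[255, 0, 0], [0, 255, 0]])
verts = np.array([[0.1, 0.0, 0.0], [9.9, 10.0, 10.0]])
vert_colors = map_colors(verts, cloud, colors)
```

For large scenes a spatial index (e.g. a k-d tree) would replace the dense distance matrix, but the mapping itself is unchanged.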
Specific example 3:
as shown in fig. 4, a computer system for quickly constructing a three-dimensional model of a large scene. The computer system 400 includes a central processing unit (Central Processing Unit, CPU) 401 and a computer-readable storage medium including a read-only memory (Read-Only Memory, ROM) 402 and a random access memory (Random Access Memory, RAM) 403; various programs and data required for system operation are also stored in the RAM 403. The CPU 401, ROM 402, and RAM 403 are connected to each other by a bus 404. An input/output (Input/Output, I/O) interface 405 is also connected to bus 404.
The following components are connected to the I/O interface 405: an input section 406 including a keyboard, a mouse, and the like; an output portion 407 including a Cathode Ray Tube (CRT), a liquid crystal display (Liquid Crystal Display, LCD), and the like, a speaker, and the like; a storage section 408 including a hard disk or the like; and a communication section 409 including a network interface card such as a LAN (Local Area Network ) card, a modem, or the like. The communication section 409 performs communication processing via a network such as the internet. The drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed on the drive 410 as needed, so that a computer program read therefrom is installed into the storage section 408 as needed.
In this embodiment, the computer-readable storage medium stores the computer program of the method for quickly constructing a three-dimensional model of a large scene in embodiment 1. The computer program can be downloaded and installed from a network through the communication section 409 and/or installed from the removable medium 411. When executed by the central processing unit (CPU) 401, the computer program performs the various functions defined in the system of the present application.
It should be noted that, the computer readable storage medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium may be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with a computer-readable computer program embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable storage medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. A computer program embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Where each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software, or may be implemented by means of hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
Another aspect of the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to perform the method of embodiment 1; the details are therefore omitted here. The computer program product or computer program includes computer instructions stored in a computer-readable storage medium, which may be included in the system described in embodiment 2 or may exist alone without being assembled into the system.
The technical scheme provided by the invention is described in detail. The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to facilitate an understanding of the method of the present invention and its core ideas. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations of the invention can be made without departing from the principles of the invention and these modifications and adaptations are intended to be within the scope of the invention as defined in the following claims.

Claims (8)

1. The method for quickly constructing the large-scene three-dimensional model is characterized by comprising the following steps of:
step S1, acquiring target three-dimensional data of a target three-dimensional scene;
s2, determining a classification label of the target three-dimensional data based on the characteristic type of the three-dimensional data;
step S3, loading a white model corresponding to the classification label in a browser, wherein the white model is pre-established in a local memory;
comprising the following steps:
step S301: extracting a horizontal reference plane in the target three-dimensional data, and loading the reference plane into a local browser;
step S302: extracting the posture information of the target three-dimensional data; comprising the following steps:
step S3021: mapping the target point cloud to a horizontal plane to obtain a plane graph;
step S3022: extracting edge data points of the plane image, and determining a first center point of the plane graph based on the edge data points;
step S3023: establishing a straight line based on the first center point, and rotating the straight line around the first center point to obtain a plurality of groups of intersection points of the straight line and the edge data points;
step S3024: taking a line segment constructed by a group of intersection points with the farthest distance as a first long axis, and constructing the posture information of the target three-dimensional data based on the first center point and the first long axis;
step S303: determining the position of three-dimensional data with a classification label in the reference plane, and mapping a white model to a corresponding position in the reference plane based on the gesture information;
s4, acquiring geometric features and color information of the target three-dimensional data;
step S5, adjusting the white model based on the geometric features of the target three-dimensional data to enable the geometric features of the adjusted white model to be consistent with the geometric features of the target three-dimensional data;
and S6, mapping the color information of the target three-dimensional data to the white model adjusted in the step S5 to obtain a target three-dimensional scene.
2. The method for quickly constructing a three-dimensional model of a large scene according to claim 1, wherein the step S2 of determining the classification label of the target three-dimensional data based on the feature type of the three-dimensional data specifically comprises:
step S201: the target three-dimensional data comprise point cloud data, and the point cloud data of the server are subjected to clustering segmentation to obtain a plurality of point cloud clusters;
step S202: and identifying the shapes of the plurality of point cloud clusters based on a pre-established identification model to obtain the labels of the point cloud clusters.
3. The method for quickly constructing a three-dimensional model of a large scene according to claim 1, wherein said step S303: mapping the white model to a corresponding position in the reference plane based on the gesture information specifically comprises:
step S3031: extracting a second long axis and a second center point of the bottom surface of the white model;
step S3032: the white model is adjusted such that the second center point coincides with the first center point and the second long axis coincides with the first long axis.
4. The method for quickly constructing a three-dimensional model of a large scene according to claim 1, wherein the step S4 is to acquire geometric features and color information of the target three-dimensional data, and specifically comprises the following steps:
step S401, extracting key points of target point cloud data, and taking the key points of the target point cloud data as geometric features of the target three-dimensional data;
and step S402, extracting color information of target point cloud data, and taking the color information of the target point cloud data as the color information of the target three-dimensional data.
5. The method for quickly constructing a three-dimensional model of a large scene as defined in claim 4, wherein the step S401 of extracting key points of the cloud data of the target point comprises:
step S4011, mapping the target point cloud data into a pre-established three-dimensional coordinate system to determine, for each data point A(x_i, y_i, z_i), its elevation data H(x_i, y_i);
step S4012, scanning the data points line by line along the x-axis, and calculating the elevation data difference ΔH_i of adjacent data points, where the elevation data difference ΔH_i is:
ΔH_i = H(x_i, y_i) − H(x_{i−1}, y_i)
step S4013, determining whether data point A(x_i, y_i, z_i) is an inflection point; when ΔH_i·ΔH_{i+1} < 0, determining that point A(x_i, y_i, z_i) is an inflection point, and taking the inflection point as a key point;
step S4014, judging the distance between adjacent inflection points, and when the distance between adjacent inflection points is larger than N data points, extracting a plurality of data points between the adjacent inflection points as key points, so that the distance between adjacent key points does not exceed M data points, wherein the smaller the spacing between data points, the more accurate the result.
6. The method for quickly constructing a three-dimensional model of a large scene as recited in claim 1, wherein the step S5 of adjusting the white model based on the geometric features of the target three-dimensional data comprises:
step S501, mapping a plurality of key points to corresponding positions above a reference surface;
and step S502, adjusting the surface of the white model based on a plurality of key points so that the key points are positioned on the surface of the white model, wherein the adjustment mode comprises stretching, compressing, cutting and filling.
7. A three-dimensional model system for quickly constructing a large scene is characterized in that: comprising the following steps: a first information acquisition module, an information identification module, a local loading module, a second information acquisition module, an adjustment module and a mapping module,
the first information acquisition module is used for acquiring target three-dimensional data contained in a target three-dimensional scene;
an information identification module for determining a classification tag of the target three-dimensional data based on a shape of the three-dimensional data;
the local loading module is used for loading a white model corresponding to the classification label in the browser, wherein the white model is pre-established in a local memory and comprises the following components:
extracting a horizontal reference plane in the target three-dimensional data, and loading the reference plane into a local browser; extracting the posture information of the target three-dimensional data; determining the position of three-dimensional data with a classification label in the reference plane, and mapping a white model to a corresponding position in the reference plane based on the gesture information;
the extracting the gesture information of the target three-dimensional data comprises the following steps: mapping the target point cloud to a horizontal plane to obtain a plane graph; extracting edge data points of the plane image, and determining a first center point of the plane graph based on the edge data points; establishing a straight line based on the first center point, and rotating the straight line around the first center point to obtain a plurality of groups of intersection points of the straight line and the edge data points; taking a line segment constructed by a group of intersection points with the farthest distance as a first long axis, and constructing the posture information of the target three-dimensional data based on the first center point and the first long axis; the second information acquisition module is used for acquiring geometric features and color information of the target three-dimensional data;
the adjusting module is used for adjusting the white model based on the geometric characteristics of the target three-dimensional data so that the geometric characteristics of the adjusted white model are consistent with those of the target three-dimensional data;
and the mapping module is used for mapping the color information of the target three-dimensional data to the adjusted white model to obtain a target three-dimensional scene.
8. A computer system for quickly constructing a three-dimensional model of a large scene, characterized by comprising: a computer-readable storage medium storing a computer program for executing the method for quickly constructing a three-dimensional model of a large scene according to any one of claims 1 to 6.
CN202311074295.2A 2023-08-24 2023-08-24 Method, system and computer system for quickly constructing large-scene three-dimensional model Active CN117036617B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311074295.2A CN117036617B (en) 2023-08-24 2023-08-24 Method, system and computer system for quickly constructing large-scene three-dimensional model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311074295.2A CN117036617B (en) 2023-08-24 2023-08-24 Method, system and computer system for quickly constructing large-scene three-dimensional model

Publications (2)

Publication Number Publication Date
CN117036617A CN117036617A (en) 2023-11-10
CN117036617B true CN117036617B (en) 2024-04-05

Family

ID=88622606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311074295.2A Active CN117036617B (en) 2023-08-24 2023-08-24 Method, system and computer system for quickly constructing large-scene three-dimensional model

Country Status (1)

Country Link
CN (1) CN117036617B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993783A (en) * 2019-03-25 2019-07-09 北京航空航天大学 A kind of roof and side optimized reconstruction method towards complex three-dimensional building object point cloud
US10621779B1 (en) * 2017-05-25 2020-04-14 Fastvdo Llc Artificial intelligence based generation and analysis of 3D models
CN113129352A (en) * 2021-04-30 2021-07-16 清华大学 Sparse light field reconstruction method and device
CN116612242A (en) * 2023-06-01 2023-08-18 辽宁工程技术大学 Urban road three-dimensional modeling method based on point cloud data

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10621779B1 (en) * 2017-05-25 2020-04-14 Fastvdo Llc Artificial intelligence based generation and analysis of 3D models
CN109993783A (en) * 2019-03-25 2019-07-09 北京航空航天大学 A kind of roof and side optimized reconstruction method towards complex three-dimensional building object point cloud
CN113129352A (en) * 2021-04-30 2021-07-16 清华大学 Sparse light field reconstruction method and device
CN116612242A (en) * 2023-06-01 2023-08-18 辽宁工程技术大学 Urban road three-dimensional modeling method based on point cloud data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李莹; 林宗坚; 苏国中; 杨应. 3D model reconstruction from Smart 3D data. Science of Surveying and Mapping. 2017, (Issue 09), full text. *

Also Published As

Publication number Publication date
CN117036617A (en) 2023-11-10

Similar Documents

Publication Publication Date Title
CN108764048B (en) Face key point detection method and device
US20200160178A1 (en) Learning to generate synthetic datasets for traning neural networks
Osher et al. Geometric level set methods in imaging, vision, and graphics
US8384716B2 (en) Image processing method
CN107798725B (en) Android-based two-dimensional house type identification and three-dimensional presentation method
WO2020093950A1 (en) Three-dimensional object segmentation method and device and medium
US11983815B2 (en) Synthesizing high resolution 3D shapes from lower resolution representations for synthetic data generation systems and applications
CN112347550A (en) Coupling type indoor three-dimensional semantic graph building and modeling method
CN111460193B (en) Three-dimensional model classification method based on multi-mode information fusion
CN116563493A (en) Model training method based on three-dimensional reconstruction, three-dimensional reconstruction method and device
Wei et al. GeoDualCNN: Geometry-supporting dual convolutional neural network for noisy point clouds
CN117036617B (en) Method, system and computer system for quickly constructing large-scene three-dimensional model
CN115239892B (en) Method, device and equipment for constructing three-dimensional blood vessel model and storage medium
CN111275747A (en) Virtual assembly method, device, equipment and medium
US20220374556A1 (en) Parameterization of digital organic geometries
CN114926591A (en) Multi-branch deep learning 3D face reconstruction model training method, system and medium
CN115994944A (en) Three-dimensional key point prediction method, training method and related equipment
CN113487741A (en) Dense three-dimensional map updating method and device
Zhang et al. Multi-Data UAV Images for Large Scale Reconstruction of Buildings
Mafipour et al. Heuristic optimization for digital twin modeling of existing bridges from point cloud data by parametric prototype models
CN117649530B (en) Point cloud feature extraction method, system and equipment based on semantic level topological structure
Ma et al. Research and application of personalized human body simplification and fusion method
Rojas et al. Quantitative Comparison of Hole Filling Methods for 3D Object Search.
Zhu et al. Robust quasi-uniform surface meshing of neuronal morphology using line skeleton-based progressive convolution approximation
CN113160186B (en) Lung lobe segmentation method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant