CN115471634A - Modeling method and device for urban green plant twins - Google Patents

Modeling method and device for urban green plant twins

Info

Publication number
CN115471634A
Authority
CN
China
Prior art keywords
vegetation
vector
plant
data
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211330591.XA
Other languages
Chinese (zh)
Other versions
CN115471634B (en)
Inventor
杨逸伦
杨健
关雨
黄金森
程方
池晶
付智能
张银松
凌家安
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Geospace Information Technology Co ltd
Original Assignee
Geospace Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Geospace Information Technology Co ltd filed Critical Geospace Information Technology Co ltd
Priority to CN202211330591.XA
Publication of CN115471634A
Application granted
Publication of CN115471634B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 — Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T 19/00 — Manipulating 3D models or images for computer graphics
    • G06T 19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/70 — Arrangements using pattern recognition or machine learning
    • G06V 10/764 — Arrangements using classification, e.g. of video objects
    • G06V 10/82 — Arrangements using neural networks
    • G06V 20/00 — Scenes; Scene-specific elements
    • G06V 20/10 — Terrestrial scenes
    • G06V 20/188 — Vegetation
    • G06T 2219/00 — Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 — Indexing scheme for editing of 3D models
    • G06T 2219/2016 — Rotation, translation, scaling

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computer Graphics (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical field of GIS (geographic information systems) and provides a modeling method and device for urban green plant twins. The method comprises the following steps: S1, computing and extracting the coverage and attributes of urban green vegetation from an oblique photography model, and generating urban green plant vector data consisting of vegetation vector planes and single-plant vector point data; S2, according to the urban green plant vector data, elevation-sampling the vegetation vector planes and single-plant vector points in a three-dimensional engine, and generating the corresponding twin vegetation model from the final attributes. The method simplifies the production of urban green plant twin models, improves production efficiency and reduces manual intervention, so that the twin model corresponds to real-world data and the urban green plant model becomes editable, manageable and usable.

Description

Modeling method and device for urban green plant twins
Technical Field
The invention belongs to the technical field of GIS (geographic information systems), and in particular relates to a modeling method and device for urban green plant twins.
Background
In recent years, with advances in digital twin technology, how to build better city-scale digital twin platforms has become a hot topic in the smart-city field. Most urban physical features, such as roads, rivers and buildings, can already be expressed through 3D modeling. Green plants covering the city, however, are difficult to express by modeling because of their wide coverage, many species and complex shapes.
At present, when a digital twin platform is built, the three-dimensional reconstruction of the urban green coverage area in the three-dimensional engine is mostly done by hand. First, the operator determines the extent of urban greenery against the urban orthophoto (DOM) or oblique photography data (e.g. in OSGB organisation). Then, using the vegetation painting capability of a three-dimensional engine (such as Unreal Engine), the green coverage area is painted with a brush and the engine generates urban green plants of the corresponding types in batches. Finally, the position and shape of the automatically generated green plant models are adjusted manually so that their appearance roughly matches the real scene. Because little urban green-coverage data is produced in current data production workflows, the operator must to a large extent judge the existing green coverage manually from the DOM and OSGB models. Producing the green plant twin models in the three-dimensional engine then requires a large investment of artists, who brush vegetation models over the determined extent and finally fine-tune the types and positions of the models by hand. On the one hand, this approach gives operators extremely low productivity and makes it difficult to produce green plant twin models for large cities; on the other hand, it involves too many manual steps, the positional accuracy of the vegetation is hard to guarantee, and the manually brushed vegetation models lack individualized, per-plant attribute information, so they cannot be managed effectively, cannot be associated with the business data of the real scene, and cannot satisfy fine-grained smart-city management.
Taking a 43-square-kilometre district of Jingdezhen as an example, practical verification showed that artists drawing and fine-tuning the urban green plants against satellite imagery and the oblique model spent 21 person-days to produce the green plant twin model of that district. For large urban scenes covering thousands of square kilometres this is clearly impractical.
The traditional modeling workflow for urban green plant twins therefore requires a large investment of artists, and the process is complex, time-consuming and labour-intensive. Because the result depends heavily on each artist's subjective impression of the scene, it is difficult to achieve a uniform greening effect across the whole scene. More importantly, for reasons such as limited position accuracy and missing attribute information, green plant models produced in this way cannot be matched one to one with the real urban vegetation, and so cannot be practically managed and used in smart-city projects.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide a modeling method and device for urban green plant twins, aiming to solve the technical problems that urban green plant modeling is inefficient and the resulting models cannot be finely managed.
The invention adopts the following technical scheme:
the modeling method of the urban green plant twins comprises the following steps:
the method comprises the following steps of S1, calculating and extracting vegetation urban green plant coverage and attributes according to an oblique photography model, and further generating urban green plant vector data which comprise a vegetation vector plane and single-plant vector point data;
and S2, according to the urban green vegetation vector data, performing elevation sampling on a vegetation vector plane and single plant vector point data in a three-dimensional engine, and generating a corresponding twin vegetation model according to final attributes.
In another aspect, a modeling device for urban green plant twins comprises:
a data processing unit for computing and extracting the coverage and attributes of urban green vegetation from an oblique photography model and generating urban green plant vector data comprising vegetation vector planes and single-plant vector point data;
and a model generation unit for elevation-sampling the vegetation vector planes and single-plant vector point data in a three-dimensional engine according to the urban green plant vector data and generating the corresponding twin vegetation model from the final attributes.
The beneficial effects of the invention are as follows: based on an urban OSGB oblique data model, the coverage of the urban green plant twin model is determined automatically, green plant attribute information is extracted, and attributes such as the position, type and height of each plant are determined algorithmically; the green plant twin model is then generated automatically in the three-dimensional engine from these data. The whole process requires no manual intervention, which greatly improves the production efficiency of urban green plant twins, and because the twin models are generated directly from data, business data can be attached to them automatically.
Drawings
FIG. 1 is a flow chart of a modeling method for urban green plant twins according to a first embodiment of the present invention;
FIG. 2 is a flowchart of step S1 provided by the first embodiment of the present invention;
FIG. 3 is a schematic diagram of the DeepLabV3+ neural network classifier provided in the first embodiment of the present invention;
FIG. 4 is a diagram illustrating a comparison between DOM images and urban green vegetation vector data obtained after vegetation data processing;
FIG. 5 is a flowchart of step S2 provided by the first embodiment of the present invention;
FIG. 6 is a schematic diagram of a comparison of the DOM image and the generated twin vegetation model;
FIG. 7 is a block diagram of a modeling device for urban green plant twins according to a second embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
The first embodiment is as follows:
As shown in FIG. 1, the modeling method for urban green plant twins provided by this embodiment comprises the following steps:
S1, computing and extracting the coverage and attributes of urban green vegetation from the oblique photography model, and generating urban green plant vector data comprising vegetation vector planes and single-plant vector point data.
In the prior art, oblique photography models come in various formats; this embodiment takes the OSGB oblique data model as an example. OSGB imagery is oblique imagery that is generally loaded as part of a larger three-dimensional model through OSGB files. OSGB is an internationally used three-dimensional scene format stored in blocks; because it carries no index, only one block can be displayed at a time, and the oblique photography data here only supports the OSGB organisation of the Smart3D format.
This step extracts the coverage and certain attributes of the urban green plants, such as plant position, type and height, from the oblique photography model to generate urban green plant vector data. As shown in FIG. 2, the specific process of this step is as follows:
s11, extracting image data and model data in the oblique photography model, converting the image data into a digital orthoimage, and converting the model data into a digital surface model.
A digital orthophoto map (DOM) is produced by applying pixel-wise radiometric correction, differential rectification and mosaicking to scanned digital aerial images or remote-sensing images (monochrome or colour) with the help of a digital elevation model (DEM), then clipping the result to a specified map extent; the product is a planar image with kilometre grid, map borders (inner and outer) and annotations.
A digital surface model (DSM) is a ground elevation model that includes the heights of surface features such as buildings, bridges and trees. Whereas a DEM contains only the terrain elevation and no other surface information, a DSM additionally contains the elevation of everything above the ground, which matters in fields where building heights are required.
In this step, the DOM image and the DSM model of the urban area to be modeled are obtained from the existing oblique data model.
S12, classifying the digital orthophoto map with a neural network classifier to obtain the pixels determined to be covered by vegetation and the vegetation types, yielding a raster classification image of the vegetation cover.
Various neural network models may be used in this step, for example the DeepLabV3+ neural network classifier. As shown in FIG. 3, the classifier uses an encoder-decoder structure in which the encoder consists of a feature-extraction backbone and a spatial pyramid pooling network. The backbone is an Xception network containing an entry flow, a middle flow and an exit flow: the entry flow is a series of convolution layers, first two 3 x 3 convolutions, then three residual blocks in which depthwise separable convolutions replace ordinary 3 x 3 convolutions; the middle flow is a stack of depthwise separable convolutions; and the exit flow consists of a residual block followed by three depthwise separable convolutions. The spatial pooling network processes the backbone output with a 1 x 1 convolution, several 3 x 3 atrous (dilated) convolutions and image-level pooling, concatenates the results and reduces the channels with a 1 x 1 convolution. The decoder resizes the intermediate backbone features and the atrous-convolution output to the same shape, concatenates them, applies a 3 x 3 convolution and upsamples the result to obtain the classification. In this step the network classifies the pixels of the DOM image to obtain the pixels determined to be covered by vegetation and their types, for example that the pixels of a given vegetated area belong to plane trees, and finally produces the raster classification image of the vegetation cover.
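As an illustration of this classification step, the following is a minimal sketch of pixel-wise vegetation classification of DOM tiles with a DeepLabV3-style network in PyTorch. It is not the patent's implementation: torchvision's model uses a ResNet rather than the Xception backbone described above, and the class list is hypothetical.

```python
# Minimal sketch: pixel-wise vegetation classification of a DOM tile with a
# DeepLabV3-style network. Assumes PyTorch + torchvision; the class list is
# illustrative only and the model would need to be trained on labelled DOM tiles.
import numpy as np
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

CLASSES = ["background", "tree", "shrub", "grass"]   # hypothetical label set

model = deeplabv3_resnet50(weights=None, num_classes=len(CLASSES))
model.eval()

def classify_tile(tile_rgb: np.ndarray) -> np.ndarray:
    """Classify one DOM tile (H, W, 3, uint8) into a raster of class indices."""
    x = torch.from_numpy(tile_rgb).float().permute(2, 0, 1).unsqueeze(0) / 255.0
    with torch.no_grad():
        logits = model(x)["out"]          # (1, num_classes, H, W)
    return logits.argmax(dim=1).squeeze(0).numpy().astype(np.uint8)
```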
S13, vectorizing the generated raster classification image to obtain vector data of the vegetation cover, namely the vegetation vector planes.
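A minimal sketch of this vectorization step using rasterio and shapely is given below; it assumes the classification raster uses 0 for non-vegetation, and the field name veg_class is illustrative.

```python
# Sketch of step S13: vectorize the vegetation-cover raster classification into
# polygon features (a "vegetation vector plane"). Assumes class index 0 means
# "no vegetation" in the classified GeoTIFF.
import rasterio
from rasterio.features import shapes
from shapely.geometry import shape

def raster_to_vegetation_polygons(classified_tif: str):
    with rasterio.open(classified_tif) as src:
        data = src.read(1)                    # uint8 raster of class indices
        mask = data > 0                       # keep only vegetation-covered pixels
        polys = [
            {"geometry": shape(geom), "veg_class": int(value)}
            for geom, value in shapes(data, mask=mask, transform=src.transform)
        ]
    return polys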
S14, extracting the bare-earth elevation from the digital surface model to obtain a digital terrain model, extracting the height information of each vegetation vector plane from the difference between the digital terrain model and the digital surface model, and keeping only the maximum and average vegetation heights for each vegetation vector plane.
From the DSM obtained in step S11, the bare-earth elevation is extracted to obtain a digital terrain model (DTM). In surveying and mapping, the DTM is used to draw contour lines, slope maps and perspective views, to produce orthophotos and to correct and measure maps; in remote sensing applications it can serve as auxiliary data for classification.
The height information of each vegetation vector plane can then be extracted by computing the difference between the DSM and the DTM. Only the maximum height and the average height of the vegetation on each vegetation vector plane are kept; they are used later to determine the scaling of the vegetation models.
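The DSM - DTM differencing and the per-polygon max/mean statistics could be sketched as follows with rasterio and rasterstats, assuming the DSM and DTM rasters are co-registered; this is an illustration, not the patent's code.

```python
# Sketch of step S14: derive a canopy-height raster as DSM - DTM and keep only the
# maximum and mean vegetation height per vegetation polygon.
import rasterio
from rasterstats import zonal_stats

def vegetation_heights(dsm_tif, dtm_tif, chm_tif, vegetation_polygons):
    """Write a canopy-height raster (DSM - DTM) and return per-polygon max/mean heights."""
    with rasterio.open(dsm_tif) as dsm, rasterio.open(dtm_tif) as dtm:
        chm = dsm.read(1).astype("float32") - dtm.read(1).astype("float32")
        profile = dsm.profile
    profile.update(dtype="float32", count=1)
    with rasterio.open(chm_tif, "w", **profile) as dst:
        dst.write(chm, 1)
    # only the maximum and average heights per vegetation polygon are retained
    return zonal_stats(vegetation_polygons, chm_tif, stats=["max", "mean"])
```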
S15, screening the vegetation vector planes and directly converting those smaller than an area threshold into single-plant vector point data.
Converting small vegetation vector planes directly into single-plant vector points improves the efficiency of the subsequent processing of the vector plane data. The vegetation vector planes are therefore screened against an area threshold, and those smaller than the threshold are converted into single-plant vector point data.
S16, for vegetation vector planes larger than or equal to the area threshold, intersecting them with the urban road data; the intersecting part is a covered road, which is sampled with uniformly spaced points to obtain street-tree points that are merged into the single-plant vector point data, while the part of the vegetation vector plane that does not intersect the road data is kept as the final vegetation vector plane.
In practice some vegetation lies on roads. For this case the vegetation vector planes are intersected with the urban road data to obtain the covered roads, which are then sampled directly with uniformly spaced points to obtain street-tree points that are merged into the single-plant vector point data. The remaining vegetation vector planes are the large vegetated areas that are not on roads.
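A GeoPandas sketch of steps S15-S16 follows. It assumes the road data are road-surface polygons in the same projected CRS as the vegetation planes; the area threshold and street-tree spacing are illustrative values, not taken from the patent.

```python
# Sketch of steps S15-S16: small vegetation polygons become single-plant points,
# polygons overlapping roads yield uniformly sampled street-tree points, and the
# non-overlapping remainder is kept as the final vegetation vector plane.
import numpy as np
import pandas as pd
import geopandas as gpd
from shapely.geometry import Point

AREA_THRESHOLD = 25.0   # m^2, assumed
TREE_SPACING = 6.0      # m between sampled street trees, assumed

def grid_points_inside(poly, spacing):
    """Uniformly sample points on a regular grid inside a polygon."""
    minx, miny, maxx, maxy = poly.bounds
    pts = [Point(x, y)
           for x in np.arange(minx, maxx, spacing)
           for y in np.arange(miny, maxy, spacing)]
    return [p for p in pts if poly.contains(p)]

def split_vegetation(veg: gpd.GeoDataFrame, roads: gpd.GeoDataFrame):
    small = veg[veg.geometry.area < AREA_THRESHOLD].copy()
    large = veg[veg.geometry.area >= AREA_THRESHOLD].copy()

    # small patches collapse directly to single-plant points
    small["geometry"] = small.geometry.representative_point()

    # vegetation overlapping roads -> uniformly sampled street-tree points
    covered = gpd.overlay(large, roads, how="intersection")
    street_pts = [p for poly in covered.geometry
                  for p in grid_points_inside(poly, TREE_SPACING)]
    street_trees = gpd.GeoDataFrame(geometry=street_pts, crs=veg.crs)

    # the non-overlapping remainder is the final vegetation vector plane
    final_plane = gpd.overlay(large, roads, how="difference")
    points = gpd.GeoDataFrame(pd.concat([small, street_trees], ignore_index=True), crs=veg.crs)
    return final_plane, points
```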
FIG. 4 compares the DOM image of the area near People's Square in Jingdezhen with the urban green vegetation vector data obtained after the vegetation data processing. It can be seen that the urban vegetation is extracted fairly completely and converted into vector data.
S2, according to the urban green plant vector data, elevation-sampling the vegetation vector planes and single-plant vector point data in a three-dimensional engine, and generating the corresponding twin vegetation model from the final attributes.
The urban green plant twin model is constructed in a three-dimensional engine. Based on the vegetation vector planes and single-plant vector points obtained from the vegetation data processing of the previous step, the plane data are sampled by vegetation type with a random point algorithm and thereby broken up into plant point locations. The two-dimensional plant points are elevation-sampled in the three-dimensional engine to obtain the ground elevation at each plant and thus converted into three-dimensional point data, and the model scaling is computed from the vegetation attributes. Finally, the single-plant model corresponding to each plant type is looked up and instantiated in the three-dimensional engine at the plant's position and scale, producing the green plant twin model. With reference to FIG. 5, the specific process is as follows:
and S21, inputting the vegetation vector plane and the single plant vector point data into a three-dimensional engine, and converting the space reference of the vegetation vector plane and the single plant vector point into a coordinate system of the three-dimensional engine.
And after the vegetation vector plane and the single plant vector point data are input into the three-dimensional engine for coordinate conversion, the modeling is conveniently carried out in the three-dimensional engine.
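A minimal sketch of such a coordinate conversion is shown below using pyproj; the source and target CRS codes and the engine origin are assumptions, since the actual engine coordinate system is project-specific.

```python
# Sketch of step S21: re-project vector coordinates into the 3D engine's local frame.
# The CRS codes and the local engine origin are assumptions standing in for whatever
# the twin scene is actually georeferenced to.
from pyproj import Transformer

# lon/lat (WGS84) -> CGCS2000 / 3-degree Gauss-Kruger CM 114E (assumed target CRS)
to_projected = Transformer.from_crs("EPSG:4326", "EPSG:4547", always_xy=True)
ENGINE_ORIGIN = (540000.0, 3250000.0)   # projected coordinates of the scene origin, assumed

def to_engine_xy(lon, lat):
    x, y = to_projected.transform(lon, lat)
    return x - ENGINE_ORIGIN[0], y - ENGINE_ORIGIN[1]
```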
S22, converting the vegetation vector planes into random vector point data.
When processing the vegetation vector planes, the planes are likewise converted into vector point data so that all data can be handled uniformly afterwards.
Since the vegetation cover can be regarded as being composed of randomly placed plants, this step uses an algorithm to convert the vegetation vector plane data into random vector points, one per plant. It is also assumed that there is a minimum spacing between plants, proportional to the size of an individual plant. Based on this principle, the step generates scattered points that follow this rule and uses them, instead of the original vector plane data, for planting the vegetation models. The specific algorithm is as follows (a Python sketch of the procedure is given after the list):
1) Set the minimum plant spacing R for the vegetation vector plane;
2) Obtain the bounding rectangle of the vegetation vector plane and partition it into a uniform grid whose cell side length is determined by R;
3) Create a random point array S and a queue W of points to be processed, pick a random point in the grid as the origin, and add it to both S and W;
4) While W is not empty, take the head point P0 of W and generate a random candidate point P1 around P0; traverse the points already stored in the 9 grid cells surrounding the cell of P1 and check whether the distances from P1 to all of them are larger than R; if so, add P1 to S and W. Repeat this n times for the head point P0, then remove P0 from W;
5) When W is finally empty, S is the resulting random point array.
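The following Python sketch implements the sampling procedure above with a grid-accelerated rejection test. The grid cell side length (set equal to R here so that the 3 x 3 neighbourhood check is sufficient), the candidate radius and the attempt count n are assumptions; the patent gives the cell side length only as an image formula.

```python
# Grid-accelerated random point sampling with a minimum spacing R (Poisson-disc style),
# following steps 1)-5) above. Cell size, candidate radius and n are assumptions.
import math
import random
from collections import deque

def random_plant_points(width, height, R, n=30, seed=0):
    """Points with pairwise spacing > R inside a width x height bounding rectangle."""
    rng = random.Random(seed)
    cell = R                                    # assumed grid cell side length
    grid = {}                                   # (col, row) -> points stored in that cell

    def cell_of(p):
        return int(p[0] / cell), int(p[1] / cell)

    def far_enough(p):
        c, r = cell_of(p)
        for dc in (-1, 0, 1):                   # the 9 cells around p's cell
            for dr in (-1, 0, 1):
                for q in grid.get((c + dc, r + dr), []):
                    if math.dist(p, q) <= R:
                        return False
        return True

    S, W = [], deque()                          # accepted points / points to process
    p0 = (rng.uniform(0, width), rng.uniform(0, height))
    S.append(p0); W.append(p0); grid.setdefault(cell_of(p0), []).append(p0)

    while W:
        base = W[0]                             # head point P0
        for _ in range(n):                      # n candidate points P1 around P0
            ang = rng.uniform(0, 2 * math.pi)
            rad = rng.uniform(R, 2 * R)
            cand = (base[0] + rad * math.cos(ang), base[1] + rad * math.sin(ang))
            if 0 <= cand[0] < width and 0 <= cand[1] < height and far_enough(cand):
                S.append(cand); W.append(cand)
                grid.setdefault(cell_of(cand), []).append(cand)
        W.popleft()                             # P0 has had its n attempts
    return S
```

In practice the accepted points would then be filtered to those falling inside the vegetation polygon itself rather than only its bounding rectangle.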
S23, merging the random vector point data and the single-plant vector points to obtain the two-dimensional plant point data.
S24, performing a top-down depth rendering of the terrain from a set height H in the three-dimensional engine to obtain a terrain depth buffer, sampling it at the positions of the two-dimensional plant vector points to obtain an observed depth d, and hence the ground elevation h = H - d of each plant.
When rendering in the three-dimensional engine, the height H is set first and the scene is rendered from that height in a top-down view. The depth buffer is sampled at the position of each two-dimensional plant vector point to obtain the observed depth d, and the final elevation of the plant is h = H - d.
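As an illustration, the elevation lookup can be expressed as a simple array sampling once the depth render has been read back; the depth array, render bounds and camera height H below stand in for the engine-specific readback and are assumptions.

```python
# Sketch of step S24: recover ground elevation at each 2D plant point from a
# top-down depth render that has been read back as a NumPy array.
import numpy as np

def plant_elevations(points_xy, depth, bounds, H):
    """points_xy: (N, 2) world XY; depth: (rows, cols) observed depths d from height H;
    bounds: (minx, miny, maxx, maxy) of the rendered area. Returns h = H - d per point."""
    minx, miny, maxx, maxy = bounds
    rows, cols = depth.shape
    xs = ((points_xy[:, 0] - minx) / (maxx - minx) * (cols - 1)).astype(int)
    ys = ((maxy - points_xy[:, 1]) / (maxy - miny) * (rows - 1)).astype(int)  # row 0 = north edge
    d = depth[ys, xs]
    return H - d
```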
S25, generating the height of each plant from a normal distribution based on the maximum and average vegetation heights, and computing the ratio of each plant's height to the height of the single-plant model of its vegetation type to obtain the scaling factor S of that plant.
Elevation refers to the absolute height of the ground at the plant's location, whereas height refers to the height of the plant itself. Generating each plant's height from a normal distribution parameterised by the maximum and average heights of its vegetation class avoids measuring the height of every plant individually and speeds up processing. Once the elevation, height and point location of a plant are known, its exact position in space is determined.
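A small sketch of the height sampling and scale computation is shown below; the spread estimate (max - mean) / 3 and the clipping range are assumptions, since the patent only states that a normal distribution is used.

```python
# Sketch of step S25: draw per-plant heights from a normal distribution built from the
# polygon's mean/max vegetation height, then derive the uniform scale factor against
# the height of the source single-plant model.
import numpy as np

def plant_scales(mean_h, max_h, model_height, count, rng=np.random.default_rng(0)):
    sigma = max(1e-3, (max_h - mean_h) / 3.0)          # assumed spread
    heights = rng.normal(mean_h, sigma, size=count)
    heights = np.clip(heights, 0.5 * mean_h, max_h)    # keep heights plausible (assumed bounds)
    return heights / model_height                      # scaling factor S per plant
```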
S26, setting a random rotation R for each plant.
To make the planting look natural, each plant is given a random rotation R about the Z axis (the height direction). If no rotation is required, R is the identity.
S27, for each plant, determining the unique spatial transformation matrix M = T·R·S from its position T, random rotation R and scaling S, and generating the twin vegetation model in the three-dimensional engine from this matrix and the corresponding single-plant model.
Each vegetation type has a corresponding single-plant model. The position, orientation and size of every plant are given by its spatial transformation matrix, and the twin vegetation model is finally generated from the corresponding single-plant models.
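For illustration, the per-plant transform can be composed from 4 x 4 matrices as below; how the resulting matrix is handed to the engine (for example as an instanced-mesh transform) depends on the engine API and is not specified here.

```python
# Sketch of steps S26-S27: compose the per-plant transform M = T * R * S from
# translation, rotation about the Z/height axis, and uniform scale.
import numpy as np

def plant_transform(position, yaw_rad, scale):
    T = np.eye(4); T[:3, 3] = position                  # position t = (x, y, h)
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    R = np.eye(4); R[:2, :2] = [[c, -s], [s, c]]        # random rotation about Z
    S = np.diag([scale, scale, scale, 1.0])             # uniform scale from plant height
    return T @ R @ S

# Hypothetical example: a plane tree at (120.5, 48.2) with ground elevation 31.7 m
M = plant_transform((120.5, 48.2, 31.7), yaw_rad=np.deg2rad(137.0), scale=1.25)
```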
The final result is shown in FIG. 6: the left side is the orthophoto and the right side is the generated urban green plant twin model. The comparison shows that the method reproduces the urban vegetation cover well.
The second embodiment:
another embodiment of the present invention provides a modeling apparatus for urban green plant twins, as shown in fig. 7, including:
a data processing unit 1 for computing and extracting the coverage and attributes of urban green vegetation from the oblique photography model and generating urban green plant vector data comprising vegetation vector planes and single-plant vector point data;
and a model generation unit 2 for elevation-sampling the vegetation vector planes and single-plant vector point data in a three-dimensional engine according to the urban green plant vector data and generating the corresponding twin vegetation model from the final attributes.
These two functional units implement steps S1 and S2 of the first embodiment: the data processing unit generates the urban green plant vector data, and the model generation unit then generates the twin vegetation model from the vegetation vector planes and single-plant vector point data.
The data processing unit comprises:
a data extraction module for extracting the image data and model data from the oblique photography model, converting the image data into a digital orthophoto map and the model data into a digital surface model;
a data classification module for classifying the digital orthophoto map with a neural network classifier, obtaining the pixels determined to be covered by vegetation and the vegetation types, and producing a raster classification image of the vegetation cover;
a vectorization module for vectorizing the generated raster classification image to obtain vector data of the vegetation cover, namely the vegetation vector planes;
a height extraction module for extracting the bare-earth elevation from the digital surface model to obtain a digital terrain model, extracting the height information of each vegetation vector plane from the difference between the digital terrain model and the digital surface model, and keeping only the maximum and average vegetation heights for each vegetation vector plane;
a data conversion module for screening the vegetation vector planes and directly converting those smaller than an area threshold into single-plant vector point data;
and a road correction module for intersecting vegetation vector planes larger than or equal to the area threshold with the urban road data, uniformly sampling points on the vegetation-covered roads to obtain street-tree points that are merged into the single-plant vector point data, and keeping the part of each vegetation vector plane that does not intersect the road data as the final vegetation vector plane.
The model generation unit comprises:
a data input module for inputting the vegetation vector planes and single-plant vector point data into the three-dimensional engine and converting their spatial reference into the coordinate system of the three-dimensional engine;
a plane-to-point module for converting the vegetation vector planes into random vector point data;
a data merging module for merging the random vector point data and the single-plant vector points to obtain the two-dimensional plant point data;
an elevation calculation module for performing a top-down depth rendering of the terrain from a set height H in the three-dimensional engine to obtain a terrain depth buffer, sampling it at the positions of the two-dimensional plant vector points to obtain the observed depth d, and hence the plant elevation h = H - d;
a scaling calculation module for generating the height of each plant from a normal distribution based on the maximum and average vegetation heights and computing the ratio of each plant's height to the height of the single-plant model to obtain the scaling factor S of that plant;
a rotation setting module for setting a random rotation R for each plant;
and a model generation module for determining, for each plant, the unique spatial transformation matrix M = T·R·S from its position T, random rotation R and scaling S, and generating the twin vegetation model in the three-dimensional engine from this matrix and the corresponding single-plant model.
The module structure of these two functional units corresponds one to one with the detailed implementation of the first embodiment, which is therefore not repeated here.
The modeling method and device for urban green plant twins provided by the invention greatly improve the production efficiency of urban green plant twin models and reduce the required art resources; because the whole production flow is data-driven, the authenticity of the data is preserved, real business data can be attached, and the models become manageable and usable.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (7)

1. A modeling method for urban green plant twins is characterized by comprising the following steps:
S1, computing and extracting the coverage and attributes of urban green vegetation from an oblique photography model, and generating urban green plant vector data comprising vegetation vector planes and single-plant vector point data;
and S2, according to the urban green plant vector data, elevation-sampling the vegetation vector planes and single-plant vector point data in a three-dimensional engine, and generating the corresponding twin vegetation model from the final attributes.
2. The modeling method for urban green plant twins according to claim 1, wherein the specific process of step S1 is as follows:
extracting the image data and model data from the oblique photography model, converting the image data into a digital orthophoto map and the model data into a digital surface model;
classifying the digital orthophoto map with a neural network classifier to obtain the pixels determined to be covered by vegetation and the vegetation types, and producing a raster classification image of the vegetation cover;
vectorizing the generated raster classification image to obtain vector data of the vegetation cover, namely the vegetation vector planes;
extracting the bare-earth elevation from the digital surface model to obtain a digital terrain model, extracting the height information of each vegetation vector plane from the difference between the digital terrain model and the digital surface model, and keeping only the maximum and average vegetation heights for each vegetation vector plane;
screening the vegetation vector planes and directly converting those smaller than an area threshold into single-plant vector point data;
and for vegetation vector planes larger than or equal to the area threshold, intersecting them with the urban road data, the intersecting part being a covered road, uniformly sampling points on the covered road to obtain street-tree points that are merged into the single-plant vector point data, and keeping the part of each vegetation vector plane that does not intersect the road data as the final vegetation vector plane.
3. The modeling method for urban green plant twins according to claim 2, wherein the specific process of step S2 is as follows:
inputting the vegetation vector planes and single-plant vector point data into a three-dimensional engine and converting their spatial reference into the coordinate system of the three-dimensional engine;
converting the vegetation vector planes into random vector point data;
merging the random vector point data and the single-plant vector points to obtain two-dimensional plant point data;
performing a top-down depth rendering of the terrain from a set height H in the three-dimensional engine to obtain a terrain depth buffer, sampling it at the positions of the two-dimensional plant vector points to obtain an observed depth d, and hence the plant elevation h = H - d;
generating the height of each plant from a normal distribution based on the maximum and average vegetation heights, and computing the ratio of each plant's height to the height of the single-plant model of its vegetation type to obtain the scaling factor S of the plant;
setting a random rotation R for each plant;
and for each plant, determining the unique spatial transformation matrix M = T·R·S from its position T, random rotation R and scaling S, and generating the twin vegetation model in the three-dimensional engine from this matrix and the corresponding single-plant model.
4. The modeling method for urban green plant twins according to claim 3, wherein the vegetation vector planes are converted into random vector point data by the following specific process:
setting the minimum plant spacing R of the vegetation vector plane;
obtaining the bounding rectangle of the vegetation vector plane and partitioning it into a uniform grid whose cell side length is determined by R;
creating a random point array S and a queue W of points to be processed, taking a random point in the grid as the origin, and adding it to S and W;
while W is not empty, taking the head point P0 of W, generating a random candidate point P1 around P0, traversing the points already stored in the 9 grid cells surrounding the cell of P1, and judging whether the distances from P1 to all of them are larger than R; if so, adding P1 to S and W; repeating this operation n times for the head point P0 and then removing P0 from W;
and finally, when W is empty, S is the resulting random point array.
5. A modeling device for urban green plant twins, characterized in that the modeling device comprises:
a data processing unit for computing and extracting the coverage and attributes of urban green vegetation from an oblique photography model and generating urban green plant vector data comprising vegetation vector planes and single-plant vector point data;
and a model generation unit for elevation-sampling the vegetation vector planes and single-plant vector point data in a three-dimensional engine according to the urban green plant vector data and generating the corresponding twin vegetation model from the final attributes.
6. The modeling device for urban green plant twins according to claim 5, wherein the data processing unit comprises:
a data extraction module for extracting the image data and model data from the oblique photography model, converting the image data into a digital orthophoto map and the model data into a digital surface model;
a data classification module for classifying the digital orthophoto map with a neural network classifier, obtaining the pixels determined to be covered by vegetation and the vegetation types, and producing a raster classification image of the vegetation cover;
a vectorization module for vectorizing the generated raster classification image to obtain vector data of the vegetation cover, namely the vegetation vector planes;
a height extraction module for extracting the bare-earth elevation from the digital surface model to obtain a digital terrain model, extracting the height information of each vegetation vector plane from the difference between the digital terrain model and the digital surface model, and keeping only the maximum and average vegetation heights for each vegetation vector plane;
a data conversion module for screening the vegetation vector planes and directly converting those smaller than an area threshold into single-plant vector point data;
and a road correction module for intersecting vegetation vector planes larger than or equal to the area threshold with the urban road data, uniformly sampling points on the vegetation-covered roads to obtain street-tree points that are merged into the single-plant vector point data, and keeping the part of each vegetation vector plane that does not intersect the road data as the final vegetation vector plane.
7. The modeling device for urban green plant twins according to claim 5, wherein the model generation unit comprises:
a data input module for inputting the vegetation vector planes and single-plant vector point data into the three-dimensional engine and converting their spatial reference into the coordinate system of the three-dimensional engine;
a plane-to-point module for converting the vegetation vector planes into random vector point data;
a data merging module for merging the random vector point data and the single-plant vector points to obtain two-dimensional plant point data;
an elevation calculation module for performing a top-down depth rendering of the terrain from a set height H in the three-dimensional engine to obtain a terrain depth buffer, sampling it at the positions of the two-dimensional plant vector points to obtain an observed depth d, and hence the plant elevation h = H - d;
a scaling calculation module for generating the height of each plant from a normal distribution based on the maximum and average vegetation heights and computing the ratio of each plant's height to the height of the single-plant model to obtain the scaling factor S of that plant;
a rotation setting module for setting a random rotation R for each plant;
and a model generation module for determining, for each plant, the unique spatial transformation matrix M = T·R·S from its position T, random rotation R and scaling S, and generating the twin vegetation model in the three-dimensional engine from this matrix and the corresponding single-plant model.
CN202211330591.XA 2022-10-28 2022-10-28 Modeling method and device for urban green plant twins Active CN115471634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211330591.XA CN115471634B (en) 2022-10-28 2022-10-28 Modeling method and device for urban green plant twins

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211330591.XA CN115471634B (en) 2022-10-28 2022-10-28 Modeling method and device for urban green plant twins

Publications (2)

Publication Number Publication Date
CN115471634A true CN115471634A (en) 2022-12-13
CN115471634B CN115471634B (en) 2023-03-24

Family

ID=84336432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211330591.XA Active CN115471634B (en) 2022-10-28 2022-10-28 Modeling method and device for urban green plant twins

Country Status (1)

Country Link
CN (1) CN115471634B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093466A (en) * 2013-01-21 2013-05-08 武汉大学 Building three-dimensional change detection method based on LiDAR point cloud and image
US20150269314A1 (en) * 2014-03-20 2015-09-24 Rudjer Boskovic Institute Method and apparatus for unsupervised segmentation of microscopic color image of unstained specimen and digital staining of segmented histological structures
CN104077806A (en) * 2014-07-10 2014-10-01 天津中科遥感信息技术有限公司 Automatic separate extraction method based on city building three-dimensional model
CN107679229A (en) * 2017-10-20 2018-02-09 东南大学 The synthetical collection and analysis method of city three-dimensional building high-precision spatial big data
CN111047695A (en) * 2019-12-03 2020-04-21 中国科学院地理科学与资源研究所 Method for extracting height spatial information and contour line of urban group
WO2022213218A1 (en) * 2021-04-08 2022-10-13 The Governing Council Of The University Of Toronto System and method for vegetation detection from aerial photogrammetric multispectral data
CN114417489A (en) * 2022-03-30 2022-04-29 宝略科技(浙江)有限公司 Building base contour refinement extraction method based on real-scene three-dimensional model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
周帆扬: "Exploration of the Application of 3D Virtual VR Technology in Environmental Art Design", Journal of Nanchang Normal University (《南昌师范学院学报》) *
莫寅: "Large-Scale Topographic Mapping Method Based on UAV Oblique Photogrammetry", Beijing Surveying and Mapping (《北京测绘》) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310151A (en) * 2023-05-24 2023-06-23 山东捷瑞信息技术产业研究院有限公司 Vector model conversion method, system, device and medium based on digital twin
CN116310151B (en) * 2023-05-24 2023-08-08 山东捷瑞信息技术产业研究院有限公司 Vector model conversion method, system, device and medium based on digital twin
CN116342783A (en) * 2023-05-25 2023-06-27 吉奥时空信息技术股份有限公司 Live-action three-dimensional model data rendering optimization method and system
CN116342783B (en) * 2023-05-25 2023-08-08 吉奥时空信息技术股份有限公司 Live-action three-dimensional model data rendering optimization method and system

Also Published As

Publication number Publication date
CN115471634B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
CN115471634B (en) Modeling method and device for urban green plant twins
KR101165534B1 (en) Geospatial modeling system providing simulated tree trunks and branches for groups of tree crown vegetation points and related methods
CN105677890B (en) A kind of green amount numerical map production in city and display methods
CN112257597B (en) Semantic segmentation method for point cloud data
CN109883401B (en) Method and system for measuring visual field of city mountain watching
CN105118090B (en) A kind of point cloud filtering method of adaptive complicated landform structure
CN108921943B (en) Road three-dimensional model modeling method based on lane-level high-precision map
CN111784833A (en) WebGL-based flood evolution situation three-dimensional dynamic visualization display method
CN109671149A (en) Landform sketch map automatic drafting method based on DEM
CN105005580B (en) A kind of method for showing reservoir landform and device thereof
CN114049462B (en) Three-dimensional model monomer method and device
CN105354882A (en) Method for constructing big data architecture based three-dimensional panoramic display platform for large-spatial-range electricity transmission
CN115861527A (en) Method and device for constructing live-action three-dimensional model, electronic equipment and storage medium
CN116342783B (en) Live-action three-dimensional model data rendering optimization method and system
CN114926602B (en) Building singleization method and system based on three-dimensional point cloud
CN111754618A (en) Object-oriented live-action three-dimensional model multilevel interpretation method and system
CN114119884A (en) Building LOD1 model construction method based on high-score seven-satellite image
CN116402973A (en) Oblique photography model optimization method and system based on LOD reconstruction
CN116051758A (en) Height information-containing landform map construction method for outdoor robot
Andújar et al. Inexpensive reconstruction and rendering of realistic roadside landscapes
CN116645321B (en) Vegetation leaf inclination angle calculation statistical method and device, electronic equipment and storage medium
CN112687007A (en) LOD technology-based stereo grid map generation method
Xu et al. Methods for the construction of DEMs of artificial slopes considering morphological features and semantic information
CN114463494B (en) Automatic topographic feature line extraction method
CN113838199B (en) Three-dimensional terrain generation method

Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant