CN115272591B - Geographic entity polymorphic expression method based on three-dimensional semantic model

Geographic entity polymorphic expression method based on three-dimensional semantic model

Info

Publication number: CN115272591B
Application number: CN202210504279.1A
Authority: CN (China)
Prior art keywords: image, building, primitive, point, aerial vehicle
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN115272591A
Inventors: 刘俊伟, 邬丽娟, 杨文雪
Current and original assignee: Terry Digital Technology Beijing Co ltd
Application filed by Terry Digital Technology Beijing Co ltd; priority to CN202210504279.1A


Classifications

    • G06T 17/05: Three-dimensional [3D] modelling, e.g. data description of 3D objects; geographic models
    • G06F 30/13: Computer-aided design [CAD]; geometric CAD; architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • G06T 17/10: Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G06T 3/4038: Scaling of whole images or parts thereof; image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 7/337: Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods involving reference images or patches
    • G06T 7/40: Analysis of texture
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/20084: Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Civil Engineering (AREA)
  • Architecture (AREA)
  • Structural Engineering (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a geographic entity polymorphic expression method based on a three-dimensional semantic model, which comprises the following steps: acquiring an oblique-photography three-dimensional model based on rotation and depression-angle adjustment, registering the three-dimensional model with a pre-model, and realizing primitive and semantic analysis; defining the primitives and semantic expressions of the models at different levels through the OGC CityGML standard, and establishing the primitive and semantic mapping table of each level; reconstructing the model based on the primitive and semantic mapping relation to derive the models of different levels, where each object is associated with a LoD label, that is, the same object can have model expressions corresponding to different LoDs, all associated with the same entity, and the corresponding display mode is selected at different levels of detail to realize salient expression of the entity under the corresponding LoD. The application derives a multi-level structure of a three-dimensional semantic model with accurate and efficient modeling, and satisfies the expression requirements of geographic entities at their different levels.

Description

Geographic entity polymorphic expression method based on three-dimensional semantic model
Technical Field
The application relates to an expression method for a three-dimensional semantic model, in particular to a geographic entity polymorphic expression method based on a three-dimensional semantic model, and belongs to the field of three-dimensional semantic modeling.
Background
Building three-dimensional modeling is one of the key technologies for realizing smart cities and has been widely applied in fields such as city planning, disaster assessment, vehicle navigation, virtual tourism and cultural heritage protection. Although domestic institutions have established a large number of building three-dimensional models and virtual display systems, the modeling results are mainly oriented to visual display: their semantic expression capacity is poor, their utilization within three-dimensional geographic information systems (geographic information system, GIS) is low, and deep applications such as building component information query, energy consumption analysis and fine-grained management are difficult to satisfy. In addition, for lack of uniform modeling standards, many three-dimensional data formats are mutually incompatible and poorly reusable, making interoperation and data sharing of building three-dimensional models difficult. How to enhance the semantic features of three-dimensional models, reduce their production and maintenance costs, and realize data sharing and interoperation has therefore become a pressing problem in current smart city construction.
The City Geography Markup Language (CityGML) is an international open standard for the storage and exchange of virtual three-dimensional city models introduced by the Open Geospatial Consortium (OGC), and is also the common semantic information model in the field of three-dimensional GIS. The CityGML model emphasizes the consistency of geometric, topological and semantic expression, overcomes the shortcomings of traditional three-dimensional models in data sharing and interoperation, and enhances the reusability of three-dimensional models; it is therefore widely applied in urban planning, building illumination estimation, energy demand analysis, shadow analysis, noise propagation estimation, three-dimensional cadastre and facility management. For the three-dimensional expression of buildings, CityGML not only defines semantic information for parts such as roofs, wall surfaces, ground, doors, windows and rooms, but also adopts 5 levels of LoD for multi-scale expression from simple to complex. The advent of CityGML has created an opportunity for the widespread use and sharing of three-dimensional geographic information.
However, the prior art offers only a geometric representation of a specific building as a whole, not a selective presentation of different primitives for each selectable road segment and each building, let alone a refined, combinable representation of the individual primitives. On the other hand, stripping primitives from building drawings is often infeasible in practice. First, not all buildings have original software drawings; old buildings predating the popularization of CAD software do not, and some manually drawn drawings risk being lost, especially for buildings older than the middle of the last century. Second, collecting such drawings requires coordinated responses from many parts of society and can hardly be completed in a short time. It is therefore desirable to re-model geographic entities as efficiently as possible.
Disclosure of Invention
In order to solve these problems, the invention adopts multi-level LoD labels to display the primitives hierarchically. The invention provides a geographic entity polymorphic expression method based on a three-dimensional semantic model, which mainly comprises the following steps:
S1, establishing a geospatial rectangular coordinate system E, constructing a three-dimensional model based on a rotary oblique photographing device and registering it at 1:1 scale with a pre-model; collecting internal image maps of buildings and registering the internal facilities in them with the corresponding internal spaces of the buildings in the three-dimensional model to form a fifth primitive;
Wherein the internal facilities include stairways, rooms, office furniture and household furniture.
S2, extracting the part of the three-dimensional model that coincides with a building roof in the pre-model as a first primitive, taking the building part of the three-dimensional model remaining after this extraction as a second primitive, and taking the projection of the first primitive and/or the building ground under E as a third primitive; extracting the surface texture features of the first and second primitives in the three-dimensional model as a fourth primitive; extracting the building windows, doors and balconies in the three-dimensional model to form a sixth primitive;
The range of each primitive is delimited through the pre-model, and each primitive is extracted from the three-dimensional model. It should be emphasized that, owing to resolution differences, the point cloud map and the oblique photographic image may not completely cover each other within the same class of primitive; in such cases a complete three-dimensional model part should be extracted on the basis of the overlapping part to form the corresponding primitive.
S3, defining the primitives and semantic expressions of the models at different levels through the OGC CityGML standard, and establishing the primitive and semantic mapping table of each level;
S4, reconstructing the model based on the primitive and semantic mapping relation to derive the models of different levels, where each primitive is associated with its own LoD label; that is, the same building object has model expressions corresponding to different LoD labels, all associated with the same corresponding geographic entity, and the corresponding display mode is selected at different levels of detail to realize salient expression of the entity under the corresponding LoD.
Regarding S1
The construction of the pre-model in S1 specifically comprises the following steps:
S1-1, recording the image map of a specified area by unmanned aerial vehicle aerial photography while acquiring the aviation LIDAR point cloud map, obtaining the geographic entity data;
S1-2, registering the image map of the specified area taken by the unmanned aerial vehicle with the LIDAR point cloud map to obtain the registration image of the specified area; then, according to the elevation information of the building top surfaces in the point cloud map, vertically translating the current building center point along the Z axis of coordinate system E so as to splice each building's boundary cover M in the image map to the elevation of its top surface, completing the three-dimensional model of the current specified area;
it should be understood that, when the image map is imported into the geographic image software, it is placed parallel to the XOY coordinate plane, so that the elevation positioning of each physical point is realized by Z-axis translation according to the elevation distribution.
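A minimal sketch of this elevation positioning, assuming the footprint polygons and the LIDAR points are already expressed under E; the array layouts, the bounding-box point selection and the use of the median as the roof elevation are illustrative assumptions, not the patent's prescribed procedure:

```python
import numpy as np

def lift_footprint_to_roof(footprint_xy: np.ndarray,
                           cloud_xyz: np.ndarray) -> np.ndarray:
    """footprint_xy: (N, 2) boundary cover M vertices in the XOY plane of E.
    cloud_xyz: (M, 3) LIDAR points under E; the points over the footprint's
    bounding box (assumed non-empty) supply the roof elevation."""
    xmin, ymin = footprint_xy.min(axis=0)
    xmax, ymax = footprint_xy.max(axis=0)
    over = ((cloud_xyz[:, 0] >= xmin) & (cloud_xyz[:, 0] <= xmax) &
            (cloud_xyz[:, 1] >= ymin) & (cloud_xyz[:, 1] <= ymax))
    roof_z = np.median(cloud_xyz[over, 2])      # robust roof elevation
    z = np.full((len(footprint_xy), 1), roof_z)
    return np.hstack([footprint_xy, z])         # footprint lifted along Z
```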
S1-3, selecting several other specified areas and repeating steps S1-1 to S1-2 to obtain the registration images of the selected areas, thereby building the three-dimensional model and completing the pre-model.
Wherein, the S1-1 specifically comprises the following steps:
p1, setting flight routes of the unmanned aerial vehicle in the specified area and the other specified areas and exposure time points of an aerial photographing device on the unmanned aerial vehicle;
P2, the unmanned aerial vehicle takes off and flies along the flight route while acquiring images at the exposure time points, obtaining a plurality of image maps; the aviation LIDAR point cloud map is acquired synchronously;
preferably, the flight route is composed of a plurality of straight line segments, in which case:
if the specified area in P1 is a rectangular area, the exposure time points are set as follows: the image acquisition range of the unmanned aerial vehicle on the flight route is a rectangular area R; after the current image is acquired, the moment at which the unmanned aerial vehicle has flown one R width in the flight direction is selected as the next exposure time point; when the upper boundary of R coincides with the upper boundary of the specified area, or passes beyond it in the flight direction, the unmanned aerial vehicle turns, shifts left by one R length, and continues acquiring images in reverse flight, the exposure time points being selected as in forward flight; when the lower boundary of R coincides with the lower boundary of the specified area, or passes beyond it in the flight direction, the unmanned aerial vehicle turns again, shifts right by one R length, and resumes forward flight with the same selection of exposure time points; cycling in this way completes the image acquisition of the whole specified area.
If the specified area is circular or elliptical, the minimum circumscribed rectangle of the circle or ellipse is first constructed, the exposure time points are set on the basis of this rectangle in the same way as for a rectangular specified area, and the images are acquired in the same way;
if the specified area has another shape, a circumscribed rectangle is likewise constructed first and the exposure time points are set on its basis, with the same setting and acquisition modes as for a rectangular specified area; here the four sides of a rectangle are moved toward the area and stopped as soon as tangent points or intersection points with the area appear, the rectangle then being the circumscribed rectangle.
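The exposure-point schedule over the (circumscribed) rectangle can be sketched as follows; the footprint dimensions of R, the coordinate origin and all names are assumptions, and the sketch only reproduces the back-and-forth pattern described above:

```python
import math

def exposure_points(area_w, area_h, rw, rl):
    """Centers of R for a boustrophedon scan of an area_w x area_h
    rectangle (origin at its lower-left corner). Flight runs along +/-y;
    exposures are one R width (rw) apart, adjacent passes one R length
    (rl) apart, alternating forward and reverse flight."""
    n_pass = math.ceil(area_w / rl)          # lateral shifts by one R length
    n_exp = math.ceil(area_h / rw)           # exposures per pass
    pts = []
    for p in range(n_pass):
        ks = range(n_exp) if p % 2 == 0 else range(n_exp - 1, -1, -1)
        for k in ks:                         # reverse flight on odd passes
            pts.append((p * rl + rl / 2, k * rw + rw / 2))
    return pts

print(len(exposure_points(60, 80, 10, 10)))  # 48 exposures, as in embodiment 1
```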
The synchronous acquisition of the aerial LIDAR point cloud map in P2 is performed as follows: the point cloud scanning program is started synchronously with the exposure time points, and the whole of the specified area is scanned from the initial moment of the flight;
the S1-2 specifically comprises the following steps:
P3, selecting the positioning points of the image map and of the global scanning point cloud map;
P4, splicing the acquired image maps in the order of their exposure times along the flight route to obtain the spliced image map, and overlapping the synchronously acquired aviation LIDAR point cloud map with the spliced image map according to the one-to-one correspondence of the positioning points, so as to finish the registration.
Wherein, in P3, two positioning points are set in the image map and two in the global scanning point cloud map, each positioning point in the point cloud map having the same coordinates under E as its corresponding positioning point in the image map.
In P3, preferably, one positioning point in the image map coincides with the projection onto the XOY plane of E of the unmanned aerial vehicle's position at the exposure time point, and the other coincides with the projection of a vertex of R onto the XOY plane of E.
In P4, preferably, before the acquired image maps are spliced in the order of their exposure times along the flight route, the image portions exceeding the specified area are deleted.
Overlapping the synchronously acquired aviation LIDAR point cloud map with the spliced image map according to the one-to-one correspondence of the positioning points specifically comprises:
overlapping the two positioning points in the global scanning point cloud map with the positioning points of the same coordinates in the spliced image map to finish the registration;
the superposition operation specifically consists of importing the global scanning point cloud map into the completed spliced image map in the geographic image software, and performing at least one of translation, rotation and scaling so that positioning points with the same coordinates under the established E coincide.
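Since the two anchor-point pairs fully determine a 2D similarity transform (translation, rotation, uniform scale), the superposition can be sketched as follows; this is a minimal construction under assumed names, not the geographic image software's actual operation:

```python
import numpy as np

def similarity_from_anchors(p1, p2, q1, q2):
    """Map the point-cloud anchor pair (p1, p2) onto the image anchors
    (q1, q2); returns (scale, rotation matrix R, translation t)."""
    p1, p2, q1, q2 = map(np.asarray, (p1, p2, q1, q2))
    vp, vq = p2 - p1, q2 - q1
    s = np.linalg.norm(vq) / np.linalg.norm(vp)          # scaling
    ang = np.arctan2(vq[1], vq[0]) - np.arctan2(vp[1], vp[0])  # rotation
    R = np.array([[np.cos(ang), -np.sin(ang)],
                  [np.sin(ang),  np.cos(ang)]])
    t = q1 - s * R @ p1                                  # translation
    return s, R, t   # apply to every point x as: x' = s * R @ x + t
```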
The acquisition of the current building center point in S1-2 comprises the following steps:
P5, based on the registration image, using a VGG-16 without added layers as the CNN backbone network to extract a series of feature maps from its different convolution layers, the feature maps being 1/2-1/10, preferably 1/8, of the input image size;
meanwhile, constructing a feature pyramid from different layers of the CNN backbone network through the feature pyramid network (FPN) algorithm and predicting the bounding boxes of a plurality of buildings,
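A sketch of such a backbone, assuming PyTorch/torchvision: a VGG-16 feature extractor truncated after its third pooling stage, so that the feature map is 1/8 of the input size. The truncation point and the omitted FPN wiring are illustrative assumptions:

```python
import torch
import torchvision

# VGG-16 without its classifier head, cut after pool3 (three 2x downsamplings)
backbone = torchvision.models.vgg16(weights=None).features[:17]
x = torch.randn(1, 3, 512, 512)              # a registration-image tile
fmap = backbone(x)
print(fmap.shape)                            # torch.Size([1, 256, 64, 64]) = 1/8
```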
p6, for each building in the plurality of buildings, obtaining a local feature map F of the building by utilizing a RoISlign algorithm for the feature map obtained by the series of different convolution layers and the frame of the corresponding building;
p7, adopting convolution layer processing to form a polygonal boundary cover M for the local feature map F of each building, and then utilizing the convolution layer processing to form P prediction vertexes of the boundary cover M;
and P8, among the P predicted vertices, selecting the point with the largest or smallest abscissa or ordinate as the first calibration point (if several points share that extreme abscissa or ordinate, the one among them with the largest or smallest ordinate or abscissa is taken); computing the distances from the first calibration point to the remaining P-1 points in the clockwise or counterclockwise path order connecting the predicted points, and connecting the first calibration point with the farthest of them; correspondingly selecting the adjacent vertex predicted on the boundary cover M with the shortest distance to the first calibration point as the second calibration point and connecting it with its farthest point in the same way; the intersection point of the two connecting segments is obtained as the current building center point of each building.
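P8 reduces to a small geometric computation; the following sketch, under the assumption of counterclockwise vertex ordering and generic (non-parallel) chords, shows one way to realize it:

```python
import numpy as np

def line_intersection(a1, a2, b1, b2):
    """Intersection of the lines through (a1, a2) and (b1, b2);
    assumes the lines are not parallel."""
    da, db, d0 = a2 - a1, b2 - b1, b1 - a1
    t = (d0[0] * db[1] - d0[1] * db[0]) / (da[0] * db[1] - da[1] * db[0])
    return a1 + t * da

def building_center(verts: np.ndarray) -> np.ndarray:
    """verts: (P, 2) predicted vertices in counterclockwise path order."""
    n = len(verts)
    # first calibration point: largest abscissa, ties broken by ordinate
    i1 = max(range(n), key=lambda i: (verts[i, 0], verts[i, 1]))
    far1 = max(range(n), key=lambda i: np.linalg.norm(verts[i] - verts[i1]))
    # second calibration point: the nearer of i1's two boundary neighbours
    nbrs = [(i1 - 1) % n, (i1 + 1) % n]
    i2 = min(nbrs, key=lambda i: np.linalg.norm(verts[i] - verts[i1]))
    far2 = max(range(n), key=lambda i: np.linalg.norm(verts[i] - verts[i2]))
    return line_intersection(verts[i1], verts[far1], verts[i2], verts[far2])
```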
Preferably, in the pre-model obtained in S1-3, the road center points are obtained by fitting the road contours in the specified area with an artificial neural network, which specifically comprises the following steps:
P9, calling the registration image and generating successive road nodes through a node generator consisting of an encoder and a decoder; during generation, each newly generated node is connected with the preceding one and fed back into the node generator to generate the next node, the generated nodes being joined by straight line segments to form a road center line; cycling in this way connects the lines into a road network;
P10, widening all straight line segments in the road network by a preset width w to form road bands of a certain width, obtaining the urban road network model, where w is set according to the corresponding road width in the registration image and is 0.5-0.8 times the actual road width corresponding to the straight line segment serving as the road segment of the road node in the registration image;
P11, for each node of P9, selecting a corresponding point within the widened range w of P10 as its marking node, which defines the road center point.
The construction of the three-dimensional model by the rotary oblique photographing device in S1 comprises dividing the modeled geographic area into a plurality of sub-areas and carrying out, for each sub-area, at least one scanning aerial photography by an unmanned aerial vehicle carrying the rotary oblique photographing device, wherein
the rotary oblique photographing device comprises five oblique photographing cameras with depression angles adjustable in the front, rear, left and right directions and a rotary platform carrying all the cameras, the rotary platform being assembled on the unmanned aerial vehicle.
The aerial photography scanning specifically comprises: taking the circumscribed rectangle of the field of view of the rotary oblique photographing device as a pixel element and performing row scanning and/or column scanning of each sub-area to finish the aerial photography scanning, wherein
two groups of oblique photographic images are acquired for each pixel element. The first group is acquired with all five cameras started; the rotary platform is then rotated clockwise or counterclockwise by 45° and the four cameras other than the middle camera are started to acquire the second group. After the second group is acquired, the rotary platform rotates back to the position each camera occupied during the first group's acquisition, and the next two groups of oblique photographic images are acquired in the next direction. Each camera is numbered, and every acquired oblique photographic image is associated with the number of the camera that acquired it. For each row or column of the scan, adjacent pixel elements share a coincident boundary, and adjacent rows or columns share a coincident pixel-element boundary.
In this way oblique photography in all directions is completed, and for the pixel-element area scanned by each of the four side cameras two oblique photographic images, acquired before and after the corresponding rotation, are obtained.
Preferably, before the first group of oblique photographic images is acquired and the rotating platform rotates and/or after the second group of oblique photographic images is acquired and the rotating platform reversely rotates, a third group of oblique photographic images is acquired, wherein the third group of oblique photographic images is acquired by adjusting the depression angle of at least one of the upper camera, the lower camera, the left camera and the right camera to one surface which is axisymmetric to the ground projection direction, and the depression angle of the at least one camera is restored to the depression angle state before adjustment after the acquisition is completed.
Regarding S3
The definition of the primitives and semantic expressions of the models at different levels through the OGC CityGML standard specifically comprises: using CityGML to define 5 levels of LoD for the multi-scale expression of the building, where LoD0 expresses the bottom or roof contour plane of the building as a 2.5D polygon and corresponds to the third primitive; LoD1 simply represents the three-dimensional model of the building's outer walls as a block and corresponds to the second primitive; LoD2 adds, on the basis of LoD1, the description of the roof and its attached structures and of the building's exterior texture, and corresponds to the set of the first, second and fourth primitives; LoD3 adds, on the basis of LoD2, the description of the detailed exterior structure of the building, including doors, windows and balconies (with their respective textures), and corresponds to the set of the first, second, fourth and sixth primitives; LoD4 adds, on the basis of LoD3, the expression of the internal facilities of the building and of the building's bottom surface, displayed by hiding the sixth primitive and at least part of the first and second primitives;
Preferably, the expressions of the different primitives are mapped through logical operations between the LoD labels. For example, LoD2-LoD1 indicates that the roof with its attached structures and textures is displayed while the second primitive is hidden; LoD3-LoD2 displays only the doors, windows and balconies (including their respective textures); LoD3-LoD1 displays the remaining model with the second primitive hidden; and LoD4-LoD1 means hiding all exterior wall parts of the building.
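These logical operations amount to set algebra over the primitive sets associated with the LoD labels; a minimal sketch, with the LoD-to-primitive table taken from S2/S3 above:

```python
LOD = {  # LoD label -> primitives expressed (per the mapping in S2/S3)
    "LoD0": {"third"},
    "LoD1": {"second"},
    "LoD2": {"first", "second", "fourth"},
    "LoD3": {"first", "second", "fourth", "sixth"},
    "LoD4": {"first", "second", "fourth", "sixth", "fifth"},
}

print(LOD["LoD2"] - LOD["LoD1"])                  # {'first', 'fourth'}: roof + texture
print(LOD["LoD3"] - LOD["LoD2"])                  # {'sixth'}: doors, windows, balconies
print(LOD["LoD3"] - LOD["LoD1"])                  # remaining model, second hidden
print(LOD["LoD3"] - (LOD["LoD2"] - LOD["LoD1"]))  # {'second', 'sixth'}: roof hidden
```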
Advantageous effects
The pixel-element scanning oblique photography based on rotation and depression-angle change covers most photographic dead angles, making the modeling more accurate and efficient;
indexing the primitives by LoD labels perfects the expression of geographic entities at their different levels.
Drawings

Figure 1 is a schematic diagram of the acquisition of geographic entity data for the various specified areas of city A according to embodiment 1 of the invention,
Figure 2 is a schematic diagram of the RNN recurrent neural network algorithm flow and the urban road network generation process of the invention,
Figure 3a is a schematic diagram of the local road network within the circle of Figure 2, showing the widening of the segment represented by the circled road node C, i.e. the lateral selection of the road center point,
Figure 3b is an enlarged schematic view of the vicinity of the road node C circled in Figure 3a,
Figure 4 is a schematic diagram of the extraction of the building boundary cover M by the multi-layer RNN based on the convolutional long short-term memory (ConvLSTM) of the CNN backbone network, and of the vertex prediction points based on the boundary cover M,
Figure 5 is a schematic view of the acquisition of the current building center points of building S1 and the concave building S2,
Figure 6 is a schematic diagram of the structure of the rotary oblique photographing device,
Figure 7 is a remote-sensing image of a city sub-area with a schematic representation of the pixel-element scanning pattern,
Figure 8 is a schematic view of the formation of the scanning pixel elements and of the field-of-view transformations within a pixel element, where 8a shows the field-of-view distribution before transformation and the formation of the pixel element, 8b the field-of-view change after a 45° clockwise rotation, and 8c and 8d the field-of-view changes immediately before and after the depression-angle adjustments performed after the acquisitions of 8a and 8b respectively,
Figure 9 is a schematic representation of the multi-scale expression of a building using the 5-level CityGML definition LoD1-LoD4, where the lower left contains the schematic case of the expression LoD3-LoD1,
Figure 10 is a schematic diagram of the logical operations between LoD labels mapped to combined expressions of different primitives, where 10a is LoD2-LoD1, 10b is LoD3-LoD2, 10c is LoD4-LoD1 and 10d is LoD3-(LoD2-LoD1),
Reference numerals: 1 five-camera oblique photographing device, 1-1 the five cameras (front, rear, left, right, middle), 1-2 main box, 1-3 interface, 2 rotary platform, 2-1 rotary connecting rod.
Detailed Description
First embodiment: this embodiment describes the establishment of the pre-model and of the three-dimensional model and the matching of the two; it comprises four examples that together provide a complete description.
Example 1
This example describes the method of S1 for acquiring the ground top-view image and synchronously acquiring the aviation LIDAR point cloud map by unmanned aerial vehicle aerial photography or satellite remote-sensing photography.
As shown in fig. 1, city A is divided into a plurality of specified areas, including a color-filled rectangular area in the lower right corner and circular, elliptical, pentagonal and two further rectangular specified areas. The minimum circumscribed rectangles of the circular and elliptical areas are obtained; for the pentagon, the four sides of a rectangle are moved parallel toward it and stopped when tangent or intersection points are detected, forming its circumscribed rectangle. The figure indicates the forward flight direction of the unmanned aerial vehicle for all the circumscribed rectangles of the circle, ellipse and pentagon.
In this embodiment, taking the lower-right rectangular area as the first specified area as an example, an enlarged view is formed below it; R is the rectangular image-acquisition range of the unmanned aerial vehicle, and the rectangular area consists of 48 R rectangles, with the exposure time point sequence t0, t1, ..., t11, ..., t47 as marked in the enlarged view. Starting with the first exposure at the initial time t0 in the arrow direction, the lower and left boundaries of the R rectangle (shown, for clarity of illustration, as a green box slightly exceeding the first specified area) coincide exactly with the lower and left boundaries of the first specified area, any boundary exceeding the first specified area (at least one of the lower and left boundaries) lying within the range of the blue box outside it. After the flight has covered the width of one R rectangle, the second exposure is made at time t1, and so on until the flight reaches t11 near the boundary of the rectangular area, where the 12th exposure is made; there the upper and left boundaries of the R rectangle (again indicated by a slightly exceeding green box) coincide with the upper and left boundaries of the first specified area, or the exceeding boundary (at least one of the upper and left boundaries) lies within the blue box outside the first specified area. The unmanned aerial vehicle then turns, moves left (with the unmanned aerial vehicle's forward flight direction as reference) by one R-rectangle length in the arrow direction, and continues in reverse flight along the arrow in the image, acquiring exposures with the same selection of exposure time points, until the last reverse flight ends with the 48th exposure at time t47, completing the image acquisition of the first specified area.
The image-map acquisition for the circular, elliptical and pentagonal areas is completed in the same way on the basis of their circumscribed rectangles and specified forward directions.
At the same time, the LIDAR performs a global scan of the first specified area at the initial time t0, obtaining the global scanning point cloud map of the first specified area.
Example 2
This embodiment describes the acquisition of the road center points and the building center points. As shown in fig. 2, one ground top-view image of example 1 is called. Based on it, the RNN recurrent neural network algorithm defines a step length l (selected from 1-5 m according to the total length of the road) and a vector direction r as the attribute vector V, and takes each start node and the points of its K incident road directions as input points (K initial attribute vectors correspond to the K points and their starting point); the K+1 input points and the attribute vector V are input into the encoder, and the decoder generates a new node. Specifically, the input point of each direction for each starting point corresponds to coordinates (X_t, Y_t) under E, and the attribute vector V corresponds to the coordinate increment (l·cos r, l·sin r), where t is the sequence number of the current input point (0 for the start point, 1 for the first new input point); the coordinates and the attribute vector V are input to the encoder, and the decoder emits the new node (X_{t+1}, Y_{t+1}) = (X_t + l·cos r, Y_t + l·sin r) generated under E. Fig. 2 exemplarily shows the road network generation process at every 20 node-generation cycles over a total of 100 cycles; straight line segments connect the road nodes to form road center lines, as shown in fig. 3a;
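The recurrence at the heart of the node generator can be sketched as follows; the encoder-decoder is stubbed out, and the displacement form (l·cos r, l·sin r) follows the step-length/direction definition above:

```python
import math

def generate_centerline(start, l, r, node_generator, n_nodes=100):
    """start: (x, y) under E; l: step length (1-5 m); r: direction.
    node_generator(prev_xy, v) stands in for the encoder-decoder and
    returns the next node."""
    v = (l * math.cos(r), l * math.sin(r))        # attribute vector V
    nodes = [start]
    for _ in range(n_nodes):
        nodes.append(node_generator(nodes[-1], v))
    return list(zip(nodes[:-1], nodes[1:]))       # straight line segments

segments = generate_centerline(
    (0.0, 0.0), 2.0, 0.3,
    lambda p, v: (p[0] + v[0], p[1] + v[1]))      # trivial generator stub
```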
fig. 3a is a schematic diagram of the widening of the local road network within the circle of fig. 2. The road center lines of the local road network of fig. 3a are expanded to both sides by the preset width w, generating road bands of a certain width and thus the urban road network model, where w is 0.8 times the road width defined by the actual road boundaries in the ground top-view image, forming the widened boundary.
Through road node C, a perpendicular to the center line of either of the roads on the two sides of the node is drawn, as shown in fig. 3b, intersecting the boundary of the widened band w; one intersection is selected, and a point on the perpendicular at a preset distance from the selected intersection and lying within the widened range w is taken as the road center point. With C as center, the distance from C to the tangent point on the widened boundary is taken as the preset distance (radius), which yields the road center point of fig. 3b.
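A sketch of the widening and of the band containment test using Shapely; whether w is applied per side or in total is not fully fixed by the text, so per-side buffering is an assumption here, as are the coordinates:

```python
from shapely.geometry import LineString, Point

centerline = LineString([(0, 0), (30, 0), (55, 18)])   # road nodes under E
w = 0.8 * 6.0                                          # 0.8 x actual road width
band = centerline.buffer(w, cap_style=2)               # widened road band, flat caps

c = Point(30, 0)                                       # road node C
# preset distance: from C to its tangent point on the widened boundary
preset = c.distance(band.exterior)
print(band.contains(c), round(preset, 2))              # True, ~w
```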
The road is part of the traffic elements. Traffic elements are expressed in LoD0 as a linear network; starting from LoD1, all of them are geometrically described by three-dimensional surfaces. The traffic model of CityGML is provided by the thematic extension module Transportation, whose most important class is TransportationComplex, which can express roads, tracks, railways, squares, etc. A TransportationComplex is composed of two parts, TrafficArea and AuxiliaryTrafficArea. Starting from LoD1, TransportationComplex provides an explicit surface geometry reflecting the actual shape of the object, not just its centerline. From LoD2 to LoD4 it is further thematically subdivided into TrafficArea for the major traffic uses, such as automobiles, trains, public transport, airplanes, bicycles or pedestrians, and AuxiliaryTrafficArea for parts of lesser importance for transportation purposes, such as road signs, greenbelts or flower pots.
For the road parts, this embodiment can accordingly build a multi-level expression with the traffic classification of CityGML.
Next, the urban building network model is established and the urban building center points are acquired using the artificial intelligence network. The method specifically comprises the following steps:
as shown in fig. 4, based on the ground top-view image called in the above step, a VGG-16 without added layers is used as the CNN backbone network to extract a series of feature maps from its different convolution layers, the feature maps being 1/8 of the input image size;
meanwhile, a feature pyramid is constructed from different layers of the CNN backbone network through the feature pyramid network (FPN) algorithm, and the bounding boxes of a plurality of buildings are predicted,
for each building in a plurality of buildings, obtaining a local feature map F of the building by utilizing a RoIAlign algorithm for the feature map obtained by the series of different convolution layers and the frame of the corresponding building;
for each building, the local feature map F is processed by convolution layers to form a polygonal boundary cover M, and further convolution-layer processing then yields the 5 predicted vertices a, b, c, D1, D2 of the boundary cover M.
As shown in fig. 5, with the X-axis direction of coordinate system E as reference, the point D1 with the largest abscissa among the 5 predicted vertices of building S1 is selected as the first calibration point. The distances from this calibration point to the remaining 4 points are computed in the counterclockwise path order a, b, c, D2 connecting the predicted vertices, and D1 is connected with the farthest point b. Correspondingly, the adjacent vertex D2 predicted on the boundary cover M with the shortest distance to the first calibration point is selected as the second calibration point and connected in the same way with its farthest point a. The intersection X of the two connecting segments is taken as the center point of building S1.
For building S2 with a concave roof, taken as an example, the predicted points are the 8 points D3, e, f, g, h, i, j, D4, as shown in fig. 5. The point D3 with the largest abscissa among the 8 predicted points is selected as the first calibration point; the distances to the remaining 7 points are computed in the counterclockwise path order i, h, e, f, g, j, D4 connecting the predicted points, and D3 is connected with the farthest point f. The adjacent vertex D4 predicted from the boundary cover M of the concave building is correspondingly selected as the second calibration point and connected in the same way with its farthest point e, and the intersection x' of the two connecting segments is obtained as the current center point of building S2.
Example 3
This example then describes the specific implementation of registering the unmanned aerial vehicle image map of the specified area with the LIDAR point cloud map in the software to obtain the registration image of the specified area, establishing the three-dimensional monomer set of the model, and determining the primitives.
Still taking the first specified area of example 1 as an example, as shown in fig. 1, a spatial rectangular coordinate system E of the specified area of city A is established, with the X-axis and the Y-axis respectively parallel to adjacent sides of the first rectangular specified area.
The position of the unmanned aerial vehicle at the exposure time t0 of one image and the lower-right vertex of that image are taken as positioning points; the two points in the global scanning point cloud map whose coordinates under E equal those of the positioning points in the image map are the anchor points of the global scanning point cloud map.
The 48 acquired image maps are spliced in the order of their exposure times along the flight route, t0, t1, ..., t11, ..., t47, to obtain the spliced image map.
The global scanning point cloud map is imported into the completed spliced image map in the geographic image software, and the positioning points of the spliced image map and of the global scanning point cloud map with the same coordinates under the established E are brought into coincidence by translation, rotation and scaling, realizing the registration of the two by superposition.
Example 4
This example describes the 1:1 matching of the three-dimensional model with the pre-model. First, according to the elevation information of the building top surfaces in the point cloud map, the current building center points of example 2 are translated vertically along the Z axis of coordinate system E so that each building's boundary cover M in the spliced image map is spliced to the elevation of its top surface, completing the three-dimensional model of the current specified area; for the areas other than the rectangular area of example 1, the three-dimensional model is constructed in the same way. Meanwhile, internal image maps of the buildings are collected, and the internal facilities in them are registered in the corresponding internal spaces of the buildings in the three-dimensional model to form the fifth primitive. Finally, the E coordinates in the three-dimensional model and in the pre-model are registered by adjusting any two points of a preselected geographic entity (for example the two points D1 and D2 of example 2, or the two points D3 and D4) so that the distances between the two corresponding points in the three-dimensional model and in the pre-model are equal, that is, the 1:1 registration is completed.
Second embodiment
This embodiment describes the three-dimensional model construction with the rotary oblique photographing device. Fig. 6 is a schematic structural diagram of the device: the rotary oblique photographing device 1 comprises five cameras 1-1 (front, rear, left, right and middle; middle not shown), a main box 1-2 containing the depression-angle control mechanism, an image acquisition card and the signal and data wireless transmission device (not shown), and an interface 1-3 on the top plate of the main box 1-2 that connects to the rotary connecting rod 2-1 of the rotary platform 2. The rotary platform 2 engages, through its internal teeth (not shown), with the output shaft gear of a motor mounted on the unmanned aerial vehicle, so as to control the rotation and reverse rotation of the five-camera oblique photographing device 1.
Fig. 7 is a remote-sensing image of a city sub-area. For ease of explaining the scanning mode, the red frame in the upper-right corner of fig. 7 is one scanning pixel element; the scan proceeds in the arrow direction to the next, yellow pixel element, the two sharing a coincident edge. Row scanning is completed in this way, and after each line feed every pixel element of the new row shares one coincident edge with the adjacent pixel element of the previous row.
Each pixel element of fig. 7 is the red frame shown in fig. 8a, in which the fields of view of the five cameras 1-1 are shown by way of example; the red frame is the circumscribed square of the front, rear, left and right fields of view. After the first group of five images is acquired at the position of fig. 8a, the motor mounted on the unmanned aerial vehicle rotates the rotary platform 2 clockwise by 45° to the state of fig. 8b to acquire the second group of five images; the rotary platform 2 then rotates back counterclockwise, the depression angles are restored to the state of fig. 8a, and the next group of five images is acquired over the range of the yellow pixel-element frame of fig. 7.
For the third group of five images, as shown in fig. 8c: after the first group of five images is acquired and before the rotary platform 2 rotates, the signal and data wireless transmission device in the main box 1-2 receives the control signal of the unmanned aerial vehicle and instructs the depression-angle control mechanism to adjust the depression angles of the front, rear, left and right cameras 1-1, so that the images are acquired on the sides axisymmetric to their ground projection directions. As shown in fig. 8d, after the second group of five images is acquired at the position of fig. 8b and before the reverse rotation and the restoration to the state of fig. 8a, the depression angles of the front, rear, left and right cameras 1-1 may likewise be adjusted to acquire images on the sides symmetric to their respective ground projection directions. In figs. 8c and 8d the dashed and solid lines indicate the field-of-view ranges before and after the adjustment, respectively.
All acquired image maps are grouped and associated with the camera numbers according to their acquisition groups, the pixel elements are numbered, and the mapping between each numbered camera and the corresponding captured image under each pixel-element number is completed. Finally, the three-dimensional automatic modeling of the images is completed using this mapping.
In one preferred embodiment, after the depression angle is adjusted, the furthest field-of-view boundary coincides with the opposite field-of-view boundary of the middle camera. This reduces the overlapping image portions and the redundant computation of the three-dimensional reconstruction.
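A back-of-envelope check of this condition under a pinhole-camera, flat-ground assumption: if the near footprint boundary of a side camera tilted by φ (half-FOV α) is to coincide with the far footprint boundary of the nadir camera (half-FOV β), then tan(φ - α) = tan(β), i.e. φ = α + β. This reading of the coincidence condition, and all values below, are assumptions:

```python
import math

def required_tilt(alpha_deg, beta_deg):
    """Optical-axis tilt from vertical so the footprints just touch."""
    return alpha_deg + beta_deg              # from tan(phi - alpha) = tan(beta)

def footprint(h, phi_deg, alpha_deg):
    """Ground interval [near, far] covered by a camera tilted by phi."""
    phi, a = math.radians(phi_deg), math.radians(alpha_deg)
    return h * math.tan(phi - a), h * math.tan(phi + a)

h, alpha, beta = 100.0, 20.0, 25.0           # flight height, half-FOVs
phi = required_tilt(alpha, beta)             # 45 degrees
near, _ = footprint(h, phi, alpha)
print(round(near, 2), round(h * math.tan(math.radians(beta)), 2))  # equal
```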
Third embodiment
This embodiment explains the establishment of the primitive and semantic mapping table of each level. First, the part of the three-dimensional model coinciding with a building roof in the pre-model is extracted as the first primitive; the building part of the three-dimensional model remaining after this extraction is taken as the second primitive; the projection of the first primitive and/or the building ground under E is taken as the third primitive; the surface texture features of the first and second primitives in the three-dimensional model are extracted as the fourth primitive; and the building windows, doors and balconies in the three-dimensional model are extracted to form the sixth primitive.
Using CityGML, 5 levels of LoD are defined for the multi-scale expression of the building, as shown in fig. 9. LoD0 expresses the bottom or roof contour plane of the building as a 2.5D polygon, corresponding to the third primitive; LoD1 simply represents the three-dimensional model of the building's outer walls as a block, corresponding to the second primitive; LoD2 adds, on the basis of LoD1, the description of the roof and its attached structures and of the building's exterior texture, corresponding to the set of the first, second and fourth primitives; LoD3 adds, on the basis of LoD2, the detailed exterior structure of the building (the LoD3 of fig. 9, for example, may include a door and windows on three sides, with their respective textures), corresponding to the set of the first, second, fourth and sixth primitives; LoD4 adds, on the basis of LoD3, the expression of the internal facilities of the building (six-person tables and four chairs) and of the building's bottom surface (gray), optionally displayed by hiding the sixth primitive and at least part of the first and second primitives. The primitive and semantic mapping table of each level is as follows:

LoD0: third primitive; bottom or roof contour plane (footprint)
LoD1: second primitive; block model of the outer walls
LoD2: first, second, fourth primitives; roof, attached structures, exterior texture
LoD3: first, second, fourth, sixth primitives; openings (doors, windows, balconies) with textures
LoD4: all primitives including the fifth; internal facilities and bottom surface, with the sixth and at least part of the first and second primitives hidden
The opening refers to the opening structures of the doors, windows and balconies; optionally it may also refer to the detailed exterior structure of the building and/or the unoccluded space formed by hiding at least part of the first and second primitives, in which at least part of the internal facilities can be displayed and the rooms are shown through the unoccluded space.
The coordinates of the center points serve as the association pointers of the LoDs for the different buildings, forming a mapping from each building to its respective LoDs.
In this embodiment, the coordinates of the center points may also serve as the association pointers of the LoDs for the different road segments, forming a mapping from each road segment to the LoDs belonging to it. Optionally, the embodiment combines the LoD mapping of the road segments with that of the buildings, that is, LoD5 is added as the mapping relation for road segments on top of the original 5-level multi-scale LoD expression of the buildings, so that roads and buildings achieve salient expression under LoD.
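A sketch of this association, with center-point coordinates under E as the pointers from entities to their LoD expressions (including the optional LoD5 for road segments); the structure and values are illustrative:

```python
entity_lods = {
    (412.5, 883.0, 0.0): {            # building center point under E
        "kind": "building",
        "LoD0": "footprint_polygon", "LoD1": "block_model",
        "LoD2": "roof_and_texture", "LoD3": "openings", "LoD4": "interior",
    },
    (120.0, 455.5, 0.0): {            # road center point under E
        "kind": "road_segment",
        "LoD5": "road_segment_model",
    },
}

def express(center, lod):
    """Return the model expression of the entity at this LoD, if any."""
    return entity_lods[center].get(lod)
```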
As shown in fig. 10, the combined expressions of the different primitives are mapped through logical operations between the LoD labels. Fig. 10a shows LoD2-LoD1, i.e. the roof with its attached structures and textures; fig. 10b shows LoD3-LoD2, only the doors, windows and balconies (including their respective textures); the lower left of fig. 9 can be LoD3-LoD1, the remaining model with the second primitive hidden; fig. 10c shows LoD4-LoD1, hiding all exterior wall parts of the building; and fig. 10d is LoD3-(LoD2-LoD1), hiding the roof and the roof structures (the roof side-wall parts) and their textures.
And finally, the model is reconstructed based on the primitive and semantic mapping relation table to derive the models of different levels, where each primitive is associated with its own LoD label; that is, the same building object has model expressions associated with different LoD labels, all associated with the same corresponding geographic entity, and the corresponding display mode is selected at different levels of detail to realize salient expression of the entity under the corresponding LoD.
The embodiment of the invention also provides an unmanned aerial vehicle for realizing the geographic entity polymorphic expression method based on the three-dimensional semantic model. The unmanned aerial vehicle carries a rotary oblique photographing device comprising five oblique photographing cameras with depression angles adjustable in the front, rear, left and right directions, a main box containing the depression-angle control mechanism, an image acquisition card and the signal and data wireless transmission device, and a rotary platform carrying all the cameras. The rotary platform is assembled on the unmanned aerial vehicle; the top plate of the main box provides an interface connected to the rotary connecting rod of the rotary platform, and the rotary platform engages through its internal teeth with the output shaft gear of a motor carried on the unmanned aerial vehicle, so as to control the rotation and reverse rotation of the rotary oblique photographing device.

Claims (7)

1. The geographic entity polymorphic expression method based on the three-dimensional semantic model mainly comprises the following steps:
S1, establishing a geospatial rectangular coordinate system E, constructing a three-dimensional model based on a rotary oblique photographing device and registering it at 1:1 scale with a pre-model; collecting internal image maps of buildings and registering the internal facilities in them with the corresponding internal spaces of the buildings in the three-dimensional model to form a fifth primitive; wherein the internal facilities include stairways, rooms, office furniture and household furniture;
S2, extracting the part of the three-dimensional model that coincides with a building roof in the pre-model as a first primitive, taking the building part of the three-dimensional model remaining after this extraction as a second primitive, and taking the projection of the first primitive and/or the building ground under E as a third primitive; extracting the surface texture features of the first and second primitives in the three-dimensional model as a fourth primitive; extracting the building windows, doors and balconies in the three-dimensional model to form a sixth primitive;
S3, defining the primitives and semantic expressions of the models at different levels through the OGC CityGML standard, and establishing the primitive and semantic mapping table of each level;
S4, reconstructing the model based on the primitive and semantic mapping relation to derive the models of different levels, where each primitive is associated with its own LoD label, that is, the same building object has model expressions corresponding to different LoD labels, all associated with the same corresponding geographic entity, and the corresponding display mode is selected at different levels of detail to realize salient expression of the entity under the corresponding LoD;
The construction of the pre-model in S1 specifically comprises the following steps:
S1-1, recording the image map of a specified area by unmanned aerial vehicle aerial photography while acquiring the aviation LIDAR point cloud map, obtaining the geographic entity data; the unmanned aerial vehicle carries a rotary oblique photographing device comprising five oblique photographing cameras with depression angles adjustable in the front, rear, left and right directions, a main box containing the depression-angle control mechanism, an image acquisition card and the signal and data wireless transmission device, and a rotary platform carrying all the cameras; the rotary platform is assembled on the unmanned aerial vehicle, the top plate of the main box provides an interface connected to the rotary connecting rod of the rotary platform, and the rotary platform engages through its internal teeth with the output shaft gear of a motor carried on the unmanned aerial vehicle so as to control the rotation and reverse rotation of the rotary oblique photographing device;
S1-2, registering the image map of the specified area taken by the unmanned aerial vehicle with the LIDAR point cloud map to obtain the registration image of the specified area; according to the elevation information of the building top surfaces in the point cloud map, vertically translating the current building center point along the Z axis of coordinate system E so as to splice each building's boundary cover M in the image map to the elevation of its top surface, completing the three-dimensional model of the current specified area;
S1-3, selecting several other specified areas and repeating steps S1-1 to S1-2 to obtain the registration images of the selected areas, thereby building the three-dimensional model and completing the pre-model;
wherein constructing the three-dimensional model based on the rotary oblique photographing device in S1 comprises dividing the geographic area to be modeled into a plurality of sub-areas and performing an aerial scan of each sub-area with at least one unmanned aerial vehicle carrying the rotary oblique photographing device, the rotary oblique photographing device comprising the five oblique photographing cameras with adjustable depression angles in the up, down, left, and right directions and the rotary platform carrying all the cameras, the rotary platform being assembled on the unmanned aerial vehicle;
the aerial scan specifically comprises: taking the circumscribed rectangle of the field of view of the rotary oblique photographing device as a pixel element, scanning each sub-area by rows and/or columns to complete the aerial scan, wherein
two groups of oblique photographic images are acquired at each pixel element: the first group is acquired with all five cameras enabled; the rotary platform is then rotated clockwise or anticlockwise by 45 degrees and the second group is acquired with the four cameras other than the middle camera enabled; after the second group is acquired, the rotary platform is rotated back so that each camera returns to the position it held at the completion of the first acquisition, and the next two groups of oblique photographic images are acquired at the next position; each camera is numbered, and each acquired oblique photographic image is associated with the number of the camera that acquired it; within each row or column scan, adjacent pixel elements share a coincident boundary, and adjacent rows or columns share a coincident pixel-element boundary;
all acquired image maps are grouped and associated with the camera numbers according to their acquisition group, the pixel elements are numbered, and a mapping is established between each numbered camera and the corresponding captured image under each pixel-element number; this mapping is finally used to complete the automatic three-dimensional modeling of the imagery;
before the rotary platform rotates following the first group of oblique photographic images and/or after it rotates back following the second group, a third group of oblique photographic images is acquired; the third group is acquired by adjusting the depression angle of at least one of the up, down, left, and right cameras so that it captures a face axisymmetric to its ground-projection direction, the depression angle of that camera being restored to its pre-adjustment state after the acquisition is completed; after the depression-angle adjustment, the farthest boundary of that camera's field of view coincides with the opposite end boundary of the middle camera's field of view.
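The bookkeeping behind the aerial scan — numbered pixel elements, numbered cameras, and per-group images — can be sketched as a nested mapping. Everything below (the camera labels, the group numbering, the record_cell helper) is illustrative; the claim specifies the mapping itself, not any particular representation.

CAMERAS = {1: "up", 2: "down", 3: "left", 4: "right", 5: "middle"}

def record_cell(cell_no: int, shots: list) -> dict:
    """shots: (group, camera_no, image_id) triples taken at one pixel element.
    Group 1: all five cameras; group 2: platform rotated 45 degrees with the
    middle camera off; optional group 3: a re-pitched side camera."""
    mapping = {}
    for group, cam, image in shots:
        if group == 2 and CAMERAS[cam] == "middle":
            raise ValueError("middle camera is disabled in the rotated group")
        mapping.setdefault(group, {})[cam] = image
    return {cell_no: mapping}

cell = record_cell(7, [(1, 5, "img_0705"), (2, 1, "img_0706"), (2, 3, "img_0707")])
print(cell[7][2])   # images of the second (rotated) group at pixel element 7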
2. The method according to claim 1, wherein S1-1 comprises the following steps:
P1, setting the flight routes of the unmanned aerial vehicle within the specified area and the other specified areas, together with the exposure time points of the aerial photographing device on the unmanned aerial vehicle;
P2, the unmanned aerial vehicle takes off and flies along the flight route while acquiring images at the exposure time points, thereby obtaining a plurality of image maps; the airborne LIDAR point cloud map is acquired synchronously;
wherein the flight route consists of a plurality of straight-line segments, and
if the specified area in P1 is rectangular, the exposure time points are set as follows (see the geometric sketch after this list): the image acquisition range of the unmanned aerial vehicle on the flight route is set as a rectangular region R; after the current image acquisition, the next exposure time point is selected once the unmanned aerial vehicle has flown the width of R in the flight direction; when the upper boundary of R overlaps or passes beyond the upper boundary of the specified area in the flight direction, the unmanned aerial vehicle turns, shifts left by one length of R, and flies in the reverse direction to continue acquisition, with exposure time points selected as in forward flight; when the lower boundary of R overlaps or passes beyond the lower boundary of the specified area in the flight direction, the unmanned aerial vehicle turns again, shifts right by one length of R, and flies forward again to continue acquisition, the selection of exposure time points remaining unchanged; repeating this cycle completes image acquisition over the whole of the specified area;
if the specified area is circular or elliptical, the minimum circumscribed rectangle of the circle or ellipse is first constructed, and the exposure time points are set, and the images acquired, based on this minimum circumscribed rectangle in the same manner as for a rectangular specified area;
if the specified area has any other shape, its circumscribed rectangle is first constructed, and the exposure time points are set, and the images acquired, based on this circumscribed rectangle in the same manner as for a rectangular specified area; here the circumscribed rectangle is obtained by moving the four sides of a rectangle toward the area until each side has a tangent point or intersection point with it, at which point the rectangle is the circumscribed rectangle;
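Geometrically, the exposure rule above lays out acquisition rectangles R in a back-and-forth (boustrophedon) pattern: an along-track step of one R-width between exposures and a lateral shift of one R-length at each turn, with non-rectangular areas first wrapped in their circumscribed rectangle. A simplified Python sketch follows; the coordinate convention and the function name exposure_centers are assumptions, and exposure timing is left to ground speed.

import math

def exposure_centers(area_len: float, area_wid: float,
                     r_len: float, r_wid: float) -> list:
    """Centers of the acquisition rectangle R, in flight order, covering a
    rectangular area; R's width is measured along the flight direction."""
    n_steps = math.ceil(area_wid / r_wid)    # exposures per pass
    n_passes = math.ceil(area_len / r_len)   # lateral passes
    centers = []
    for p in range(n_passes):
        x = (p + 0.5) * r_len                # shift one R-length at each turn
        ys = [(s + 0.5) * r_wid for s in range(n_steps)]
        if p % 2:                            # reverse leg after the turn
            ys.reverse()
        centers += [(x, y) for y in ys]
    return centers

print(len(exposure_centers(1000.0, 600.0, 120.0, 80.0)))  # 9 passes x 8 exposures = 72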
the airborne LIDAR point cloud map in P2 is acquired synchronously as follows: the point cloud scanning program is started synchronously according to the exposure time points, and the whole of the specified area is scanned synchronously from the initial moment of flight;
S1-2 specifically comprises the following steps:
P3, selecting positioning points in the image map and in the whole-area scanned point cloud map;
P4, stitching the acquired image maps in exposure-time order along the flight route to obtain a stitched image map, and overlapping the synchronously acquired airborne LIDAR point cloud map with the stitched image map according to the one-to-one correspondence of the positioning points so as to complete the registration;
wherein in P3 two positioning points are set in each of the image map and the whole-area scanned point cloud map, and each positioning point in one map has the same coordinates under E as one positioning point in the other.
3. The method according to claim 2, wherein one of the positioning points in the image map in P3 coincides with the projection, onto the XOY plane of E, of the unmanned aerial vehicle's position at the corresponding exposure time point, and the other is selected to coincide with the projection of one of the vertices of R onto the XOY plane of E;
in P4, image portions extending beyond the specified area are deleted before the acquired image maps are stitched in exposure-time order along the flight route;
overlapping the synchronously acquired airborne LIDAR point cloud map with the stitched image map according to the one-to-one correspondence of the positioning points specifically comprises:
overlapping the two positioning points in the whole-area scanned point cloud map with the positioning points of the same coordinates in the image map so as to complete the registration;
the overlapping operation specifically comprises importing the whole-area scanned point cloud map into the stitched image map in geographic imaging software and performing at least one of translation, rotation, and scaling on the positioning points of the same coordinates under the established E so as to achieve the overlap.
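Because claim 3 aligns the two maps by translating, rotating, and scaling until two same-coordinate positioning points coincide, the transform can be solved in closed form from the two point pairs. The sketch below is the standard two-point similarity fit; the claim names the operations, not this solution, and the function name is illustrative.

import numpy as np

def similarity_from_two_points(src: np.ndarray, dst: np.ndarray):
    """Map the two positioning points of the point cloud map (src, 2x2 XY)
    onto the same-coordinate points of the stitched image map (dst, 2x2 XY)."""
    v_src, v_dst = src[1] - src[0], dst[1] - dst[0]
    scale = np.linalg.norm(v_dst) / np.linalg.norm(v_src)
    angle = np.arctan2(v_dst[1], v_dst[0]) - np.arctan2(v_src[1], v_src[0])
    c, s = np.cos(angle), np.sin(angle)
    R = scale * np.array([[c, -s], [s, c]])   # rotation combined with scaling
    t = dst[0] - R @ src[0]                   # translation
    return R, t                               # apply as: p' = R @ p + t

R, t = similarity_from_two_points(np.array([[0., 0.], [1., 0.]]),
                                  np.array([[2., 1.], [2., 3.]]))
print(R @ np.array([1., 0.]) + t)   # -> [2. 3.], the second positioning point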
4. The method according to any one of claims 1-3, wherein obtaining the current building center point in S1-2 comprises the following steps:
P5, based on the registered image, extracting a series of feature maps from different convolution layers using a VGG-16 network without added layers as the CNN backbone, the feature maps being 1/2 to 1/10 of the input image size;
meanwhile, constructing a feature pyramid from the different layers of the CNN backbone using the feature pyramid network (FPN) algorithm, and predicting the bounding boxes of a plurality of buildings;
P6, for each of the plurality of buildings, obtaining a local feature map F of the building by applying the RoIAlign algorithm to the feature maps from the series of different convolution layers and the bounding box of the corresponding building;
P7, processing the local feature map F of each building with convolution layers to form a polygonal boundary mask M, and then using further convolution-layer processing to produce P predicted vertices of the boundary mask M;
P8, selecting, from the P predicted vertices, the point with the largest or smallest abscissa or ordinate as a first calibration point, and if several points share the same extreme abscissa or ordinate, taking among them the point with the largest or smallest ordinate or abscissa as the first calibration point; computing the distances from the first calibration point to the remaining P-1 points in the order of the path connecting the predicted vertices clockwise or anticlockwise, and connecting the first calibration point to the farthest of those points; correspondingly selecting, as a second calibration point, the adjacent vertex predicted on the boundary mask M at the shortest distance from the first calibration point, and connecting it to its own farthest point in the same way; the intersection point of the two connecting line segments is then the center point of the current building.
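The P8 construction joins each of two neighboring calibration points to its farthest predicted vertex and intersects the two chords. A NumPy sketch under simplifying assumptions — vertices given in path order, the tie-break fixed to "smallest abscissa, then smallest ordinate", and helper names invented:

import numpy as np

def building_center(verts: np.ndarray) -> np.ndarray:
    """verts: P x 2 predicted vertices of boundary mask M, in path order."""
    i0 = int(np.lexsort((verts[:, 1], verts[:, 0]))[0])   # first calibration point
    d0 = np.linalg.norm(verts - verts[i0], axis=1)
    j0 = int(np.argmax(d0))                               # farthest vertex from it
    nbrs = [(i0 - 1) % len(verts), (i0 + 1) % len(verts)]
    i1 = min(nbrs, key=lambda k: d0[k])                   # nearer path-neighbor: second calibration point
    d1 = np.linalg.norm(verts - verts[i1], axis=1)
    j1 = int(np.argmax(d1))                               # its farthest vertex
    return segment_intersection(verts[i0], verts[j0], verts[i1], verts[j1])

def segment_intersection(a, b, c, d) -> np.ndarray:
    """Intersection of chords a-b and c-d (they cross for a well-formed mask)."""
    r, s = b - a, d - c
    denom = r[0] * s[1] - r[1] * s[0]
    t = ((c - a)[0] * s[1] - (c - a)[1] * s[0]) / denom
    return a + t * r

square = np.array([[0., 0.], [2., 0.], [2., 2.], [0., 2.]])
print(building_center(square))   # -> [1. 1.] for an axis-aligned square mask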
5. The method according to claim 1, wherein S3, defining the primitives and semantic expressions of models at different levels through the OGC CityGML standard, specifically comprises: using CityGML to define five LoD levels for the multi-scale expression of buildings, wherein LoD0 expresses the footprint or roof contour plane of the building as a 2.5D polygon and corresponds to the third primitive; LoD1 expresses the building's outer walls as a simple block-shaped three-dimensional model and corresponds to the second primitive; LoD2 adds, on the basis of LoD1, descriptions of the building's accessory structures, roof, and exterior texture, and corresponds to the set of the first, second, and fourth primitives; LoD3 adds, on the basis of LoD2, detailed exterior structures of the building, including doors, windows, and balconies, and corresponds to the set of the first, second, fourth, and sixth primitives; LoD4 adds, on the basis of LoD3, the expression of the building's interior facilities and floor, displayed by hiding the sixth primitive and at least part of the first and second primitives.
6. The method according to claim 5, wherein the expressions of the different primitives are mapped by logical operations between the LoD tags.
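Claim 6's "logical operations between LoD tags" can be read as set algebra over the per-LoD primitive sets of claim 5. A hypothetical illustration — the LOD_SETS dictionary and the delta helper are assumptions for the example, not the patented encoding:

LOD_SETS = {
    0: {"third"},
    1: {"second"},
    2: {"first", "second", "fourth"},
    3: {"first", "second", "fourth", "sixth"},
}
# LoD4 per claim 5: LoD3 plus interior facilities and floor, with the sixth
# primitive hidden -- a union followed by a difference.
LOD_SETS[4] = (LOD_SETS[3] | {"fifth", "floor"}) - {"sixth"}

def delta(lod_from: int, lod_to: int):
    """Primitives to show and to hide when switching levels of detail."""
    show = LOD_SETS[lod_to] - LOD_SETS[lod_from]
    hide = LOD_SETS[lod_from] - LOD_SETS[lod_to]
    return show, hide

print(delta(3, 4))   # -> show {'fifth', 'floor'}, hide {'sixth'}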
7. An unmanned aerial vehicle for implementing the three-dimensional-semantic-model-based geographic entity polymorphic expression method according to any one of claims 1-6, the unmanned aerial vehicle carrying a rotary oblique photographing device comprising five oblique photographing cameras with adjustable depression angles in the up, down, left, and right directions, a main box containing a depression-angle control mechanism, an image acquisition card, and wireless signal and data transmission equipment, and a rotary platform carrying all the cameras, wherein
the rotary platform is assembled on the unmanned aerial vehicle, the top plate of the main box is provided with an interface connectable to the rotary connecting rod of the rotary platform, and the rotary platform meshes, via its internal teeth, with the output-shaft gear of a motor carried on the unmanned aerial vehicle so as to control the forward and reverse rotation of the rotary oblique photographing device.
CN202210504279.1A 2022-05-10 2022-05-10 Geographic entity polymorphic expression method based on three-dimensional semantic model Active CN115272591B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210504279.1A CN115272591B (en) 2022-05-10 2022-05-10 Geographic entity polymorphic expression method based on three-dimensional semantic model

Publications (2)

Publication Number Publication Date
CN115272591A (en) 2022-11-01
CN115272591B (en) 2023-09-05

Family

ID=83760326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210504279.1A Active CN115272591B (en) 2022-05-10 2022-05-10 Geographic entity polymorphic expression method based on three-dimensional semantic model

Country Status (1)

Country Link
CN (1) CN115272591B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116258840B (en) * 2023-05-16 2023-08-11 Shenzhen University Hierarchical detail representation tree generation method, device, equipment and storage medium
CN116863096B (en) * 2023-09-04 2023-12-05 Sichuan Yunshi Information Technology Co., Ltd. Geographic information three-dimensional display method and system based on oblique photography
CN117421373B (en) * 2023-09-05 2024-04-30 Terry Digital Technology (Beijing) Co., Ltd. Method for converting an artificial model into a semantic model
CN116910131B (en) * 2023-09-12 2023-12-08 Shandong Provincial Institute of Land Surveying and Mapping Linkage visualization method and system based on a basic geographic entity database
CN117274063A (en) * 2023-10-31 2023-12-22 Chongqing Planning and Natural Resources Information Center Working method for constructing a building center-line layer

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DK3223191T3 (en) * 2021-08-16 Creating a 3D urban model from oblique imaging and lidar data
CN109934914B (en) * 2019-03-28 2023-05-16 Southeast University Embedded city design scene simulation method and system

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004102474A (en) * 2002-09-06 2004-04-02 Mitsubishi Electric Corp Generator for three-dimensional building model data, generator for three-dimensional urban model data, method of generating three-dimensional building model data, program, computer-readable recording medium
CN105931294A (en) * 2016-04-19 2016-09-07 Southwest Jiaotong University Method for converting a BIM entity model into a multiple levels of detail (LOD) GIS standardized model
CN108520557A (en) * 2018-04-10 2018-09-11 PLA Strategic Support Force Information Engineering University Graphics-image-fusion-based drawing method for massive buildings
CN109117876A (en) * 2018-07-26 2019-01-01 Chengdu Kuaiyan Technology Co., Ltd. Dense small-target detection model construction method, model, and detection method
CN109918751A (en) * 2019-02-26 2019-06-21 Central China Normal University Building three-dimensional semantic modeling method based on CityGML extension
CN113615204A (en) * 2019-03-20 2021-11-05 LG Electronics Inc. Point cloud data transmitting device, point cloud data transmitting method, point cloud data receiving device, and point cloud data receiving method
EP3944625A1 (en) * 2019-03-20 2022-01-26 LG Electronics Inc. Point cloud data transmission device, point cloud data transmission method, point cloud data reception device, and point cloud data reception method
CN110807835A (en) * 2019-10-25 2020-02-18 Nanjing Tech University Method for fusing a building BIM model with a live-action three-dimensional model
CN112862970A (en) * 2020-12-22 2021-05-28 China Design Digital Technology Co., Ltd. BIM model level-of-detail (LOD) method based on three-dimensional mesh surfaces and solid entities
CN113066112A (en) * 2021-03-25 2021-07-02 Terry Digital Technology (Beijing) Co., Ltd. Indoor and outdoor fusion method and device based on three-dimensional model data
CN113436319A (en) * 2021-07-01 2021-09-24 Terry Digital Technology (Beijing) Co., Ltd. Special-shaped arrangement matrix construction method and system for urban indoor three-dimensional semantic models
CN113920266A (en) * 2021-11-03 2022-01-11 Terry Digital Technology (Beijing) Co., Ltd. Artificial intelligence generation method and system for semantic information of city information models

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUA Jie. LOD Methods of Large-scale Urban Building Models by GPU Accelerating. Proceedings of the 2012 Second International Conference on Computer Science and Network Technology. 2013, full text. *

Also Published As

Publication number Publication date
CN115272591A (en) 2022-11-01

Similar Documents

Publication Publication Date Title
CN115272591B (en) Geographic entity polymorphic expression method based on three-dimensional semantic model
CN112150575B (en) Scene data acquisition method, model training method and device and computer equipment
CN108564647B (en) A method of establishing virtual three-dimensional map
Frueh et al. Automated texture mapping of 3D city models with oblique aerial imagery
WO2019239211A2 (en) System and method for generating simulated scenes from open map data for machine learning
Meng et al. 3D building generalisation
Toschi et al. Geospatial data processing for 3D city model generation, management and visualization
CN110060331A (en) Three-dimensional rebuilding method outside a kind of monocular camera room based on full convolutional neural networks
CN110992510A (en) Security scene VR-based automatic night patrol inspection method and system
CN109242966B (en) 3D panoramic model modeling method based on laser point cloud data
CN115438133B (en) Geographic entity geometric expression method based on semantic relation
CN110189405A (en) A kind of outdoor scene three-dimensional modeling method for taking building density into account
CN110245199A (en) A kind of fusion method of high inclination-angle video and 2D map
CN109889785A (en) A kind of dummy emulation method that the POI label based on unity is shown
US20100066740A1 (en) Unified spectral and Geospatial Information Model and the Method and System Generating It
Zhu et al. Structure-aware completion of photogrammetric meshes in urban road environment
Dorffner et al. Generation and visualization of 3D photo-models using hybrid block adjustment with assumptions on the object shape
CN115546422A (en) Building three-dimensional model construction method and system and electronic equipment
Zhu A pipeline of 3D scene reconstruction from point clouds
Zhang et al. Video surveillance GIS: A novel application
Habib et al. Integration of lidar and airborne imagery for realistic visualization of 3d urban environments
Shahabi et al. Geodec: Enabling geospatial decision making
Xu et al. Real-time panoramic map modeling method based on multisource image fusion and three-dimensional rendering
Kim et al. Using 3D GIS simulation for urban design
Huang et al. TPMT based Automatic Road Extraction from 3D Real Scenes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant