CN115438133A - Geographic entity geometric expression method based on semantic relation - Google Patents

Geographic entity geometric expression method based on semantic relation


Publication number
CN115438133A
Authority
CN
China
Prior art keywords
point
building
geographic entity
road
primitive
Prior art date
Legal status
Granted
Application number
CN202210493029.2A
Other languages
Chinese (zh)
Other versions
CN115438133B (en)
Inventor
刘俊伟
邬丽娟
杨文雪
曲冠晨
Current Assignee
Terry Digital Technology Beijing Co ltd
Original Assignee
Terry Digital Technology Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Terry Digital Technology Beijing Co ltd
Priority to CN202210493029.2A
Publication of CN115438133A
Application granted
Publication of CN115438133B
Active legal status
Anticipated expiration legal status


Classifications

    • G06F 16/29: Geographical information databases
    • G06F 16/22: Indexing; data structures therefor; storage structures
    • G06F 16/2458: Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F 21/602: Providing cryptographic facilities or services
    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/10032: Satellite or aerial image; remote sensing
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30181: Earth observation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • Processing Or Creating Images (AREA)
  • Machine Translation (AREA)

Abstract

The invention relates to a geographic entity geometric expression method based on semantic relations, which comprises the following steps: determining the primitive elements that generate the different spatial-form data of a geographic entity in geographic mapping data; establishing a geographic entity primitive attribute storage field; establishing a geographic entity basic attribute storage field, where the basic attributes can uniquely identify and distinguish geographic entities; establishing an association table between the geographic entity basic attributes and the primitive elements, which includes an association between the geographic entity code field in the basic attributes and the primitive identification codes of the corresponding primitive elements; and retrieving the association table by geographic entity code to obtain the primitive element identifiers of the different spatial forms associated with the target geographic entity, thereby realizing the geometric expression of the geographic entity. Because the association table is reached through a linked associated encryption table, retrieval of the geospatial data provides a secure big-data basis for comprehensive analysis and research.

Description

Geographic entity geometric expression method based on semantic relation
Technical Field
The invention relates to geographic entity geometric expression methods, that is, methods for associating the geometry of a geographic entity with its attribute information, and in particular to a geographic entity geometric expression method based on semantic relations. It belongs to the field of geographic entity modeling.
Background
The construction of geographic entities is an important part of building digital and smart cities. A geographic entity consists of geometric form information describing its outward appearance, attribute information describing its various semantic features, and relationship information describing its relations to other entities in the environment. It may be an objective geographic entity expressing something that physically exists, or a non-objective geographic entity expressing an abstract definition. Geographic entity data are divided into attribute data, geometric data, and relationship data. Attribute data describe the attribute features of a spatial object and are also called non-geometric data; geometric data describe the spatial features of a spatial object; relationship data describe the spatial relations between spatial objects, typically expressed as topological relations. Among these, the geometric data anchor the geographic entity to its geographic location.
The geographic entity spatial identity code is an identification code suited to the management and application of basic geographic entities; it provides globally exclusive identification, unique identification, and information association and sharing for basic geographic entities. Constructing the spatial identity code effectively improves the efficiency of organizing, processing, analyzing, transmitting, and applying basic geographic entity data, enables standardized management of basic geographic entities, and delivers more usable and convenient surveying and mapping geographic information services to various applications. On this basis, the geographic entity can serve as the core and its unique identification code as the index, carrying structured and unstructured multi-dimensional, diversified information and acting as the link that associates and fuses this information with other information.
At present, geometric expressions of geographic entities can only express the spatial position relations between entities; they cannot accurately describe the attribute associations between entities. It is therefore necessary to study a geographic entity geometric expression method based on semantic relations, realizing the geometric expression of geographic entities with the unique identification code as the index.
When geographic entities are used commercially, or in certain key security areas, geographic entities are generally constructed, transmitted, and displayed using encrypted transmission and decryption, so encryption divides into geographic entity encryption and geographic entity attribute encryption. In the prior art, encrypting video captured as a video stream produces an encryption effect on the geographic entity, so that a thief either cannot obtain the video or, having obtained it, can learn the spatial position of the geographic entity only through laborious decoding. However, once attributes are attached, encryption becomes a critical issue for the association between geographic entities and attributes, because attributes carry more detailed and more critical confidential information. If an attribute is semantically associated with a geographic entity, then as soon as the geographic entity is decoded, the attribute is retrieved along with it. How to cut off the association between the geographic entity and its attributes, or to place a barrier on that association, is the last line of defense after a geographic entity is accidentally decoded.
Disclosure of Invention
To solve the above technical problem, the present invention proceeds from the following considerations: first, establishing proprietary fields for the separate encoding of the geometric elements and attribute elements of a geographic entity; second, setting up the association between the two kinds of fields; and third, setting a cut-off password on that association, so that when the attribute information of a target geographic entity is to be retrieved through the association, the password must first be decoded.
Based on these considerations, the invention provides a geographic entity geometric expression method based on semantic relations, characterized by comprising the following steps:
S1, determining the primitive elements that generate the different spatial-form data of a geographic entity in the geographic mapping data;
S2, establishing a geographic entity primitive attribute storage field, which comprises primitive basic attributes such as the primitive identification code, primitive code, and primitive name, and primitive special attributes, which are specific attribute fields specified according to the expression content and characteristics of different geographic entity primitives;
S3, establishing a geographic entity basic attribute storage field, where the basic attributes can uniquely identify and distinguish geographic entities and comprise the geographic entity code, classification code, name, generation time, change time, and expiration time;
S4, storing the corresponding field attributes;
S5, establishing an association table between the geographic entity basic attributes and the primitive elements, including an association between the geographic entity code field in the basic attributes and the primitive identification codes of the corresponding primitive elements;
S6, retrieving the association table by geographic entity code to obtain the primitive element identifiers of the different spatial forms associated with the target geographic entity, thereby realizing the geometric expression of the geographic entity.
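As a minimal sketch of how S5 and S6 fit together, the following uses plain dictionaries as the association table and primitive store; all entity codes, primitive identification codes, and field values are hypothetical, not taken from the patent.

```python
# S5: association table mapping each geographic entity code to the
# primitive identification codes of its spatial forms (illustrative).
association_table = {
    "GEO-0001": ["PRIM-ROOT-01", "PRIM-BODY-01", "PRIM-COMP-01"],
    "GEO-0002": ["PRIM-ROOT-02"],
}

# Primitive element store keyed by primitive identification code.
primitive_store = {
    "PRIM-ROOT-01": {"type": "building base", "geometry": "polygon"},
    "PRIM-BODY-01": {"type": "building wall", "geometry": "polygon"},
    "PRIM-COMP-01": {"type": "window", "geometry": "polygon"},
    "PRIM-ROOT-02": {"type": "road segment", "geometry": "polyline"},
}

def retrieve_primitives(entity_code):
    """S6: look up the association table by entity code and return the
    primitive elements that geometrically express the entity."""
    ids = association_table.get(entity_code, [])
    return [primitive_store[pid] for pid in ids]
```

In this toy form, retrieving "GEO-0001" returns its root, main, and component primitives in one pass.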
Preferably, in S5, an associated encryption table is also established; it creates a mapping between the geographic entity primitives and the association table, forming the mapping chain geographic entity primitive ← associated encryption table ← geographic entity attribute.
With respect to S1
S1 specifically comprises the following steps:
S1-1, acquiring a ground overhead-view image map by unmanned aerial vehicle aerial photography or satellite remote sensing, synchronously acquiring an airborne LiDAR point cloud map, and then obtaining software drawings of the ground building structures;
S1-2, constructing an artificial intelligence network model of ground roads and buildings based on the ground overhead-view image map, and determining the center points of the road parts and building parts;
S1-3, performing image registration on the ground overhead-view image map, the airborne LiDAR point cloud map, and the software drawings to form a set of three-dimensional monomer models across three layers: the overhead-image layer, the point-cloud layer, and the drawing layer;
S1-4, determining the primitive elements that generate the different spatial-form data of the geographic entity. The primitive elements comprise road segments and building roofs; building bases, building walls, floors, and windows are stripped from the overhead-view image map, and the elevation of the center points determined in S1-2 is obtained from the point-cloud layer. Roads and building bases are thereby determined as root primitives, building roofs and walls as main primitives, and floors and windows as component primitives; together these form the two-dimensional primitives, and there are further three-dimensional primitives formed from the model three-dimensional monomers and oblique three-dimensional monomers.
A root primitive is a geometric figure that can completely express the spatial features of a basic geographic entity in terms of spatial position, ownership management, and the like, and that corresponds one-to-one with the basic geographic entity;
main primitives are the geometric figures other than the root primitive that can completely express the spatial features of the basic geographic entity.
A component primitive is a geometric figure that can express the spatial features of a basic geographic entity only partially.
Root primitives, main primitives, and component primitives are all used to construct basic geographic entities. Every basic geographic entity must have exactly one root primitive but need not have main or component primitives; main and component primitives are supplementary expressions of the root primitive.
The method for acquiring the ground overhead view image by using the unmanned aerial vehicle aerial photography in the S1-1 comprises the following steps:
s1-1-1, selecting at least one specified area, and finding out a circumscribed rectangle of the at least one specified area;
S1-1-2, setting the flight route of the unmanned aerial vehicle and the exposure time points of its aerial photography device based on the circumscribed rectangle;
and S1-1-3, flying the unmanned aerial vehicle according to the flying route, and simultaneously carrying out image acquisition according to the exposure time point to obtain a plurality of ground overhead image maps.
The exposure time points are set as follows. Let the image acquisition range of the unmanned aerial vehicle on the flight route be a rectangular region R. After the current image is acquired, the next exposure time point is when the vehicle has flown, in the flight direction, a distance equal to the width of R. When the upper boundary of R coincides with, or passes beyond, the upper boundary of the specified area in the flight direction, the vehicle turns: R is shifted sideways by one length, the vehicle flies in the reverse direction, and image acquisition continues with exposure time points chosen as on the forward leg. When the lower boundary of R coincides with, or passes beyond, the lower boundary of the specified area in the flight direction, the vehicle turns again, R is shifted sideways by another length, and acquisition continues in the forward direction with the same choice of exposure time points. Repeating these steps completes acquisition over the whole of the specified area.
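A minimal sketch of this boustrophedon exposure schedule, assuming an axis-aligned bounding rectangle and a rectangular footprint R with no overlap (real flight planning would add forward and side overlap); all dimensions are illustrative:

```python
import math

def exposure_points(area_w, area_h, foot_w, foot_l):
    """Exposure centers for a serpentine sweep of an area_w x area_h
    rectangle with a foot_w x foot_l image footprint: centers spaced one
    footprint-width apart along the flight direction, rows spaced one
    footprint-length apart, flight direction reversed on alternate rows."""
    n_cols = math.ceil(area_w / foot_w)   # exposures per flight row
    n_rows = math.ceil(area_h / foot_l)   # number of flight rows
    points = []
    for row in range(n_rows):
        cols = range(n_cols) if row % 2 == 0 else range(n_cols - 1, -1, -1)
        y = min(row * foot_l + foot_l / 2, area_h - foot_l / 2)
        for col in cols:
            x = min(col * foot_w + foot_w / 2, area_w - foot_w / 2)
            points.append((x, y))
    return points
```

For a 10x4 area and a 2x2 footprint this yields two rows of five exposures, the second row flown in reverse.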
Acquiring the ground overhead-view image map by satellite remote sensing includes acquiring the three-dimensional primitives of the oblique three-dimensional monomers by oblique photography. The synchronous acquisition of the airborne LiDAR point cloud map in S1-1 comprises starting a global scan of the at least one specified area at the time point of the unmanned aerial vehicle's first exposure to obtain a global-scan point cloud map, or, when satellite remote sensing is used, scanning the at least one specified area globally in synchrony with the satellite exposure time.
Wherein S1-2 specifically comprises
S1-2-1, calling up the acquired ground overhead-view image map and setting a number of road center points using a node-labeling recurrent neural network (RNN) algorithm;
s1-2-2, establishing a city building network model by adopting an artificial intelligent network and acquiring a city building central point.
Wherein S1-2-1 specifically comprises:
S1-2-1-1, calling up the acquired ground overhead-view image map and establishing a unified rectangular coordinate system E over the at least one specified area. Successive road nodes are generated by the node generator of an RNN algorithm comprising an encoder and a decoder; during generation, each new node is connected to its predecessor by a straight-line segment and fed back into the node generator to produce the next node. The connected segments form road center lines, and cyclically connecting the nodes forms a road network;
S1-2-1-2, widening every line segment in the road network to a preset width W to form road strips, thereby obtaining an urban road network model. W is set from the corresponding road width in the ground overhead-view image map, at 0.5-0.8 times the actual width w of the road section where the road node lies;
S1-2-1-3, for each node from S1-2-1-1, selecting a corresponding point within the widened band of S1-2-1-2, by a bypass method, as its marked node, and defining it as a road center point. The bypass selection is as follows: through the road node, draw the straight line perpendicular to the road center line; it intersects the widening boundary on either side of the node at two intersection points. Select either intersection point, then move along the straight line, within the widened band, a preset distance r away from the selected intersection point, namely the preset distance
[formula image in the original]
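A sketch of this bypass selection, under the assumption that the road direction at the node is known and that r is simply a free offset with 0 < r < width (the exact expression for r appears only as a formula image in the source):

```python
import math

def bypass_center_point(node, direction, width, r):
    """Through a road node, take the perpendicular to the centerline; it
    meets the widening boundary (offset width/2 to each side) at two
    intersection points.  Pick one and step back a preset distance r
    along the perpendicular, staying inside the widened band."""
    dx, dy = direction
    norm = math.hypot(dx, dy)
    nx, ny = -dy / norm, dx / norm          # unit normal to the centerline
    # intersection of the perpendicular with one widening boundary
    ix, iy = node[0] + nx * (width / 2), node[1] + ny * (width / 2)
    # step distance r back toward the centerline
    return (ix - nx * r, iy - ny * r)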
Wherein S1-2-2 specifically comprises:
S1-2-2-1, based on the ground overhead-view image map, using a VGG-16 backbone without additional layers as the CNN main network, extracting the series of feature maps produced by its different convolutional layers, each 1/2-1/10, preferably 1/8, of the input image size; at the same time, constructing a feature pyramid from the different layers of the CNN main network with a feature pyramid network (FPN) and predicting the frames (bounding boxes) of a number of buildings;
s1-2-2-2, for each building in a plurality of buildings, obtaining a local feature map F of the building by using a RoIAlign algorithm on the feature maps obtained by the series of different convolutional layers and the corresponding frame of the building;
s1-2-2-3, forming a polygonal boundary cover M by adopting convolution layer processing on the local feature map F of each building, and forming P predicted vertexes of the boundary cover M by utilizing convolution layer processing;
S1-2-2-4, among the P predicted vertices, selecting the point with the largest or smallest abscissa or ordinate as the first calibration point (if several points share that extreme abscissa or ordinate, taking among them the one with the largest or smallest ordinate). Following the clockwise or anticlockwise path order of the predicted points, the distances from the first calibration point to the remaining P-1 points are computed, and the first calibration point is connected to its farthest point. Then, based on the prediction of the boundary cover M, the adjacent vertex nearest the first calibration point is selected as the second calibration point and connected likewise to its farthest point. The intersection of the two connecting segments is taken as the preliminary center point of each building;
S1-2-2-5, taking the preliminary center point of each building as the center, draw a circle of preset radius r. Starting from the point on the circumference whose radius is parallel to the X axis, search a number of circumferential points clockwise or anticlockwise with a preset angle as the stepping unit, and judge whether all of them lie inside the boundary cover M. If so, the circular domain is defined as completely covering the building; no forced offset is applied, and the preliminary center point is the building center point.
If not, expand r by a preset step distance to enlarge the circular domain, search a new group of circumferential points on the enlarged circle with the preset angle as the stepping unit, and judge whether they all lie inside the boundary cover M; if not, keep enlarging the circular domain and judging further groups of circumferential points. The loop stops as soon as a circumferential point is found inside the boundary cover M; the center point is then forcibly offset onto that circumferential point, and the point after the forced offset is taken as the building center point.
The inside test is simply to judge whether the coordinates of a circumferential point belong to the interior of the boundary cover M.
Preferably, the preset radius r is 1-3 m converted to actual size by the scale of the registered image, the preset angle is 1 arc second to 1 degree, and the preset step distance is 1-9 m.
Preferably, the preset angle used before and after circle-domain expansion, and for searching the circumferential points in each expansion, is either the same throughout or differs between at least two of the expansions.
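A minimal sketch of S1-2-2-5, assuming the boundary cover M is given as a polygon and using a standard ray-casting inside test; the radius, angular step, and step distance below are illustrative stand-ins for the preset values above:

```python
import math

def point_in_polygon(pt, poly):
    """Ray-casting test: True if pt lies inside polygon poly."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def settle_center_point(center, mask, r, step_deg, r_step, max_expand=100):
    """If every circumferential point of radius r around the preliminary
    center lies inside the boundary cover M (mask), keep the center.
    Otherwise expand the circle step by step and snap the center onto the
    first circumferential point found inside M (the 'forced offset')."""
    angles = [math.radians(a) for a in range(0, 360, step_deg)]
    ring = lambda rad: [(center[0] + rad * math.cos(a),
                         center[1] + rad * math.sin(a)) for a in angles]
    if all(point_in_polygon(p, mask) for p in ring(r)):
        return center                      # circle fully covered: keep center
    radius = r
    for _ in range(max_expand):
        radius += r_step
        for p in ring(radius):
            if point_in_polygon(p, mask):
                return p                   # forced offset onto this point
    return center
```

A preliminary center well inside the mask is kept unchanged; one outside the mask is snapped onto the first probe point that lands inside.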
S1-3 specifically comprises the following steps:
s1-3-1, selecting a ground overhead image, a global scanning point cloud picture and a software drawing to determine positioning points;
S1-3-2, stitching the collected ground overhead-view images into a stitched image map in the order of their exposure times along the flight route, then superposing, one by one according to the positioning points, the synchronously acquired airborne LiDAR point cloud maps, the stitched image map, and the software drawings;
s1-3-3, vertically translating and splicing the boundary cover M of each building in the image map to the elevation of each building top surface along the Z axis of an E coordinate system by the current central point according to the elevation information of the building top surfaces in the point cloud image so as to finish registration;
the method comprises the following steps that in S1-3-1, two first positioning points are respectively set on the ground top-view image map and the corresponding point cloud map, the center point of each building is given to the building in the corresponding software drawing, the center point of the building and one vertex of the roof of the building are used as the first positioning points of the software drawing, the coordinates of the first positioning point of each of the ground top-view image map and the corresponding point cloud map under E are the same as the coordinates of one first positioning point of the rest of the buildings under E, and in addition, the center point of each building in the spliced image map and one predicted point, corresponding to one vertex of the roof of the building, on a boundary cover M need to be set as second positioning points;
when the unmanned aerial vehicle is used for aerial photography in S1-3-1, one of the first positioning points in the image map is projected on the XOY plane of E together with the position point of the unmanned aerial vehicle at the corresponding exposure time point, and the other positioning point is selected to be projected on the XOY plane of E together with one vertex of R.
It can be understood that, from the perspective of the image, one of the first positioning point and the second positioning point coincides with the position point where the drone is located at the corresponding exposure time point, and the other of the first positioning point and the second positioning point coincides with a vertex of R. The specific coordinates therefore appear such that their respective projections onto the XOY plane of E coincide.
In S1-3-2, before the collected image maps are stitched in exposure-time order along the flight route, the image portions that exceed the specified area are deleted.
The synchronously acquired airborne LiDAR point cloud map and the stitched image map are superposed one by one according to the first positioning points, specifically:
the two first positioning points in the corresponding point cloud map are made to coincide with the two positioning points of identical coordinates in the ground overhead-view image map; then the center point in the software drawing and the vertex of the building roof are made to coincide with the corresponding second positioning points in the stitched image map.
The registration operation specifically consists of importing the corresponding point cloud map and software drawing into the stitched image map in the geographic imaging software, applying at least one of translation, rotation, and scaling so that the first positioning points of identical coordinates register under the established coordinate system E, and then applying at least one of translation, rotation, and scaling so that the center point of the software drawing and the roof vertex register with the two second positioning points in the stitched image map (namely, the center point of each building and the predicted point on the boundary cover M corresponding to a roof vertex).
In S1-4, the road network model and building boundary covers M established in S1-2-1 and S1-2-2, together with the outline boundaries of building bases, building walls, floors, and windows in the software drawings, serve as the basis for stripping out each primitive element, and are used to establish the geographic entity primitive attribute storage field and the geographic entity basic attribute storage field for each geographic entity within the boundary range.
Some explanations regarding S2, S3, S5
The specific attribute fields established in S2 include the proprietary attributes of the root primitives, namely road and building type, road and building structure, number of building storeys, and road and building use; and the specific attributes of the fifth-facade (roof face) component primitives, including type, structure, top elevation, bottom elevation, and the like.
In S3, to ensure the uniqueness, scientific soundness, and scalability of the coding, the coding rules must be determined with reference to existing national and industry standards. The relation between a geographic entity and its code should be more than a simple one-to-one correspondence.
Therefore, important attribute features of the geographic entity, including spatial position, entity type, non-spatial attribute, intrinsic or extrinsic relationship, and application attribute, can be analyzed in the geographic entity code in S3. The application attributes include: identity marks, use marks, time marks, type marks and the like.
In S5, the geographic entity is associated with the primitive elements through the association table: for example, the ID1 attribute of a primitive is associated with the ID1 field of the association table, and the Entity_GUID attribute of the entity is associated with the Entity_GUID field of the association table. When a geographic entity is to be expressed from primitive data, its geometric data can then be retrieved by the unique identification field.
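A minimal relational sketch of this linkage; the table layout, the fields beyond ID1 and Entity_GUID, and all sample values are hypothetical:

```python
import sqlite3

# Hypothetical schema mirroring the S5 description: the primitive's ID1
# joins the association table's ID1 field, and the entity's Entity_GUID
# joins the association table's Entity_GUID field.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE primitive (ID1 TEXT PRIMARY KEY, geometry TEXT);
CREATE TABLE entity    (Entity_GUID TEXT PRIMARY KEY, name TEXT);
CREATE TABLE assoc     (ID1 TEXT, Entity_GUID TEXT);
""")
con.executemany("INSERT INTO primitive VALUES (?,?)",
                [("P1", "polygon"), ("P2", "polyline")])
con.execute("INSERT INTO entity VALUES ('E1', 'demo building')")
con.executemany("INSERT INTO assoc VALUES (?,?)",
                [("P1", "E1"), ("P2", "E1")])

def geometries_for(guid):
    """Retrieve the geometric data by the unique identification field."""
    rows = con.execute(
        "SELECT p.geometry FROM primitive p "
        "JOIN assoc a ON p.ID1 = a.ID1 "
        "WHERE a.Entity_GUID = ? ORDER BY p.ID1",
        (guid,)).fetchall()
    return [r[0] for r in rows]
```

Querying by Entity_GUID walks the association table to the primitives, which is exactly the lookup direction S6 describes.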
In one embodiment, S5 further includes establishing an associated encryption table that creates a mapping between the geographic entity primitives and the association table, forming the mapping chain geographic entity primitive ← associated encryption table ← geographic entity attribute.
Wherein the associated encryption table is established by:
S5-1: taking the road center point corresponding to the road node of S1-2-1-3 as the center and a preset radius r_i (where i is the road-segment code), a circle is drawn. Starting from the point on the circumference whose line to the center is parallel to the X axis, a number of circumferential points are searched clockwise or counterclockwise with a preset angle as the step unit, and the coordinates of these circumferential points are arranged in the search order or in a first preset order to form the road password string S_ri.
S5-2: for each building preliminary center point formed in S1-2-2-5, taking as the center the circle current at the moment a circumferential point is found inside the boundary cover M, with the corresponding radius R_i (where i is the corresponding building-base primitive code), starting from the point on the circumference whose line to the center is parallel to the X axis, a number of circumferential points are searched clockwise or counterclockwise with a preset angle as the step unit, and their coordinates are arranged in the search order or in a second preset order to form the building password string S_bi.
S5-3: the road password string S_ri and the building password string S_bi are used to encrypt each road segment and each building primitive element respectively, forming the associated encryption table, so that when primitive elements are retrieved through the geographic entity code, the association table can be associated successfully only after the password string of the retrieved target primitive is decoded.
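The circumferential-point search of S5-1 and S5-2 can be sketched as follows; the 1-degree step, the start at the point whose line to the center is parallel to the X axis, and the semicolon-separated coordinate encoding are illustrative assumptions:

```python
import math

def password_string(cx, cy, r, step_deg=1.0, clockwise=True, ndigits=3):
    """Walk the circle of radius r around (cx, cy), starting at the point
    whose line to the center is parallel to the X axis, and concatenate
    the circumferential-point coordinates in search order."""
    n = int(360 / step_deg) + 1           # 361 points including the start
    sign = -1.0 if clockwise else 1.0     # clockwise = decreasing angle
    pts = []
    for k in range(n):
        a = math.radians(sign * k * step_deg)
        pts.append((round(cx + r * math.cos(a), ndigits),
                    round(cy + r * math.sin(a), ndigits)))
    return ";".join(f"{x},{y}" for x, y in pts)

s = password_string(100.0, 200.0, 1.0)   # road node, r = 1 m
print(s[:40], "...", len(s.split(";")), "points")
```

The resulting string is what the associated encryption table would store (or a digest of it); decoding it back into coordinates is the gate the text describes.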
The associated encryption table takes the form of a data table in which the road password strings S_ri and the building password strings S_bi are mapped to the road segments and the building primitive elements respectively.
Advantageous effects
The geographic entity and its basic attributes form an association relation, so that the details of the geographic entity can be comprehensively retrieved, analyzed and studied, providing a basis for big data. Encrypting the set of association tables creates a barrier to retrieval and strengthens the confidentiality of primitive and attribute information.
Drawings
Figure 1 is a schematic diagram of a method for obtaining geographic entity data of a plurality of specified areas in city a according to embodiment 1 of the present invention,
FIG. 2 is a schematic diagram of an RNN recurrent neural network algorithm process and an urban road network generation process of the present invention,
FIG. 3a is a schematic diagram of the local road network within the circle of FIG. 2, showing the widening of the segment represented by the circled road node C, i.e. the direction in which the road center point is selected to one side,
FIG. 3b is a partial enlargement of the vicinity of the circled road node C in FIG. 3a, showing the establishment of the segmented entity business data point represented by node C and the circle of specified radius 1 m required for forming the road center point's password string S_ri,
FIG. 4 is a schematic diagram of the extraction of a multi-layer RNN building boundary cover M based on a CNN backbone network convolution long-short term memory ConvLSTM and the vertex prediction based on the building boundary cover M,
figure 5 is a schematic diagram of the processing method of the non-forced and forced offset movements based on the building center points of building S1 and the concave building S2, and of the determination of the second positioning point,
FIG. 6 is a schematic diagram of the registration process of the software drawing and the buildings S1 and S2 in the mosaic image,
FIG. 7 is a schematic diagram of an associated encryption table formation process,
FIG. 8 is a schematic diagram of the association table forming process and the process of requiring association decoding when querying a certain building primitive element by geoentity encoding in embodiment 4 of the present invention,
FIG. 9 is a flowchart of a geographic entity geometric expression method based on semantic relationships according to the present invention.
Detailed Description
Example 1
The embodiment describes a method for acquiring a ground overhead view image and synchronously acquiring an aerial LIDAR point cloud image by using unmanned aerial vehicle aerial photography or satellite remote sensing photography in S1.
As shown in fig. 1, city A is divided into a plurality of specified areas, including a color-filled rectangular area at the lower right corner and circular, elliptical, pentagonal and two strip-shaped specified areas. For the circle and the ellipse the minimum circumscribed rectangle is obtained directly; for the pentagon, the four sides of a rectangle are moved in parallel toward the shape and stop approaching when an intersection is detected, likewise forming a circumscribed rectangle. The circumscribed rectangles of the circle, ellipse and pentagon in the figure all indicate the forward flight direction of the unmanned aerial vehicle.
In this embodiment, the rectangular area at the lower right corner is taken as an example of the first specified area, with an enlarged view arranged below it. R is the image acquisition range of the unmanned aerial vehicle; the rectangular area consists of 48 R rectangles, and t_0, t_1, ..., t_47 is the sequence of exposure time points. Starting from the initial exposure at t_0 in the direction of the arrow, either the lower and left boundaries of the R rectangle (indicated by a green frame drawn slightly outside the first specified area for clarity) exactly coincide with the lower and left boundaries of the first specified area, or the boundary exceeding the first specified area (at least one of the lower and left boundaries) lies within the blue frame outside it. After flying the width of one R rectangle, the second exposure is made at t_1, and so on until t_11, when the boundary of the rectangular area is approached and the 12th exposure is made; here either the upper and left boundaries of the R rectangle (likewise indicated by a slightly exceeding green frame) coincide with the upper and left boundaries of the first specified area, or the boundary exceeding the first specified area (at least one of the upper and left boundaries) lies within the blue frame outside it. The unmanned aerial vehicle then turns, moves left (relative to its forward flight direction) by the length of one R rectangle following the arrow, continues flying in the reverse direction per the arrows in the figure, and performs exposure and image acquisition with the exposure time points selected in the same way, until the last reverse flight ends and the 48th image is exposed and acquired at t_47, completing image acquisition of the first specified area.
For image acquisition of the circular, elliptical and pentagonal areas, the method can likewise be completed based on the circumscribed rectangles and a specified forward direction.
Meanwhile, at the initial time t_0, the LIDAR starts a global scan of the first specified area and acquires its global scanning point cloud map.
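Under the assumption that the rectangular area is covered by 48 R rectangles (for example 12 exposures per pass over 4 passes) and that exposures are spaced by one R width, the serpentine exposure schedule can be sketched as:

```python
def exposure_centers(origin, r_w, r_l, n_per_pass, n_passes):
    """Centers of the R rectangles along a serpentine (boustrophedon)
    flight: fly n_per_pass widths forward, shift one R length sideways,
    fly back, and so on. The grid geometry is an illustrative assumption."""
    x0, y0 = origin
    centers = []
    for p in range(n_passes):
        # even passes fly forward, odd passes fly in reverse
        ys = range(n_per_pass) if p % 2 == 0 else range(n_per_pass - 1, -1, -1)
        for j in ys:
            centers.append((x0 + p * r_l, y0 + j * r_w))
    return centers

pts = exposure_centers((0.0, 0.0), r_w=50.0, r_l=80.0, n_per_pass=12, n_passes=4)
print(len(pts))  # 48 exposures t_0 .. t_47
```

Each center corresponds to one exposure time point; the alternating range direction reproduces the turn-and-reverse flight described above.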
Example 2
This example explains the method for acquiring the road center points and the building center points. As shown in fig. 2, the ground top-view image map of embodiment 1 is called. Based on it, an RNN recurrent neural network algorithm is used, with a step length l (selected from 1-5 m according to the total length of the road) and a vector direction r defining the attribute vector V. Each starting node and the points v_k (k = 1, ..., K) in the K incident road directions are taken as input points (the K initial attribute vectors correspond to K points plus the corresponding starting point); the K+1 input points and the attribute vector V are input into the encoder, and the decoder generates a new node. Specifically, for each direction of each starting point, v_k corresponds to coordinates (x_k^t, y_k^t) under E, and the attribute vector V corresponds to a coordinate increment (Δx, Δy), where t denotes the sequence number of the current input point (0 for the starting point, 1 for the first new input point). The coordinates and the attribute vector V are input into the encoder, and the decoder emits the new node generated under E, (x_k^(t+1), y_k^(t+1)), where x_k^(t+1) = x_k^t + Δx and y_k^(t+1) = y_k^t + Δy.
Fig. 2 shows an exemplary road network generation process, sampled every 20 node-generation cycles for a total of 100 cycles; straight line segments connect the road nodes to form the road centerlines shown in fig. 3a.
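The node-generation loop can be sketched deterministically by replacing the RNN encoder/decoder with the plain coordinate increment implied by the attribute vector V = (l, r); this fixed increment is an assumption made only to keep the sketch self-contained:

```python
import math

def generate_centerline(start, l, r, cycles):
    """Iteratively emit road nodes: each new node adds the increment
    (l*cos r, l*sin r) implied by the attribute vector V = (l, r).
    The real method lets an RNN decoder produce each node; this sketch
    fixes V so the recurrence x_{t+1} = x_t + dx is visible."""
    x, y = start
    nodes = [(x, y)]
    for _ in range(cycles):
        x += l * math.cos(r)
        y += l * math.sin(r)
        nodes.append((round(x, 6), round(y, 6)))
    return nodes

nodes = generate_centerline((0.0, 0.0), l=5.0, r=0.0, cycles=100)
print(nodes[-1])  # (500.0, 0.0) after 100 cycles of 5 m steps
```

Connecting consecutive nodes with straight segments yields the road centerline; varying r per step would bend it.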
Fig. 3a is a schematic diagram of the widening of the local road network within the circle of fig. 2. Taking road node C as representative, the local road network of fig. 3a is widened to both sides along the road centerlines of the generated road network by a preset width w, forming road width lines of a certain width and thereby the urban road network model; w is 0.8 times the road width defined by the actual road boundary in the ground top-view image map, and the widened boundary is thus formed.
Through road node C, a perpendicular to the road centerline is drawn on both sides of the corresponding road node as shown in fig. 3b; it intersects the widening boundary w at two intersection points. One intersection point is selected, and a point on the perpendicular that lies within the widening range w at a preset distance from the selected intersection point is taken as the road center point. The preset distance is the radius of the circle C that is centered at this point and tangent to the widening boundary; the road center point in fig. 3b is thus obtained.
And then, establishing a city building network model by adopting an artificial intelligent network and acquiring a city building central point. The method specifically comprises the following steps:
as shown in fig. 4, based on the ground top-view image map called in the previous step, a VGG-16 algorithm without added layers is used as the CNN backbone network to extract a series of feature maps obtained from different convolutional layers, each 1/8 the size of the input image;
meanwhile, a characteristic pyramid is constructed by using different layers of a CNN main network through an image pyramid algorithm FPN, and the frames of a plurality of buildings are predicted,
for each building in the plurality of buildings, obtaining a local feature map F of the building by using a RoIAlign algorithm on the feature maps obtained by the series of different convolutional layers and the corresponding frame of the building;
and for the local feature map F of each building, convolutional layer processing is used to form the polygonal boundary cover M, and further convolutional layer processing forms the 5 predicted vertices a, b, c, D1 and D2 of the boundary cover M.
As shown in fig. 5, with the X-axis direction of coordinate system E as reference, the point D1 with the largest abscissa among the 5 predicted vertices is selected as the first calibration point. Distances between the calibration point and the remaining 4 points are computed in the order of the counterclockwise path connecting the predicted points, and the calibration point is connected to the point b farthest from it. The adjacent vertex D2 predicted by the boundary cover M that is closest to the first calibration point is correspondingly selected as the second calibration point and connected in the same way to the point a farthest from it. The intersection point x of the two connecting line segments (shown enlarged at the bottom of S1) is taken as the center point of the building.
For a building with a concave roof there are 8 predicted points D3, e, f, g, h, i, j and D4. As shown in fig. 5, the point D3 with the largest abscissa among the 8 predicted vertices is selected as the first calibration point; distances to the remaining 7 points are computed in the order of the counterclockwise path connecting the predicted points, and the calibration point is connected to the farthest point f. The adjacent vertex D4 predicted by the boundary cover M of the concave-roof building that is closest to the first calibration point is correspondingly selected as the second calibration point and connected in the same way to the point e farthest from it, giving the intersection point x' of the two connecting line segments (shown enlarged at the bottom of S2) as the center point of the building.
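A sketch of the calibration-point construction, assuming the simple "largest abscissa, nearest neighbor, farthest vertex" reading of the description (tie-breaking and the counterclockwise path ordering are simplified):

```python
def seg_intersection(p1, p2, p3, p4):
    """Intersection of lines p1-p2 and p3-p4 (assumed non-parallel)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def center_point(vertices):
    """First calibration point: vertex with largest abscissa; second: its
    nearest predicted neighbor. Each is joined to the vertex farthest from
    it, and the two chords are intersected to give the center point."""
    dist2 = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    d1 = max(vertices, key=lambda v: v[0])                       # first calibration point
    d2 = min((v for v in vertices if v != d1), key=lambda v: dist2(v, d1))
    far1 = max(vertices, key=lambda v: dist2(v, d1))             # farthest from d1
    far2 = max(vertices, key=lambda v: dist2(v, d2))             # farthest from d2
    return seg_intersection(d1, far1, d2, far2)

# Axis-aligned square: both chords are diagonals, so they meet in the middle.
print(center_point([(0, 0), (4, 0), (4, 4), (0, 4)]))
```

For convex footprints the two chords behave like approximate diagonals; for strongly concave footprints the resulting point can fall outside the shape, which is exactly the case the forced-offset step below handles.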
As shown in fig. 5, taking the building S1 with 5 predicted points as an example, a circle is drawn with the center point x obtained above as the center and a preset radius r = 1 m. Starting from the point x_1 on the circumference whose line to the center x is parallel to the X axis, a number of circumferential points are searched counterclockwise with a preset angle of 1 degree as the step unit (one search point x_n is shown as an example), and it is judged whether all of the circumferential points lie inside the boundary cover M (shown in fig. 4); if so, the circular area is defined as lying completely within the building. Evidently the circle points in S1 all lie within the boundary cover M, so no forced offset movement is applied to the center point x.
For the concave building S2, the preset radius starts from r = 3 m. Starting from the point x_2 on the circumference whose line to the center x' is parallel to the X axis (shown in the lower enlarged view of S2 in fig. 5), circumferential points are searched counterclockwise with the preset 1-degree step, and it is determined that none of them lies inside the boundary cover M (shown in fig. 6). The radius r is then expanded, here by a preset step of 9 m as an example, to enlarge the circle, and a new group of circumferential points is searched on the enlarged circumference around x' with the same 1-degree step (one search point y_n is shown as an example). When the point y on the enlarged circumference whose line to the center x' is parallel to the X axis is judged to lie inside the boundary cover M (shown in fig. 6), the center point x' is forcibly shifted to coincide with the circumferential point y.
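The forced-offset search (test circumferential points against the boundary cover, expand the radius when none falls inside) can be sketched with a ray-casting point-in-polygon test; the U-shaped footprint and the 3 m / 9 m / 1-degree parameters mirror the example but are otherwise assumptions:

```python
import math

def point_in_polygon(pt, poly):
    """Ray-casting point-in-polygon test (even-odd rule)."""
    x, y = pt
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def find_anchor(center, poly, r=3.0, step=9.0, angle_deg=1.0, r_max=100.0):
    """Search circumferential points for one inside the boundary cover;
    if none is found, expand the radius by the preset step and retry.
    Returns (radius, point) or None if r_max is exceeded."""
    cx, cy = center
    while r <= r_max:
        for k in range(int(360 / angle_deg)):
            a = math.radians(k * angle_deg)
            p = (cx + r * math.cos(a), cy + r * math.sin(a))
            if point_in_polygon(p, poly):
                return r, p
        r += step
    return None

# Concave (U-shaped) footprint whose center-point region lies in the notch.
u_shape = [(0, 0), (30, 0), (30, 30), (20, 30), (20, 10), (10, 10), (10, 30), (0, 30)]
print(find_anchor((15.0, 20.0), u_shape))
```

At r = 3 m every circumferential point stays inside the notch (outside the polygon), so the radius expands to 12 m, where a point on the right arm of the U is found and becomes the shifted center.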
Example 3
This embodiment explains a specific implementation in which the images of the specified area aerially photographed by the unmanned aerial vehicle are registered with the LIDAR point cloud map and the software drawing to obtain a registered image of the specified area, establish the model three-dimensional monomer set, and determine the primitive elements.
Still taking the first defined area of example 1 as an example, as shown in fig. 1, a spatial rectangular coordinate system E of the defined area of city a is established, and the X-axis and the Y-axis are respectively parallel to the adjacent rectangular sides of the first defined area.
The position of the unmanned aerial vehicle at the exposure time t_0 of the first image and the lower-right vertex of that image are the first fixed sites; the two points in the global scanning point cloud map having the same coordinates under E as the first fixed sites in the image are the first fixed sites of the global scanning point cloud map.
The 48 collected images are stitched along the flight path in the exposure time order t_0, t_1, ..., t_47 to obtain a stitched image map.
The global scanning point cloud map and the software drawings of buildings S1 and S2 in fig. 6 are imported into the stitched image in the geographic image software; by translating, rotating and scaling so that the first fixed sites of the stitched image and of the global scanning point cloud map, which share the same coordinates under the established E, are made to coincide, the registration of the stitched image and the global scanning point cloud map is completed.
Then, referring to fig. 6, the corresponding building center points of embodiment 2 are assigned to the building S1 and the concave building S2 in the software drawing, and one vertex of the building roof is taken as the first positioning point of the software drawing; referring to fig. 5, the center points x and y of the building S1 and the concave building S2 in the stitched image from example 2, together with one predicted roof vertex each (D2 and D4), are defined as the second positioning points. As shown in fig. 6, through translation, rotation and scaling, the center point and the roof vertex of the software drawing are made to coincide with the center points of buildings S1 and S2 in the stitched image and the second positioning points respectively.
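The translate-rotate-scale registration that maps the drawing's two reference points (center point and roof vertex) onto the stitched image's corresponding points can be sketched as a two-point similarity transform; the complex-number formulation and the sample coordinates are assumptions, since the text does not specify a solver:

```python
def similarity_from_two_points(src, dst):
    """Solve z -> a*z + b (rotation+scale a, translation b) so that the
    two source points map exactly onto the two destination points."""
    s1, s2 = (complex(*p) for p in src)
    d1, d2 = (complex(*p) for p in dst)
    a = (d2 - d1) / (s2 - s1)      # rotation and uniform scale
    b = d1 - a * s1                # translation
    return lambda p: ((a * complex(*p) + b).real, (a * complex(*p) + b).imag)

# Map the drawing's (center point, roof vertex) onto the stitched image's
# (center point, second positioning point); coordinates are made up.
f = similarity_from_two_points([(0, 0), (1, 0)], [(10, 10), (10, 12)])
print(f((0, 0)), f((1, 0)))
```

Two point pairs fully determine a 2D similarity (4 unknowns, 4 equations), which is why one center point plus one roof vertex suffice for the coincidence step.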
Finally, the primitive elements used to generate the different spatial-form data of the geographic entity are determined: road segments and building roofs are stripped from the top-view image layer; building bases, building walls, floors and windows are stripped from the drawing layer; and the elevation information of the center points determined in embodiment 2 is obtained from the point cloud layer. Roads and building bases are thereby determined as root primitives, building roofs and walls as main primitives, and floors and windows as component primitives, together forming the two-dimensional primitives; the three-dimensional primitives formed by the model three-dimensional monomers and the oblique three-dimensional monomers are also included.
The road network model and building boundary cover M established in embodiment 2, together with the outline boundaries of the building base, building walls, floors and windows in the software drawing, serve as the basis for stripping each primitive element and for establishing, within the boundary range of each geographic entity, the geographic entity primitive attribute storage field and the geographic entity basic attribute storage field respectively.
Example 4
This embodiment describes a method for creating the associated encryption table.
First, as shown in fig. 3b, a circle is drawn with the road center point corresponding to road node C as the center and a preset radius r_C = 1 m. Referring to fig. 7, starting from the point c_1 on the circumference whose line to the center is parallel to the X axis, 361 circumferential points are searched clockwise with a preset angle of 1 degree as the step unit (one of them, c_k, is shown as an example), and the coordinates of the 361 circumferential points are arranged in the search order to form the road password string S_rC.
As shown in fig. 7, a circle x is drawn with the center point x of building S1 as the center and a preset radius R_i = 1 m, where i codes each base primitive of building S1; starting from the point on the circumference whose line to the center is parallel to the X axis, a number of circumferential points are searched counterclockwise with a preset angle of 1 degree as the step unit. According to example 3 the circle x lies inside the boundary cover M of building S1, so the coordinates of the 361 circumferential points are arranged in the search order to form the building password string S_b1.
For the center point x' of the concave building S2, the circle (radius 12 m) current at the moment the circumferential point y was found inside the boundary cover M is used, with the corresponding radius R_i as in example 2; starting from the point y on the circumference whose line to the center x' is parallel to the X axis, 361 circumferential points are searched counterclockwise with a preset angle of 1 degree as the step unit, and the coordinates of the circumferential points are arranged in the search order to form the building password string S_b2.
The road password string S_rC and the building password strings S_b1 and S_b2 are used to encrypt the road segment and the building primitive elements respectively, forming the associated encryption table, so that when the association table retrieves primitive elements through the geographic entity code, association succeeds only after the password string of the retrieved target primitive is decoded.
As shown in fig. 8, a building includes a three-dimensional graph of its three-dimensional spatial form, in which the ID1 attribute of the primitive represented by the point cloud map or the oblique three-dimensional monomer is associated with the ID1 field of the association table, and two plane graphs of its two-dimensional spatial form, in which the ID2 attribute of the building's main-body primitive in the software drawing of embodiment 3 is associated with the ID2 field of the association table and the ID3 attribute of the building's root primitive is associated with the ID3 field of the association table.
The Entity_GUID attribute corresponding to the entity attribute structure is associated with the Entity_GUID field of the association table, and the primitive attribute fields are associated with the entity attribute structure fields, forming the relation between spatial form and attributes and producing the association table. When the geographic entity code in the attribute field is used to query the building's primitive attribute information through the produced association table, the association request triggers the associated encryption table to require decryption; without knowledge of the building's password string, the primitive attribute information in the primitive attribute fields cannot be obtained by decryption.
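The decryption gate described above, where association succeeds only if the target primitive's password string is decoded, can be sketched as follows; the hash comparison stands in for whatever encryption the implementation actually uses, and all identifiers are made up:

```python
import hashlib

def _digest(s):
    """Stand-in for the encryption of a password string."""
    return hashlib.sha256(s.encode()).hexdigest()

# Associated encryption table: primitive ID -> digest of its password string.
encryption_table = {"P-001": _digest("1.0,0.0;0.999,-0.017")}
association_table = [{"ID1": "P-001", "Entity_GUID": "GUID-42"}]
primitive_attrs = {"P-001": {"type": "building base", "storeys": 6}}

def query(entity_guid, claimed_password_string):
    """Return primitive attributes only if the supplied password string
    matches the stored encrypted value for each matched primitive."""
    out = []
    for row in association_table:
        if row["Entity_GUID"] != entity_guid:
            continue
        pid = row["ID1"]
        if _digest(claimed_password_string) != encryption_table[pid]:
            raise PermissionError("password string decoding failed")
        out.append(primitive_attrs[pid])
    return out

print(query("GUID-42", "1.0,0.0;0.999,-0.017"))
```

A caller holding the entity code but not the circumferential-point password string is stopped at the gate, which is the retrieval barrier described in the advantageous effects.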
Summarizing, the geographic entity geometric expression method based on semantic relationships, as shown in fig. 9, comprises the following steps: S1, determining primitive elements for generating different spatial-form data of a geographic entity in geographic mapping data;
s2, establishing a geographic entity primitive attribute storage field, which comprises primitive basic attributes, such as the primitive identification code, primitive code and primitive name, and primitive specific attributes, namely specific attribute fields defined according to the expression content and characteristics of different geographic entity primitives;
s3, establishing a basic attribute storage field of the geographic entity, wherein the basic attribute can uniquely identify and distinguish the geographic entity, and the basic attribute comprises geographic entity codes, classification codes, names, generation time, change time and death time;
s4, storing corresponding field attributes;
s5, establishing an association table of the geographic entity basic attribute and the primitive element, wherein the association table comprises an association table between a geographic entity coding field in the geographic entity basic attribute and a primitive identification code of the corresponding primitive element;
and S6, retrieving the association table according to the geographic entity codes to obtain different space form primitive element identifiers associated with the target geographic entity, thereby realizing the geometric expression of the geographic entity.

Claims (14)

1. The geographic entity geometric expression method based on the semantic relation is characterized by comprising the following steps of:
s1, determining primitive elements for generating different spatial form data of a geographic entity in geographic mapping data;
s2, establishing a geographic entity primitive attribute storage field, which comprises primitive basic attributes, such as the primitive identification code, primitive code and primitive name, and primitive specific attributes, namely specific attribute fields defined according to the expression content and characteristics of different geographic entity primitives;
s3, establishing a basic attribute storage field of the geographic entity, wherein the basic attribute can uniquely identify and distinguish the geographic entity, and the basic attribute comprises geographic entity codes, classification codes, names, generation time, change time and death time;
s4, storing corresponding field attributes;
s5, establishing an association table of the geographic entity basic attribute and the primitive element, wherein the association table comprises an association table between a geographic entity coding field in the geographic entity basic attribute and a primitive identification code of the corresponding primitive element;
and S6, retrieving the association table according to the geographic entity codes to obtain different space form primitive element identifiers associated with the target geographic entity, thereby realizing the geometric expression of the geographic entity.
2. The method of claim 1 further comprising at S5 establishing an associated encryption table that establishes a mapping between the geographic entity primitive and the associated table, forming a mapping of the geographic entity primitive ← associated encryption table ← associated table ← geographic entity attribute.
3. The method according to claim 1, wherein S1 specifically comprises:
s1-1, acquiring a ground overhead view image map by using unmanned aerial vehicle aerial photography or satellite remote sensing photography, synchronously acquiring an aerial LIDAR cloud point map, and then acquiring a ground building structure software drawing;
s1-2, constructing an artificial intelligence network model of a ground road and a building based on a ground overhead image map, and determining central points of a road part and a building part;
s1-3, carrying out image registration on a ground overhead image map, an aviation LIDAR point cloud map and a software drawing to form a model three-dimensional monomer set of three image layers of a overhead image layer, a point cloud image layer and a drawing layer;
s1-4, determining primitive elements for generating different spatial form data of the geographic entity, wherein the primitive elements comprise road sections and building roofs, the building bases, building walls, floors and windows are stripped from an overhead image map, the elevation information of the central point determined in S1-2 is obtained from a point cloud map layer, so that the roads and the building bases are determined as root primitives, the building roofs and the walls are determined as main primitives, the floors and the windows are determined as component primitives, two-dimensional primitives are formed together, and three-dimensional primitives formed by a model three-dimensional monomer and an inclined three-dimensional monomer are further included.
4. The method of claim 3, wherein the method of using unmanned aerial vehicle aerial photography to capture the ground top-view image map in S1-1 is as follows:
s1-1-1, selecting at least one specified area, and finding out a circumscribed rectangle of the at least one specified area;
s1-1-2, setting a flight route of the unmanned aerial vehicle and an exposure time point of an aerial photography device of the unmanned aerial vehicle based on the external rectangle;
s1-1-3, flying the unmanned aerial vehicle according to the flying route, and simultaneously carrying out image acquisition according to the exposure time point to obtain a plurality of ground overlook image maps.
5. The method according to claim 4, wherein the exposure time points are set as follows: the image acquisition range of the unmanned aerial vehicle on the flight route is set as a rectangular region R; after the current image is acquired, the moment when the unmanned aerial vehicle has flown the width of R in the flight direction is selected as the next exposure time point; when the boundary of R coincides with or exceeds the boundary of the specified area, the unmanned aerial vehicle turns, moves left by the length of R, flies in the reverse direction and continues image acquisition with the exposure time points selected as in the forward flight; when the boundary of R again coincides with or exceeds the boundary of the specified area, the unmanned aerial vehicle turns again, moves right by the length of R, and continues acquisition in the forward direction with the exposure time point selection unchanged, until images of the whole specified area are acquired;
acquiring a ground plan image map by using satellite remote sensing shooting comprises acquiring three-dimensional primitives of an inclined three-dimensional monomer by using an oblique shooting technology.
6. The method according to claim 4 or 5, wherein the synchronized acquiring of the aerial LIDAR cloud points in S1-1 comprises starting a global scan of at least one defined area at the time of the first exposure of the drone, acquiring a global scan cloud point, or performing a global scan of the at least one defined area synchronized with the satellite remote sensing exposure time when shooting with satellite remote sensing.
7. The method of claim 3, wherein S1-2 specifically comprises
S1-2-1, calling the acquired ground overhead view image map, and realizing the setting of a plurality of road center points by adopting a node-labeled RNN recurrent neural network algorithm;
s1-2-2, establishing a city building network model by adopting an artificial intelligent network and acquiring a city building central point.
8. The method according to claim 7, wherein S1-2-1 specifically comprises:
s1-2-1-1, calling the acquired ground plane overhead image map, establishing a unified rectangular coordinate system E of the at least one specified area, generating road continuous nodes by using a node generator comprising an encoder and a decoder through an RNN (radio network) recurrent neural network algorithm, connecting the generated front and rear nodes in the generation process, inputting the new generated nodes into the node generator to continuously generate new nodes, continuing to connect the generated new nodes by straight line segments to form a road center line, and circulating in the same way to connect the road center line into a road network;
S1-2-1-2, widening all line segments in the road network to a preset width W to form road width lines, thereby obtaining an urban road network model, wherein W is set according to the corresponding road width in the ground overhead-view image map and is 0.5-0.8 times the actual road width value w of the road section where the road node is located;
S1-2-1-3, for each node in S1-2-1-1, selecting a corresponding point within the widening range W of S1-2-1-2 as the corresponding labelled node in a bypass manner, and defining that point as a road center point, wherein the bypass selection method specifically comprises: drawing, through the corresponding road node, a straight line perpendicular to the road center line so that it intersects the boundaries of the widening W on both sides at two intersection points; selecting either intersection point; and taking, on the straight line, a point at a preset distance from the selected intersection point that still lies within the widening range, namely the preset distance is
[formula image FDA0003632299540000021 — not reproduced]
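The bypass selection of S1-2-1-3 can be sketched as follows. Since the preset-distance formula above is not reproduced, `preset_dist` is a hypothetical stand-in, and the widening boundary is taken here at half the widening to either side of the center line:

```python
import math

def offset_center_point(node, direction, w, preset_dist):
    """Bypass selection of a road centre point (S1-2-1-3): from the road
    node, follow the perpendicular to the centre-line direction until it
    meets the widening boundary (assumed at w/2 to either side), then
    step back toward the centre line by preset_dist so the chosen point
    stays inside the widened corridor.  preset_dist stands in for the
    unreproduced formula."""
    dx, dy = direction
    norm = math.hypot(dx, dy)
    nx, ny = -dy / norm, dx / norm                    # unit normal to the centre line
    bx = node[0] + nx * w / 2.0                       # boundary intersection point
    by = node[1] + ny * w / 2.0
    d = min(preset_dist, w / 2.0)                     # clamp so we stay in the corridor
    return (bx - nx * d, by - ny * d)
```

For a node at the origin on an X-aligned center line with widening 4, stepping back 1 from the boundary places the road center point at (0, 1).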
Wherein S1-2-2 specifically comprises:
S1-2-2-1, based on the ground overhead-view image map, extracting a series of feature maps from different convolutional layers using a VGG-16 network without additional layers as the CNN backbone, the feature maps being 1/2-1/10 of the input image size;
meanwhile, constructing a feature pyramid from different layers of the CNN backbone by the feature pyramid network (FPN) algorithm, and predicting the bounding frames of a plurality of buildings;
S1-2-2-2, for each of the plurality of buildings, obtaining a local feature map F of the building by applying the RoIAlign algorithm to the series of feature maps from the different convolutional layers and the building's corresponding bounding frame;
S1-2-2-3, processing the local feature map F of each building with convolutional layers to form a polygonal boundary cover M, and processing it with further convolutional layers to form P predicted vertices of the boundary cover M;
S1-2-2-4, selecting, among the P predicted vertices, the point with the largest or smallest abscissa or ordinate as a first calibration point (if several points share that extreme abscissa or ordinate, selecting among them the point with the largest or smallest ordinate); computing the distances from the first calibration point to the remaining P-1 points in the clockwise or anticlockwise order in which the predicted points are connected, and connecting the first calibration point to the point at the longest distance from it; correspondingly selecting, from the vertices predicted on the boundary cover M, the adjacent vertex closest to the first calibration point as a second calibration point, and connecting it in the same way to the point at the longest distance from it; and taking the intersection point of the two connecting line segments as the preliminary center point of each building;
S1-2-2-5, drawing a circle with the preliminary center point of each building as its center and a preset radius r; starting from the point where the circle meets the line through the center parallel to the X axis, searching a plurality of circumferential points on the circle clockwise or anticlockwise with a preset angle as the step unit, and judging whether the plurality of circumferential points all lie within the boundary cover M; if so, the circular region is defined as lying completely within the building, no forced offset is applied to the preliminary center point, and the preliminary center point is taken as the building center point;
if not, expanding r by a preset step pitch to enlarge the circular region, searching a new group of circumferential points on the enlarged circle with the preset angle as the step unit, and judging whether the new group of circumferential points all lie within the boundary cover M; if not, cyclically enlarging the circular region and testing further groups of circumferential points until one circumferential point is found to lie within the boundary cover M, then stopping the cycle, forcibly offsetting the center point onto that circumferential point, and taking the forcibly offset point as the building center point;
the method for judging whether a point lies inside is: determining whether the coordinates of the circumferential point belong to the coordinates inside the boundary cover M.
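The refinement of S1-2-2-5 can be sketched in Python. This is a plausible reading of the claim, not the patented implementation: the boundary cover M is represented as a polygon vertex list, the inside test is realised by ray casting, and the all-inside/any-inside logic is a simplification of the cyclic expansion described above:

```python
import math

def point_in_polygon(pt, poly):
    """Ray-casting test: is pt inside the polygon given as a vertex list?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            xi = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xi:
                inside = not inside
    return inside

def building_center(prelim, cover, r0=1.0, step_deg=10, pitch=1.0, r_max=50.0):
    """S1-2-2-5 sketch: sample circumferential points around the preliminary
    centre at radius r.  If all samples fall inside the boundary cover the
    centre is kept; otherwise the radius grows by `pitch` until a sample
    inside the cover is found, and the centre is forced onto that point."""
    r = r0
    while r <= r_max:
        pts = [(prelim[0] + r * math.cos(math.radians(a)),
                prelim[1] + r * math.sin(math.radians(a)))
               for a in range(0, 360, step_deg)]
        if all(point_in_polygon(p, cover) for p in pts):
            return prelim                  # circle fully inside: keep the centre
        for p in pts:
            if point_in_polygon(p, cover):
                return p                   # force-offset onto this circumferential point
        r += pitch
    return prelim                          # fallback: no sample ever landed inside
```

For a preliminary center well inside a square cover, the first circle already lies entirely inside and the center is kept unchanged.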
9. The method according to claim 8, wherein the preset radius r is 1-3 m scaled to actual size according to the scale of the registered image, the preset angle is 1 arc second to 1 degree, and the preset step pitch is 1-9 m.
10. The method according to claim 8 or 9, wherein the preset angles used for searching the plurality of circumferential points before and after each enlargement of the circular region are the same or at least partly different.
11. The method according to claim 3, wherein S1-3 specifically comprises the following steps:
S1-3-1, selecting positioning points on the ground overhead-view image map, the global-scan point cloud map and the software drawing;
S1-3-2, stitching the plurality of acquired ground overhead-view image maps according to the exposure time sequence along the flight route to obtain a stitched image map, and superposing the synchronously acquired aerial LIDAR point cloud maps, the stitched image map and the software drawings one by one in sequence according to the positioning points;
S1-3-3, according to the elevation information of the building top surfaces in the point cloud map, vertically translating the boundary cover M of each building in the image map, by its current center point, along the Z axis of the coordinate system E to the elevation of the corresponding building top surface, thereby completing registration;
the method comprises the following steps that in S1-3-1, two first positioning points are respectively set on the ground top-view image map and the corresponding point cloud map, the center point of each building is given to the building in the corresponding software drawing, the center point of the building and one vertex of the roof of the building are used as the first positioning points of the software drawing, the coordinates of the first positioning point of each of the ground top-view image map and the corresponding point cloud map under E are the same as the coordinates of one first positioning point of the rest of the buildings under E, and in addition, the center point of each building in the spliced image map and one predicted point, corresponding to one vertex of the roof of the building, on a boundary cover M need to be set as second positioning points;
s1-3-1, when the unmanned aerial vehicle is used for aerial photography, one of the first positioning points in the image map is projected on the XOY plane of E with the position point where the unmanned aerial vehicle is located at the corresponding exposure time point, while the other one is selected to be projected on the XOY plane of E with one vertex of R,
in S1-3-2, deleting image parts exceeding a specified area before splicing the plurality of acquired image maps according to the upper exposure time sequence of the flight route;
superposing the synchronously acquired aerial LIDAR point cloud maps and the stitched image map one by one in sequence according to the first positioning points specifically comprises:
superposing the two first positioning points in each corresponding point cloud map with the two positioning points of the same coordinates in the ground overhead-view image map, and superposing the center point and the building-roof vertex of the software drawing with the corresponding second positioning points in the stitched image map;
the superposition operation specifically comprises importing the corresponding point cloud map and software drawing into the stitched image map in the geographic image software, performing at least one of translation, rotation and scaling so that the first positioning points of the same coordinates under the established E coincide, and performing at least one of translation, rotation and scaling so that the center point and the building-roof vertex of the software drawing coincide with the two second positioning points in the stitched image map respectively.
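The translate/rotate/scale superposition on a pair of positioning points amounts to solving a 2-D similarity transform. A minimal sketch follows; the complex-number parameterisation is an illustrative formulation, not taken from the patent:

```python
def similarity_from_two_points(src, dst):
    """Solve the 2-D similarity transform (rotation + uniform scale +
    translation) that carries the two positioning points `src` onto `dst`,
    as in the superposition step of S1-3.  The plane is identified with
    the complex numbers and the transform written as z -> a*z + b."""
    s0, s1 = complex(*src[0]), complex(*src[1])
    d0, d1 = complex(*dst[0]), complex(*dst[1])
    a = (d1 - d0) / (s1 - s0)     # a encodes both rotation angle and scale factor
    b = d0 - a * s0               # b is the translation
    def apply(p):
        z = a * complex(*p) + b
        return (z.real, z.imag)
    return apply
```

Mapping (0,0)->(2,3) and (1,0)->(2,5) produces a transform that rotates by 90 degrees, scales by 2 and translates, and every other point follows the same rigid rule.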
12. The method according to claim 3 or 11, wherein the outline boundaries of the road network model and of the building boundary covers M established in S1-2-1 and S1-2-2, and the building base, building wall, floor and window boundaries in the software drawing in S1-4, are used as the basis for stripping out the individual primitive elements, and are used to establish, within the respective boundary ranges, a geographic entity primitive attribute storage field and a geographic entity basic attribute storage field inside each geographic entity.
13. The method according to claim 12, wherein the specific attribute fields established in S2 comprise road and building primitive specific attributes, including road and building type, road and building structure, number of building floors, and road and building purpose; the specific attributes of the fifth-facade component primitives comprise type, structure, top elevation and bottom elevation;
in S3, the important attribute characteristics of the geographic entity, including spatial position, entity type, non-spatial attributes, intrinsic or extrinsic relationships and application attributes, can be parsed from the geographic entity code, the application attributes including identity marks, use marks, time marks, type marks and the like;
in S5, the geographic entity is associated with the primitive elements through the association table, and when the geographic entity expresses the primitive data, the geometric data are retrieved according to the unique identification field information.
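The association-table retrieval described above (entity code resolving, via a unique identification field, to stored geometric data) can be sketched as follows; the class and field names are illustrative, not from the patent:

```python
class AssociationTable:
    """Minimal sketch of the S5 association between geographic entity codes
    and primitive elements: each primitive is stored under a unique
    identification field, and an entity code links to the unique ids of
    the primitives that make up that entity."""
    def __init__(self):
        self._by_uid = {}          # unique identification field -> geometry
        self._entity_links = {}    # geographic entity code -> list of unique ids

    def add_primitive(self, entity_code, uid, geometry):
        self._by_uid[uid] = geometry
        self._entity_links.setdefault(entity_code, []).append(uid)

    def geometry_for(self, entity_code):
        """Retrieve the geometric data of all primitives of an entity."""
        return [self._by_uid[u] for u in self._entity_links.get(entity_code, [])]
```

Retrieval for an unknown entity code simply yields an empty list, so the table degrades gracefully when a code has no registered primitives.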
14. The method according to claim 8 or 9, wherein establishing the associated encryption table comprises:
S5-1, drawing a circle with the road center point corresponding to the road node of S1-2-1-3 as its center and a preset radius r_i, where i is the road segment code; starting from the point where the circle meets the line through the center parallel to the X axis, searching a plurality of circumferential points on the circle clockwise or anticlockwise with a preset angle as the step unit, and arranging the coordinates of the plurality of circumferential points in the search order or in a first preset order to form a road password string
[formula image FDA0003632299540000031 — not reproduced];
S5-2, for the preliminary center point of each building formed in S1-2-2-5, drawing a circle with the center used when a circumferential point was found inside the boundary cover M as its center and the corresponding radius R_i, where i is the corresponding building base primitive code; starting from the point where the circle meets the line through the center parallel to the X axis, searching a plurality of circumferential points on the circle clockwise or anticlockwise with a preset angle as the step unit, and arranging the coordinates of the plurality of circumferential points in the search order or in a second preset order to form a building password string
[formula image FDA0003632299540000032 — not reproduced];
S5-3, encrypting each road segment and each building primitive element with the road password string [formula image FDA0003632299540000033 — not reproduced] and the building password string [formula image FDA0003632299540000034 — not reproduced] respectively to form the associated encryption table, so that when the association table retrieves primitive elements through the geographic entity code, the retrieved target primitive can be successfully associated only after its password string has been decoded;
the associated encryption table takes the form of a data table in which the road password strings [formula image FDA0003632299540000035 — not reproduced] and the building password strings [formula image FDA0003632299540000036 — not reproduced] are mapped to the road segments and the building primitive elements respectively.
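Since the password-string formulas are not reproduced in this text, the following sketch only illustrates the stated construction — circumferential-point coordinates concatenated in search order — with an assumed string encoding:

```python
import math

def password_string(center, radius, step_deg=30, ndigits=3):
    """Claim 14 sketch: sample circumferential points around a centre,
    starting where the circle meets the +X direction through the centre,
    and join their rounded coordinates in search order.  The separator
    characters and rounding are assumptions; the patent leaves the
    encoding to an unreproduced formula."""
    parts = []
    for a in range(0, 360, step_deg):
        x = center[0] + radius * math.cos(math.radians(a))
        y = center[1] + radius * math.sin(math.radians(a))
        parts.append(f"{round(x, ndigits)},{round(y, ndigits)}")
    return ";".join(parts)

def matches(center, radius, candidate, step_deg=30, ndigits=3):
    """Decode check: a retrieved target primitive associates successfully
    only if its stored string reproduces the circumferential-point
    sequence of the correct circle."""
    return password_string(center, radius, step_deg, ndigits) == candidate
```

A primitive encrypted with radius 1 fails the decode check against any other radius, which is the property the associated encryption table relies on.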
CN202210493029.2A 2022-05-07 2022-05-07 Geographic entity geometric expression method based on semantic relation Active CN115438133B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210493029.2A CN115438133B (en) 2022-05-07 2022-05-07 Geographic entity geometric expression method based on semantic relation


Publications (2)

Publication Number Publication Date
CN115438133A true CN115438133A (en) 2022-12-06
CN115438133B CN115438133B (en) 2023-05-30


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310115A (en) * 2023-03-17 2023-06-23 合肥泰瑞数创科技有限公司 Method and system for constructing building three-dimensional model based on laser point cloud
CN116450765A (en) * 2023-06-16 2023-07-18 山东省国土测绘院 Polymorphic geographic entity coding consistency processing method and system
CN117290457A (en) * 2023-11-22 2023-12-26 湖南省第一测绘院 Multi-mode data management system for geographic entity, database and time sequence management method

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130246328A1 (en) * 2010-06-22 2013-09-19 Peter Joseph Sweeney Methods and devices for customizing knowledge representation systems
CN105117494A (en) * 2015-09-23 2015-12-02 中国搜索信息科技股份有限公司 Spatial entity mapping method in fuzzy linguistic environment
CN105279243A (en) * 2015-09-28 2016-01-27 张新长 Spatial data conversion method and system
WO2017106863A1 (en) * 2015-12-18 2017-06-22 Drexel University Identifying and quantifying architectural debt and decoupling level; a metric for architectural maintenance complexity
CN108022273A (en) * 2016-10-28 2018-05-11 中国测绘科学研究院 A kind of figure number Detachable drafting method and system
CN109345450A (en) * 2018-08-20 2019-02-15 江苏省测绘工程院 A kind of image mosaic method using geographical entity element information
CN112579712A (en) * 2021-01-25 2021-03-30 武大吉奥信息技术有限公司 Method and equipment for constructing polymorphic geographic entity data model and storage equipment
CN113194015A (en) * 2021-04-29 2021-07-30 洪璐 Internet of things intelligent household equipment safety control method and system
WO2021197341A1 (en) * 2020-04-03 2021-10-07 速度时空信息科技股份有限公司 Monocular image-based method for updating road signs and markings
CN113868363A (en) * 2021-12-02 2021-12-31 北京山维科技股份有限公司 Geographic entity house primitive data processing method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yao Chi: "Research on a multi-scale areal geographic entity matching method based on grid indexing and geometric features" *
Guo Gongju et al.: "Research and practice on geographic entity databases" *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant