CN114219819A - A method of singulation of oblique photographic model based on orthophoto boundary detection - Google Patents


Info

Publication number
CN114219819A
CN114219819A (application CN202111373225.8A)
Authority
CN
China
Prior art keywords
building
model
boundary
orthophoto
outline
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111373225.8A
Other languages
Chinese (zh)
Inventor
辛佩康
高丙博
吴友
余芳强
张铭
谷志旺
刘寅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Construction No 4 Group Co Ltd
Original Assignee
Shanghai Construction No 4 Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Construction No 4 Group Co Ltd filed Critical Shanghai Construction No 4 Group Co Ltd
Priority to CN202111373225.8A
Publication of CN114219819A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention fully exploits the fact that the orthoimage and the real-scene model share the same geographic coordinate system. By applying the oblique photography model together with a deep learning method, it automatically extracts building outlines through orthoimage boundary detection, then uses the coordinate attributes of the orthoimage to obtain the specific geographic position of each singulated building, thereby extracting singulation information from the oblique photography model. This improves the efficiency of automatic building singulation and also provides data support for later singulation-based management. The invention applies the orthoimage to the singulation of the oblique photography real-scene model, enabling automatic extraction of individual units and solving problems in real-scene model singulation such as low extraction efficiency of ground-feature vector boundaries and deviations in coordinate positioning.

Description

Oblique photography model singulation method based on orthoimage boundary detection
Technical Field
The invention relates to a method for singulating an oblique photography model based on orthoimage boundary detection.
Background
In recent years, with the popularization of civil unmanned aerial vehicles and the rapid development of oblique photography technology, carrying a multi-lens sensor on a single flight platform and photographing simultaneously from the vertical direction and multiple oblique angles has made it possible to faithfully capture the actual condition of buildings and other ground features, and the rapid generation of real-scene three-dimensional models has become an important means of acquiring three-dimensional spatial information. Extracting the three-dimensional spatial information of buildings is of great significance for urban and rural planning and management. However, a real-scene three-dimensional model obtained by oblique photography lacks structured semantic information, so a single building cannot be selected or separated. In practical applications, such a model remains at the level of browsing and geometric measurement: operations such as selection, indexing, attribute attachment, and individualized management cannot be performed on individual ground features. It is therefore necessary to singulate the oblique photography real-scene model.
At present, the common singulation methods are cutting singulation, reconstruction singulation, ID singulation, and dynamic singulation. Dynamic singulation can directly use two-dimensional vector data; its updating and classification costs are low, no jagged edges appear during rendering, the LOD index is unchanged, and it can meet different data application requirements. At the present stage, however, the singulation operation inevitably requires manually delineating building outline boundaries and obtaining geographic position information through software operations, which costs considerable manpower and time; moreover, when buildings are delineated manually, each operator defines the boundary by a different standard, which also creates difficulties for later planning and management.
Therefore, how to complete the singulation operation uniformly and rapidly is a problem to be solved.
Disclosure of Invention
The invention aims to provide a method for singulating an oblique photography model based on orthoimage boundary detection.
In order to solve the above problems, the present invention provides a method for singulating an oblique photography model based on orthoimage boundary detection, comprising:
step S1, acquiring images from multiple angles with an aircraft platform carrying a multi-lens sensor, and obtaining an oblique photography three-dimensional model and an orthoimage free of projection distortion;
step S2, constructing a neural network model, training it, and performing boundary detection on the orthoimage with the trained neural network model to obtain the building orthographic projection contour boundary;
step S3, regularizing the building orthographic projection contour boundary to obtain the regularized contour boundary on the orthoimage, transforming the regularized boundary into real-scene model coordinates to obtain the geographic coordinate value of each corner point of the building outline, and generating a real-scene model building boundary plane vector diagram from those corner coordinates;
and step S4, constructing a building bounding box model based on the real-scene model building boundary plane vector diagram, superimposing it on the oblique photography three-dimensional model to obtain a superimposed three-dimensional model, and rendering the triangular faces in the superimposed model with a specified overlay color, thereby realizing automatic singulation of the oblique photography real-scene model.
Further, in the above method, step S1 of acquiring images from multiple angles with an aircraft platform carrying a multi-lens sensor, and obtaining an oblique photography three-dimensional model and an orthoimage free of projection distortion, includes:
step S11, performing multi-azimuth, multi-angle aerial photography of the target area with an aircraft platform carrying a multi-lens sensor to obtain sequence images with a preset degree of overlap;
step S12, reconstructing the oblique photography three-dimensional model from the sequence images with real-scene modeling software, and exporting the dense matching point cloud of the aerial images;
and step S13, generating an orthoimage free of projection distortion from the sequence images with the real-scene modeling software, the geographic coordinate system of the orthoimage being consistent with that of the oblique photography three-dimensional model.
Further, in the above method, step S2 of constructing and training the neural network model includes:
step S21, setting up a deep learning framework and building a neural network model for detecting building boundaries in images;
step S22, establishing a training set, a validation set, and a test set;
step S23, feeding the training set and validation set into the building boundary detection network, setting its training environment, number of training iterations, training threshold, and training step size, running the training, and saving the model parameters when training finishes to obtain an initial boundary detection model;
step S24, testing the initial boundary detection model on the test set; if the accuracy of the test result is at least 97%, training and testing are complete and the initial model is taken as the trained neural network model; if the accuracy is below 97%, the model parameters are iteratively optimized through dataset expansion, data augmentation, and hyperparameter adjustment until the accuracy reaches 97% or more, yielding the trained neural network model.
Further, in the above method, step S22 of establishing the training and validation sets includes:
step S221, selecting an existing dataset whose building style is similar to that of the study area;
step S222, selecting part of the orthoimages and annotating building outlines on them with an image annotation tool to obtain annotated orthoimages;
and step S223, randomly shuffling and merging the style-matched dataset with the annotated orthoimages to build a model training dataset, and dividing it into training, validation, and test sets in a preset ratio.
Further, in the above method, step S2 of performing boundary detection on the orthoimage with the trained neural network model to obtain the building orthographic projection contour boundary includes:
step S25, determining a crop size from the image size accepted by the trained neural network model, and uniformly cropping the orthoimage at that size to obtain an orthoimage prediction set;
and step S26, running the trained neural network model on the whole orthoimage prediction set to obtain a building block binary image for each image in the set.
Further, in the above method, step S3 of regularizing the building orthographic projection contour boundary to obtain the regularized contour boundary on the orthoimage includes:
step S301, filling holes in the building block binary images of the orthoimage prediction set with an arbitrary-polygon seed filling method to obtain filled building block binary images;
step S302, separating locally connected building blocks in the filled binary images with a watershed algorithm to obtain segmented binary images;
step S303, widening the gaps between building blocks in the segmented binary images with an erosion algorithm to obtain optimized building block binary images;
step S304, extracting building outlines from the optimized binary images with a binary image contour extraction algorithm;
step S305, approximating each extracted outline boundary with a polygon curve fitting method to obtain building outline boundary polygons;
step S306, computing the length and azimuth of each edge of a building outline boundary polygon;
step S307, comparing the edge lengths and selecting the longest edge as the main direction;
step S308, rotating the building outline boundary about its center point until its edges are perpendicular or parallel to the main direction, obtaining the rotated outline boundary;
and step S309, correcting adjacent edges of the rotated outline boundary: where adjacent edges are perpendicular, their intersection point is taken; where they are parallel, depending on a distance threshold between them, the shorter edge is translated onto the longer one or a connecting segment is added, finally producing the regularized building orthographic projection contour boundary on the orthoimage.
Further, in the above method, step S3 of transforming the regularized building orthographic projection contour boundary into real-scene model coordinates to obtain the geographic coordinate value of each corner point of the building outline includes:
step S321, extracting the affine matrix of the orthoimage with the GDAL raster spatial data library, the matrix containing the geographic coordinates (X, Y) of the image's upper-left corner point and the scale α converting pixel coordinates to actual geographic coordinates;
step S322, constructing the transfer function between the pixel coordinates (x, y) of a corner point of any regularized building orthographic projection contour boundary in the orthoimage and its geographic coordinates (X_COOR, Y_COOR):
X_COOR = X + x·α;
Y_COOR = Y + y·α;
and step S323, converting the pixel coordinates of each corner point of each regularized contour boundary into geographic coordinates with this transfer function.
Further, in the above method, step S3 of generating the real-scene model building boundary plane vector diagram from the geographic coordinate value of each corner point of the building outline includes:
step S331, generating the real-scene model building boundary plane vector diagram from the corner coordinates, the vector diagram and the oblique photography three-dimensional model being in the same geographic coordinate system.
Further, in the above method, step S4 of constructing the building bounding box model based on the real-scene model building boundary plane vector diagram includes:
step S401, obtaining the three-dimensional geographic coordinates of the aerial triangulation tie points from the aerial triangulation result;
step S402, judging, by the ray-casting principle, the containment relation between each tie point and each building outline from the corner coordinates of the outline and the three-dimensional coordinates of the tie points, and screening out the tie points inside each building outline;
step S403, comparing the elevations of the tie points inside each building outline to obtain the lowest and highest elevations within the outline;
step S404, subtracting the lowest elevation from the highest within each outline to obtain the bounding box height of the building unit;
and step S405, creating the building's polyhedral bounding box model, taking the vector polygon in the real-scene model building boundary plane vector diagram as the bottom face and the bounding box height as the model height.
Further, in the above method, superimposing the bounding box model on the oblique photography three-dimensional model to obtain a superimposed three-dimensional model, and rendering the triangular faces in the superimposed model with a specified overlay color, thereby realizing automatic singulation of the oblique photography real-scene model, includes:
superimposing the building's bounding box model at the lowest elevation within the building outline of the oblique photography three-dimensional model to obtain a composite model, and rendering the composite model with a specified overlay color, so that the singulated unit is highlighted and dynamic singulation of the oblique photography real-scene model is achieved.
Compared with the prior art, the method fully exploits the fact that the orthoimage and the real-scene model share the same geographic coordinate system. By applying the oblique photography model together with a deep learning method, it automatically extracts building outlines through orthoimage boundary detection, then uses the coordinate attributes of the orthoimage to obtain the specific geographic position of each singulated building, thereby extracting singulation information from the oblique photography model. This improves the efficiency of automatic building singulation and also provides data support for later singulation-based management.
The invention applies the orthoimage to the singulation of the oblique photography real-scene model, enabling automatic extraction of individual units and solving problems in real-scene model singulation such as low extraction efficiency of ground-feature vector boundaries and deviations in coordinate positioning.
Drawings
FIG. 1 is a flowchart of the oblique photography model singulation method based on orthoimage boundary detection;
FIG. 2 is a building block binary image of an orthoimage according to an embodiment of the present invention;
FIG. 3 shows building outline boundaries on an orthoimage according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features, and advantages of the present invention more comprehensible, embodiments are described in further detail below with reference to the accompanying figures.
As shown in FIG. 1, the present invention provides a method for singulating an oblique photography model based on orthoimage boundary detection, comprising:
step S1, acquiring images from multiple angles with an aircraft platform carrying a multi-lens sensor, and obtaining an oblique photography three-dimensional model and an orthoimage free of projection distortion;
step S2, constructing a neural network model, training it, and performing boundary detection on the orthoimage with the trained neural network model to obtain the building orthographic projection contour boundary;
step S3, regularizing the building orthographic projection contour boundary to obtain the regularized contour boundary on the orthoimage, transforming the regularized boundary into real-scene model coordinates to obtain the geographic coordinate value of each corner point of the building outline, and generating a real-scene model building boundary plane vector diagram from those corner coordinates;
and step S4, constructing a building bounding box model based on the real-scene model building boundary plane vector diagram, superimposing it on the oblique photography three-dimensional model to obtain a superimposed three-dimensional model, and rendering the triangular faces in the superimposed model with a specified overlay color, thereby realizing automatic singulation of the oblique photography real-scene model.
The orthoimage offers rich image information, is intuitive and true to the scene, and is free of projection distortion; it has good interpretability and measurability, and the geographic position of a building can be determined through the coordinate relationship between the orthoimage and the real-scene model.
The invention applies the orthoimage to the singulation of the oblique photography real-scene model, enabling automatic extraction of individual units and solving problems in real-scene model singulation such as low extraction efficiency of ground-feature vector boundaries and deviations in coordinate positioning.
The invention fully exploits the fact that the orthoimage and the real-scene model share the same geographic coordinate system. By applying the oblique photography model together with a deep learning method, it automatically extracts building outlines through orthoimage boundary detection, then uses the coordinate attributes of the orthoimage to obtain the specific geographic position of each singulated building, thereby extracting singulation information from the oblique photography model. This improves the efficiency of automatic building singulation and also provides data support for later singulation-based management.
In an embodiment of the oblique photography model singulation method based on orthoimage boundary detection, step S1 of acquiring images from multiple angles with an aircraft platform carrying a multi-lens sensor, and obtaining an oblique photography three-dimensional model and an orthoimage free of projection distortion, includes:
step S11, performing multi-azimuth, multi-angle aerial photography of the target area with an aircraft platform carrying a multi-lens sensor to obtain sequence images with a preset degree of overlap;
step S12, reconstructing the oblique photography three-dimensional model from the sequence images with real-scene modeling software, and exporting the dense matching point cloud of the aerial images;
and step S13, generating an orthoimage free of projection distortion from the sequence images with the real-scene modeling software, the geographic coordinate system of the orthoimage being consistent with that of the oblique photography three-dimensional model.
In an embodiment of the oblique photography model singulation method based on orthoimage boundary detection, step S2 of constructing and training the neural network model includes:
step S21, setting up a deep learning framework and building a neural network model for detecting building boundaries in images;
step S22, establishing a training set, a validation set, and a test set;
step S23, feeding the training set and validation set into the building boundary detection network, setting its training environment, number of training iterations, training threshold, and training step size, running the training, and saving the model parameters when training finishes to obtain an initial boundary detection model;
step S24, testing the initial boundary detection model on the test set; if the accuracy of the test result is at least 97%, training and testing are complete and the initial model is taken as the trained neural network model; if the accuracy is below 97%, the model parameters are iteratively optimized through dataset expansion, data augmentation, and hyperparameter adjustment until the accuracy reaches 97% or more, yielding the trained neural network model.
In an embodiment of the oblique photography model singulation method based on orthoimage boundary detection, step S22 of establishing the training and validation sets includes:
step S221, selecting an existing dataset whose building style is similar to that of the study area;
step S222, selecting part of the orthoimages and annotating building outlines on them with an image annotation tool to obtain annotated orthoimages;
and step S223, randomly shuffling and merging the style-matched dataset with the annotated orthoimages to build a model training dataset, and dividing it into training, validation, and test sets in a preset ratio.
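The shuffle-and-split of step S223 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the file names, the 70/20/10 ratio, and the fixed seed are all assumed values.

```python
import random

def split_dataset(samples, train_ratio=0.7, val_ratio=0.2, seed=42):
    """Randomly shuffle labeled samples and split them into training,
    validation, and test subsets (ratios are illustrative assumptions)."""
    items = list(samples)
    random.Random(seed).shuffle(items)  # deterministic shuffle for the demo
    n = len(items)
    n_train = int(n * train_ratio)
    n_val = int(n * val_ratio)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# Hypothetical (image, mask) annotation pairs standing in for the
# style-matched dataset merged with the annotated orthoimages.
pairs = [(f"tile_{i}.tif", f"tile_{i}_mask.png") for i in range(100)]
train, val, test = split_dataset(pairs)
print(len(train), len(val), len(test))  # 70 20 10
```

Fixing the shuffle seed keeps the split reproducible across training runs, which matters when the 97% accuracy check of step S24 triggers dataset expansion and retraining.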
In an embodiment of the oblique photography model singulation method based on orthoimage boundary detection, step S2 of performing boundary detection on the orthoimage with the trained neural network model to obtain the building orthographic projection contour boundary includes:
step S25, determining a crop size from the image size accepted by the trained neural network model, and uniformly cropping the orthoimage at that size to obtain an orthoimage prediction set;
and step S26, running the trained neural network model on the whole orthoimage prediction set to obtain a building block binary image for each image in the set.
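The uniform cropping of step S25 can be sketched with NumPy. The 256-pixel tile size is an assumption standing in for whatever input size the trained network accepts; edge tiles are padded so every crop has the full size.

```python
import numpy as np

def tile_image(image, tile=256):
    """Cut an orthoimage array (H, W, C) into uniform tile x tile crops,
    padding the right/bottom edges so every crop has the full size.
    Returns a list of ((row, col) origin, crop) pairs so predictions
    can be stitched back into place afterwards."""
    h, w = image.shape[:2]
    pad_h = (-h) % tile
    pad_w = (-w) % tile
    padded = np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)), mode="edge")
    tiles = []
    for y in range(0, padded.shape[0], tile):
        for x in range(0, padded.shape[1], tile):
            tiles.append(((y, x), padded[y:y + tile, x:x + tile]))
    return tiles

# A dummy 600 x 900 orthoimage yields ceil(600/256) * ceil(900/256) tiles.
img = np.zeros((600, 900, 3), dtype=np.uint8)
tiles = tile_image(img, tile=256)
print(len(tiles))  # 12
```

Keeping the tile origins alongside the crops is what lets the per-tile binary predictions of step S26 be mosaicked back into one building block binary image.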
In an embodiment of the oblique photography model singulation method based on orthoimage boundary detection, step S3 of regularizing the building orthographic projection contour boundary to obtain the regularized contour boundary on the orthoimage includes:
step S301, filling holes in the building block binary images of the orthoimage prediction set with an arbitrary-polygon seed filling method to obtain filled building block binary images;
step S302, separating locally connected building blocks in the filled binary images with a watershed algorithm to obtain segmented binary images;
step S303, widening the gaps between building blocks in the segmented binary images with an erosion algorithm to obtain optimized building block binary images;
step S304, extracting building outlines from the optimized binary images with a binary image contour extraction algorithm;
step S305, approximating each extracted outline boundary with a polygon curve fitting method to obtain building outline boundary polygons;
step S306, computing the length and azimuth of each edge of a building outline boundary polygon;
step S307, comparing the edge lengths and selecting the longest edge as the main direction;
step S308, rotating the building outline boundary about its center point until its edges are perpendicular or parallel to the main direction, obtaining the rotated outline boundary;
and step S309, correcting adjacent edges of the rotated outline boundary: where adjacent edges are perpendicular, their intersection point is taken; where they are parallel, depending on a distance threshold between them, the shorter edge is translated onto the longer one or a connecting segment is added, finally producing the regularized building orthographic projection contour boundary on the orthoimage.
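The main-direction rotation of steps S306 to S308 can be sketched in plain NumPy. This covers only the rotation stage: the hole filling, watershed, erosion, and adjacent-edge correction of the other steps are omitted, and the tilted-rectangle polygon is synthetic test data.

```python
import numpy as np

def rotate_to_main_direction(polygon):
    """Rotate a building outline polygon (N x 2 array of corner points)
    about its centroid so that its longest edge becomes axis-aligned,
    i.e. every edge of a rectilinear footprint ends up perpendicular or
    parallel to the main direction."""
    poly = np.asarray(polygon, dtype=float)
    edges = np.roll(poly, -1, axis=0) - poly          # S306: edge vectors
    lengths = np.hypot(edges[:, 0], edges[:, 1])      # S306: edge lengths
    main = edges[np.argmax(lengths)]                  # S307: longest edge
    angle = np.arctan2(main[1], main[0])              # azimuth of main edge
    c, s = np.cos(-angle), np.sin(-angle)             # S308: rotate by -angle
    rot = np.array([[c, -s], [s, c]])
    center = poly.mean(axis=0)
    return (poly - center) @ rot.T + center

# A 10-degree-tilted rectangle snaps back to the axes after rotation.
theta = np.deg2rad(10)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rect = np.array([[0, 0], [10, 0], [10, 4], [0, 4]], dtype=float) @ R.T
out = rotate_to_main_direction(rect)
```

After this rotation, the adjacent-edge correction of step S309 reduces to snapping nearly horizontal and nearly vertical segments, which is why the main direction is chosen first.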
In an embodiment of the oblique photography model singulation method based on orthoimage boundary detection, step S3 of transforming the regularized building orthographic projection contour boundary into real-scene model coordinates to obtain the geographic coordinate value of each corner point of the building outline includes:
step S321, extracting the affine matrix of the orthoimage with the GDAL raster spatial data library, the matrix containing the geographic coordinates (X, Y) of the image's upper-left corner point and the signed scale α converting pixel coordinates to actual geographic coordinates;
step S322, constructing the transfer function between the pixel coordinates (x, y) of a corner point of any regularized building orthographic projection contour boundary in the orthoimage and its geographic coordinates (X_COOR, Y_COOR):
X_COOR = X + x·α;
Y_COOR = Y + y·α;
and step S323, converting the pixel coordinates of each corner point of each regularized contour boundary into geographic coordinates with this transfer function.
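The transfer function of step S322 can be sketched as below. Note one assumption: GDAL's geotransform actually carries separate signed x and y scales, so the single α of the patent is split into `alpha_x` and `alpha_y` here, and the coordinate values are made-up sample numbers, not from the patent.

```python
def pixel_to_geo(x, y, X0, Y0, alpha_x, alpha_y):
    """Convert a corner's pixel coordinates (x, y) to geographic
    coordinates using the orthoimage's affine parameters: (X0, Y0) is
    the geographic position of the top-left pixel, and alpha_x/alpha_y
    are the signed pixel-to-ground scales (alpha_y is typically negative
    because image rows grow downward while northing grows upward)."""
    return X0 + x * alpha_x, Y0 + y * alpha_y

# Illustrative values: top-left corner at (500000, 3400000) with 0.1 m pixels.
X, Y = pixel_to_geo(120, 250, 500000.0, 3400000.0, 0.1, -0.1)
print(X, Y)  # 500012.0 3399975.0
```

With the real library, the six geotransform coefficients would come from something like `gdal.Open(path).GetGeoTransform()`, whose elements 0 and 3 give (X0, Y0) and elements 1 and 5 give the two scales.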
In an embodiment of the oblique photography model singulation method based on orthoimage boundary detection, step S3 of generating the real-scene model building boundary plane vector diagram from the geographic coordinate value of each corner point of the building outline includes:
step S331, generating the real-scene model building boundary plane vector diagram from the corner coordinates, the vector diagram and the oblique photography three-dimensional model being in the same geographic coordinate system.
In an embodiment of the oblique photography model unitization method based on the orthoimage boundary detection of the present invention, step S4, the building bounding box model is established based on the real-world model building boundary plane vector diagram, which includes:
step S401, obtaining three-dimensional geographic coordinate information of an aerial triangular connection point according to the aerial triangulation result;
step S402, judging the inclusion relationship between the aerial triangle connection points and the building outlines based on the ray-casting principle, according to the geographic coordinate value of each corner point of the building outline and the three-dimensional geographic coordinates of the aerial triangle connection points, and screening out the connection points located inside each building outline;
step S403, comparing elevations of the aerial triangular connecting points in each building outline to obtain the lowest elevation and the highest elevation in each building outline;
step S404, subtracting the lowest elevation from the highest elevation in each building outline to obtain the height of the bounding box of the building unit;
step S405, using the vector geometric polygon in the real scene model building boundary plane vector diagram as the lower bottom surface of the bounding box model of the building single body, and using the height of the bounding box of the building single body as the height of the bounding box model, and creating the building single body bounding box polyhedral model.
In an embodiment of the oblique photography model unitization method based on orthoimage boundary detection of the present invention, in step S4, the bounding box model is superimposed on the oblique photography three-dimensional model to obtain a superimposed three-dimensional model, and a triangular surface in the superimposed three-dimensional model is rendered and a specified color is superimposed, so as to realize automatic unitization of the oblique photography real scene model, including:
The building unit bounding box model is superimposed at the lowest elevation position within the building outline of the oblique photography three-dimensional model to obtain a composite model, and the composite model is rendered with a specified color, realizing highlighted display of the singulated building and thus dynamic singulation of the oblique photography real-scene model.
Specifically, the oblique photography model unitization method based on orthoimage boundary detection mainly comprises the following steps:
1. Three-dimensional live-action model (oblique photography three-dimensional model) and orthoimage acquisition
1.1 aerial data acquisition
Firstly, an aircraft platform carrying a multi-lens sensor is utilized to carry out multi-azimuth and multi-angle aerial photography on a target area from the air to obtain a sequence image with a preset overlapping degree.
For example, locations such as Ji'an in Jiangxi, Foshan in Guangdong, and Jinjiang in Fujian, China may be selected as aerial photography target areas. A professional drone, the DJI Matrice 300 RTK, is used, and its flight route is planned so that the target area is photographed completely; the drone then carries a multi-lens sensor and photographs the target area from multiple angles and directions while cruising, obtaining sequence images with a certain overlap.
1.2 live-action model reconstruction
Then, the large number of sequence images obtained in step 1.1 is reconstructed with live-action modeling software to generate the oblique photography three-dimensional model, and the aerial triangulation results (densely matched image point cloud) are exported.
For example, the sequence images obtained in step 1.1 may be imported into ContextCapture, where feature point extraction, multi-view image matching, bundle block adjustment, and related processes produce the oblique photography three-dimensional models of locations such as Ji'an in Jiangxi, Foshan in Guangdong, and Jinjiang in Fujian, together with the exported aerial triangulation results (densely matched image point clouds).
1.3 ortho image acquisition
Next, an orthoimage free of projection distortion is generated with the live-action modeling software; the geographic coordinate system of the orthoimage is consistent with that of the oblique photography three-dimensional model.
For example, a true orthoimage without projection distortion may be generated by the live-action modeling software; its geographic coordinate system is consistent with that of the oblique photography three-dimensional model, here the WGS-84 coordinate system.
2. Boundary detection deep learning model training
2.1 neural network model construction
Firstly, a deep learning framework is built, and a neural network model for detecting the building boundary image is built.
For example, a deep learning framework for building contour extraction can be constructed first, and a full convolution neural network U-net model is adopted for feature extraction.
2.2 building training data sets
2.2.1 Collect publicly available image-based building outline detection data sets, and select data sets similar to the building style of the study area according to their data attributes;
for example, publicly available image-based building outline detection data sets may be collected, and data sets similar to the building style of study areas such as Ji'an in Jiangxi, Foshan in Guangdong, and Jinjiang in Fujian are selected according to their data attributes;
2.2.2 selecting a part of the ortho images, and carrying out building outline marking on the selected part of the ortho images by using an image marking tool to obtain marked ortho images;
for example, a part of the orthoimage can be selected, and the building outline can be labeled on the orthoimage by using an image labeling tool labelme;
2.2.3 Randomly shuffle the data from step 2.2.1 and step 2.2.2, merge them into a model training data set, and divide the model training data set into a training set, a verification set, and a test set in an appropriate ratio.
For example, the data from step 2.2.1 and step 2.2.2 may be randomly shuffled and merged into a model training data set totaling 2,500 samples, which is divided into a training set, verification set, and test set at a ratio of 8:1:1.
2.3 model training
Input the training set and verification set obtained in step 2.2.3 into the building boundary image detection neural network model established in step 2.1, set its training environment, number of training iterations, training threshold, and training step size, execute the model training, and retain the model parameters after training to obtain an initial boundary detection training model.
For example, the 2,000 training samples and 250 verification samples obtained in step 2.2.3 may be input into the full convolutional neural network U-net model constructed in step 2.1, with a training environment of Python 3.6, PyTorch 1.4, and CUDA 10.0, a learning rate Lr of 0.0001, a batch size of 4, and 100 training epochs; U-net training is then executed, and the model parameters are retained after training to obtain the initial boundary detection training model.
2.4 model testing
Test the initial boundary detection training model obtained in step 2.3 with the test set from step 2.2.3. If the accuracy of the detection results is 97% or higher, the practical application requirement is met, model training and testing are complete, and the trained neural network model is obtained. If the accuracy is below 97%, the requirement is not met, and iterative optimization is performed through data set expansion, data augmentation, hyperparameter adjustment, and similar means until the accuracy reaches 97% or more, yielding the trained neural network model.
For example, the initial boundary detection training model obtained in step 2.3 may be tested with the 250 test samples from step 2.2.3; if the accuracy of the detection results is 97% or higher, model training and testing are complete, and otherwise iterative optimization through data set expansion, data augmentation, network parameter adjustment, and similar means continues until the requirement is met.
3. Ortho image building contour extraction
3.1 orthographic image segmentation
Uniformly segment the orthoimage obtained in step 1.3 (the cutting size is determined by the input image size of the neural network model trained in step 2.4) to obtain an orthoimage prediction set.
For example, the true orthoimage obtained in step 1.3 may be uniformly segmented (with a cutting size of 1024 × 1024) to obtain the true orthoimage prediction set.
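The uniform segmentation above can be sketched as follows. This is a minimal illustration assuming the orthoimage is already loaded as a NumPy array; the zero-padding of edge tiles is a hypothetical choice (cropping or overlapping windows would work equally well), and `tile_image` is not a function from the patent.

```python
import numpy as np

def tile_image(img: np.ndarray, tile: int = 1024):
    """Split an orthoimage array (H, W[, C]) into uniform tile x tile patches.

    Edge patches are zero-padded to full size (an illustrative choice).
    Returns a list of ((row, col) origin, patch) pairs.
    """
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = img[y:y + tile, x:x + tile]
            ph, pw = patch.shape[:2]
            if ph < tile or pw < tile:
                pad = np.zeros((tile, tile) + img.shape[2:], dtype=img.dtype)
                pad[:ph, :pw] = patch
                patch = pad
            tiles.append(((y, x), patch))
    return tiles

# A 2500 x 1800 RGB image yields ceil(2500/1024) * ceil(1800/1024) = 3 * 2 = 6 tiles.
demo = np.zeros((2500, 1800, 3), dtype=np.uint8)
print(len(tile_image(demo)))  # 6
```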
3.2 building Block detection
Detect all the orthoimage prediction sets obtained in step 3.1 with the trained neural network model from step 2.4 to obtain building block binary maps of all the orthoimage prediction sets.
For example, all the true ortho image prediction sets in step 3.1 may be detected by using the full convolutional neural network U-net model trained in step 2.4, so as to obtain the building block binary maps of all the ortho image maps. The actual effect of the building block binary map is shown in figure 2 of the accompanying drawings.
3.3 building outline regularization
3.3.1 building Block binary map optimization
(1) Firstly, hole filling is carried out on the building block binary image obtained in the step 3.2 based on an arbitrary polygon seed filling method, so as to obtain a filled building block binary image;
for example, a fillPoly hole filling function may be first constructed based on an arbitrary polygon seed filling method, and hole filling may be performed on the building block binary map obtained in step 3.2;
(2) then, by utilizing a watershed algorithm, building blocks with local connection are segmented in the filled building block binary image to obtain a segmented binary image;
for example, a watershed algorithm can then be used to segment building blocks where local connections exist;
(3) An erosion algorithm is then used to enlarge the gaps between the building blocks in the segmented binary map, yielding the optimized building block binary map.
For example, erosion algorithms may continue to be used to enlarge gaps between building blocks.
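As an illustration of the erosion step, the sketch below implements binary erosion with a 3×3 square structuring element in plain NumPy; an actual pipeline would typically call OpenCV's `cv2.erode` instead, and the test mask here is illustrative only.

```python
import numpy as np

def erode(mask: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Binary erosion with a 3x3 square structuring element (pure NumPy).

    Shrinking each foreground region by one pixel per iteration widens the
    gaps between adjacent building blocks.
    """
    out = mask.astype(bool)
    for _ in range(iterations):
        padded = np.pad(out, 1, mode="constant", constant_values=False)
        # A pixel survives only if its whole 3x3 neighbourhood is foreground.
        out = np.ones_like(out)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out &= padded[1 + dy:1 + dy + mask.shape[0],
                              1 + dx:1 + dx + mask.shape[1]]
    return out

# Two blocks separated by a 1-pixel gap: after erosion the gap is 3 pixels wide.
m = np.ones((6, 9), dtype=bool)
m[:, 4] = False
eroded = erode(m)
print(eroded[3, 3], eroded[3, 5])  # False False
```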
3.3.2 building Profile extraction
(1) Extracting each building contour from the building block binary image optimized in the step 3.3.1 based on a binary image contour extraction algorithm;
for example, a findContours contour extraction function can be constructed based on the principle of a binary image contour extraction algorithm, an area threshold value in the contour extraction function is set to be 500, an aspect ratio threshold value is set to be (0.1, 10), and the building block binary image optimized in the step 3.3.1 is extracted to obtain each building contour;
(2) Approximate fitting is then applied to the extracted building contour boundaries with a polygon fitting curve method to obtain building contour boundary polygons.
For example, an approxplolydp contour fitting function can be constructed by using a polygon fitting curve method to perform approximate fitting processing on the contour boundary.
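The polygon fitting step rests on the Ramer-Douglas-Peucker principle that underlies OpenCV's `approxPolyDP`. Below is a minimal pure-Python sketch of that principle; the function name, tolerance, and sample points are illustrative, not from the patent.

```python
import math

def rdp(points, eps):
    """Ramer-Douglas-Peucker simplification of an open polyline.

    Keeps the endpoints; recursively keeps the interior point farthest
    from the chord whenever its distance exceeds the tolerance eps.
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]

    def dist(p):
        # Perpendicular distance from p to the chord (x1, y1)-(x2, y2).
        num = abs((y2 - y1) * p[0] - (x2 - x1) * p[1] + x2 * y1 - y2 * x1)
        den = math.hypot(x2 - x1, y2 - y1)
        return num / den if den else math.hypot(p[0] - x1, p[1] - y1)

    idx = max(range(1, len(points) - 1), key=lambda i: dist(points[i]))
    if dist(points[idx]) > eps:
        left = rdp(points[:idx + 1], eps)
        right = rdp(points[idx:], eps)
        return left[:-1] + right
    return [points[0], points[-1]]

# A jagged, nearly straight edge collapses to its two endpoints.
edge = [(0, 0), (1, 0.1), (2, -0.1), (3, 0.05), (4, 0)]
print(rdp(edge, eps=0.5))  # [(0, 0), (4, 0)]
```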
3.3.3 building outline boundary regularization
(1) Acquiring the length and azimuth angle of each edge in the building outline boundary polygon obtained in the step 3.3.2;
for example, the length and azimuth of each side of the polygon of the boundary of the building outline obtained in step 3.3.2 can be obtained;
(2) comparing the length of each side of the building outline boundary polygon, and selecting the longest side as a main direction;
for example, the lengths of each side of the polygon of the building outline boundary may be compared, and the longest side may be selected as the principal direction;
(3) Rotate the building contour boundary about its center point to a position perpendicular or parallel to the main direction;
for example, the building contour boundary may be rotated about its center point to a position perpendicular or parallel to the main direction;
(4) correct adjacent edges of the building contour boundary: take the intersection point when adjacent edges are perpendicular; when adjacent edges are parallel, translate the short edge to the long edge or add a line segment between the adjacent edges based on a distance threshold, finally generating the regularized building orthographic projection contour boundary of the orthoimage.
For example, adjacent edges of the building contour boundary may be corrected in this way, with the result shown in fig. 3 of the accompanying drawings.
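The main-direction rotation of steps (1)-(3) can be sketched as follows. This is a minimal illustration (the function name and test polygon are hypothetical, and the adjacent-edge correction of step (4) is not included): the longest edge defines the main direction, and the polygon is rotated about its centroid so that this edge becomes axis-parallel.

```python
import math

def align_to_main_direction(poly):
    """Rotate a contour polygon about its centroid so that its longest
    edge (the 'main direction') becomes parallel to the x-axis."""
    n = len(poly)
    # Longest edge determines the main direction (ties broken by index).
    edges = [(math.hypot(poly[(i + 1) % n][0] - poly[i][0],
                         poly[(i + 1) % n][1] - poly[i][1]), i)
             for i in range(n)]
    _, i = max(edges)
    dx = poly[(i + 1) % n][0] - poly[i][0]
    dy = poly[(i + 1) % n][1] - poly[i][1]
    theta = -math.atan2(dy, dx)          # rotate main direction onto the x-axis
    cx = sum(p[0] for p in poly) / n
    cy = sum(p[1] for p in poly) / n
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return [((p[0] - cx) * cos_t - (p[1] - cy) * sin_t + cx,
             (p[0] - cx) * sin_t + (p[1] - cy) * cos_t + cy)
            for p in poly]

# A tilted rectangle (long edges with slope 3/4) comes back axis-aligned.
tilted = [(0, 0), (4, 3), (2.2, 5.4), (-1.8, 2.4)]
aligned = align_to_main_direction(tilted)
```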
4. building outline geographic coordinate transformation
4.1 Extract the affine matrix of the orthoimage (tiff file) based on the GDAL raster spatial data library; the affine matrix contains the geographic coordinates (X, Y) of the upper-left corner point of the image and the scaling ratio α between pixel coordinates and actual geographic coordinates (the ratio may be positive or negative);
4.2 Construct the conversion function between the corner pixel coordinates (x, y) of the regularized building orthographic projection contour boundary of any orthoimage and the corresponding geographic coordinates (X_COOR, Y_COOR):
XCOOR=X+x·α;
YCOOR=Y+y·α;
4.3 According to the formula in step 4.2, convert the pixel coordinates of each corner point of the regularized building orthographic projection contour boundary of each orthoimage into the corresponding geographic coordinates, obtaining the geographic coordinate value of each corner point of the building outline.
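The conversion in steps 4.1-4.3 corresponds to the standard GDAL six-element geotransform. A sketch in plain Python (the geotransform values below are hypothetical; in practice they come from `gdal.Open(path).GetGeoTransform()`):

```python
def pixel_to_geo(geotransform, x, y):
    """Map pixel coordinates (x, y) to geographic coordinates using a
    GDAL-style 6-element geotransform GT. For a north-up image,
    GT[2] = GT[4] = 0 and the formula reduces to the X + x*alpha form
    used above (alpha = GT[1] for X; alpha = GT[5], usually negative,
    for Y -- hence the ratio 'may be positive or negative')."""
    gt = geotransform
    x_coor = gt[0] + x * gt[1] + y * gt[2]
    y_coor = gt[3] + x * gt[4] + y * gt[5]
    return x_coor, y_coor

# Hypothetical geotransform: upper-left corner (500000, 3400000), 0.05 m pixels.
gt = (500000.0, 0.05, 0.0, 3400000.0, 0.0, -0.05)
print(pixel_to_geo(gt, 1024, 2048))  # ≈ (500051.2, 3399897.6)
```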
5. Real scene model building outline vector diagram generation
According to the geographic coordinate value of each corner point of the building outline obtained in step 4, generate the real-scene model building boundary plane vector diagram (a two-dimensional plane vector diagram of the building contours); this vector diagram and the oblique photography three-dimensional model obtained in step 1.2 are in the same geographic coordinate system.
6. Building single bounding box for creating live-action model
6.1 building cell bounding Box height determination
6.1.1 Obtain the three-dimensional geographic coordinate information of the aerial triangulation connection points (tie points) from the aerial triangulation results of step 1.2;
6.1.2 according to the geographical coordinate value of each corner point of the building outline obtained in the step 4 and the three-dimensional geographical coordinate of the aerial triangle connecting point obtained in the step 1.2, judging the inclusion relationship between the aerial triangle connecting point and the building outline based on a ray method principle, and screening out the aerial triangle connecting point positioned in each building outline;
6.1.3 comparing the elevations of the aerial triangular connecting points in each building outline to obtain the lowest elevation and the highest elevation in each building outline;
6.1.4 then subtracting the lowest elevation from the highest elevation in each building outline to obtain the height of the bounding box of the building unit;
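Steps 6.1.2-6.1.4 can be sketched with a standard even-odd ray-casting containment test followed by a min/max pass over the screened elevations. This is a minimal illustration; the function names, sample outline, and tie-point values are not from the patent.

```python
def point_in_polygon(pt, poly):
    """Even-odd ray casting: count crossings of a horizontal ray cast
    rightward from pt against each polygon edge."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):          # edge straddles the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def bounding_box_height(outline, tie_points):
    """Screen tie points (x, y, z) falling inside the outline and return
    (lowest, highest, height), mirroring steps 6.1.2-6.1.4."""
    zs = [z for x, y, z in tie_points if point_in_polygon((x, y), outline)]
    if not zs:
        return None
    return min(zs), max(zs), max(zs) - min(zs)

outline = [(0, 0), (10, 0), (10, 10), (0, 10)]
pts = [(5, 5, 3.2), (2, 8, 21.7), (15, 5, 99.0)]  # last point lies outside
print(bounding_box_height(outline, pts))  # (3.2, 21.7, 18.5)
```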
6.2 creating building monomer enclosures
Taking the vector geometric polygon in the real-scene model building boundary plane vector diagram obtained in the step 5 as the lower bottom surface of the bounding box model of the building single body, and taking the height of the bounding box of the building single body obtained in the step 6.1.4 as the height of the bounding box model, and creating a building single body bounding box polyhedral model;
For example, according to the two-dimensional plane vector diagram of the real-scene building contours obtained in step 5, the vector geometric polygon in the diagram is taken as the lower base of the bounding box model; the lower-base polygon is extruded upward by the building unit bounding box height determined in step 6.1, and the corresponding vertices of the base polygon and the extruded polygon are connected in sequence to form the side faces of the bounding box model, producing a building unit bounding box polyhedron that is based on the two-dimensional vector polygon and extruded to the building height.
7. Dynamic singulation display
Superimpose the building unit bounding box model obtained in step 6.2 at the lowest elevation position within the building outline of the oblique photography three-dimensional model obtained in step 1.2 to obtain a composite model, and render the composite model with a specified color, realizing highlighted display of the singulated building and thus dynamic singulation of the oblique photography real-scene model.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1.一种基于正射影像边界检测的倾斜摄影模型单体化方法,其特征在于,包括:1. an oblique photography model singulation method based on orthophoto boundary detection, is characterized in that, comprises: 步骤S1,通过搭载多镜头传感器的飞行器平台,从多角度采集影像,获取倾斜摄影三维模型和无投影畸变的正射影像;In step S1, an aircraft platform equipped with a multi-lens sensor is used to collect images from multiple angles to obtain a three-dimensional model of oblique photography and an orthophoto without projection distortion; 步骤S2,构建神经网络模型并进行神经网络模型的训练,利用训练好的神经网络模型对正射影像进行边界检测,得到建筑正射投影轮廓边界;Step S2, constructing a neural network model and training the neural network model, and using the trained neural network model to perform boundary detection on the orthophoto image to obtain the building orthoprojection outline boundary; 步骤S3,将所述建筑正射投影轮廓边界进行规则化,以得到正射影像规则化后的建筑正射投影轮廓边界,对正射影像规则化后的建筑正射投影轮廓边界进行实景模型坐标转换,以得到建筑轮廓每一角点的地理坐标值,基于建筑轮廓每一角点的地理坐标值,生成实景模型建筑边界平面矢量图;Step S3, regularize the building orthographic projection outline boundary to obtain the building orthographic projection outline boundary after the orthophoto regularization, and carry out the real scene model coordinates on the building orthographic projection outline boundary after the orthophoto regularization. Conversion to obtain the geographic coordinate value of each corner point of the building outline, and based on the geographic coordinate value of each corner point of the building outline, generate a realistic model building boundary plane vector map; 步骤S4,基于实景模型建筑边界平面矢量图建立建筑包围盒模型,将所述包围盒模型叠加至所述倾斜摄影三维模型上,以得到叠加三维模型,渲染所述叠加三维模型中的三角面片并叠加指定颜色,从而实现倾斜摄影实景模型的自动单体化。Step S4, building a building bounding box model based on the real-life model building boundary plane vector illustration, superimposing the bounding box model on the oblique photography 3D model to obtain a superimposed 3D model, and rendering the triangular patches in the superimposed 3D model And superimpose the specified color, so as to realize the automatic singulation of the oblique photography reality model. 
2.如权利要求1所述的基于正射影像边界检测的倾斜摄影模型单体化方法,其特征在于,步骤S1,通过搭载多镜头传感器的飞行器平台,从多角度采集影像,获取倾斜摄影三维模型和无投影畸变的正射影像,包括:2. The method for singulating an oblique photography model based on orthophoto boundary detection as claimed in claim 1, wherein in step S1, an aircraft platform equipped with a multi-lens sensor is used to collect images from multiple angles to obtain a three-dimensional oblique photography. Models and orthophotos without projection distortion, including: 步骤S11,利用搭载多镜头传感器的飞行器平台,从空中对目标区域进行多方位和多角度航拍获取具有预设重叠度的序列影像;Step S11, using an aircraft platform equipped with a multi-lens sensor to perform multi-azimuth and multi-angle aerial photography of the target area from the air to obtain a sequence image with a preset degree of overlap; 步骤S12,利用实景建模软件,并根据序列影像,重建生成倾斜摄影三维模型,一并导出空中影像密集匹配点云;Step S12, using the real scene modeling software and according to the sequence images, reconstructing and generating the oblique photography 3D model, and exporting the densely matched point cloud of the aerial images together; 步骤S13,利用实景建模软件,并根据序列影像,生成无投影畸变的正射影像,其中,所述正射影像地理坐标系与倾斜摄影三维模型的地理坐标系保持一致。Step S13 , using the real scene modeling software and according to the sequence of images to generate an orthophoto without projection distortion, wherein the geographic coordinate system of the orthophoto is consistent with the geographic coordinate system of the oblique photography 3D model. 3.如权利要求1所述的基于正射影像边界检测的倾斜摄影模型单体化方法,其特征在于,步骤S2中,构建神经网络模型并进行神经网络模型的训练,包括:3. 
The oblique photography model singulation method based on orthophoto boundary detection as claimed in claim 1, characterized in that, in step S2, constructing a neural network model and carrying out the training of the neural network model, comprising: 步骤S21,搭建深度学习框架,构建建筑边界图像检测的神经网络模型;Step S21, building a deep learning framework, and building a neural network model for building boundary image detection; 步骤S22,建立训练集、验证集和测试集;Step S22, establish a training set, a verification set and a test set; 步骤S23,将训练集与验证集输入所述建筑边界图像检测的神经网络模型,设置所述构建建筑边界图像检测的神经网络模型的训练环境、训练次数、训练阈值和训练步距,并执行模型训练,训练完成后保留模型参数,以得到初始边界检测训练模型;Step S23, input the neural network model of described building boundary image detection with training set and verification set, set the training environment, training times, training threshold and training step distance of the described building boundary image detection neural network model, and execute the model Training, after the training is completed, the model parameters are retained to obtain the initial boundary detection training model; 步骤S24,利用所述测试集对初始边界检测训练模型进行测试,若测试结果的正确率大于等于97%,则完成模型训练和测试,将初始边界检测训练模型作为训练好的神经网络模型;若检测结果的正确率小于97%,则通过数据集拓展、数据增强、调整超参数的方式对初始边界检测训练模型的参数进行迭代优化,直至检测结果的正确率大于等于97%,得到训练好的神经网络模型。Step S24, using the test set to test the initial boundary detection training model, if the correct rate of the test result is greater than or equal to 97%, then complete the model training and testing, and use the initial boundary detection training model as the trained neural network model; If the correct rate of the detection result is less than 97%, the parameters of the initial boundary detection training model are iteratively optimized by means of data set expansion, data enhancement, and hyperparameter adjustment until the correct rate of the detection result is greater than or equal to 97%, and the trained model is obtained. Neural network model. 4.如权利要求3所述的基于正射影像边界检测的倾斜摄影模型单体化方法,其特征在于,步骤S22,建立训练集与验证集,包括:4. 
The oblique photography model singulation method based on orthophoto boundary detection as claimed in claim 3, wherein step S22, establishing a training set and a verification set, comprising: 步骤S221,选取与研究区域建筑风格类似的数据集;Step S221, select a data set similar to the architectural style of the research area; 步骤S222,选取所述正射影像中的部分正射影像,并利用图像标注工具对选取的部分正射影像进行建筑轮廓标注,得到标注后的正射影像;Step S222, select a part of the orthophoto in the orthophoto, and use an image labeling tool to mark the building outline on the selected part of the orthophoto, to obtain the labeled orthophoto; 步骤S223,将所述与研究区域建筑风格类似的数据集和所述标注后的正射影像随机打乱次序,以融合建立模型训练数据集,并按照预设比例将模型训练数据集划分为训练集、验证集和测试集。Step S223, randomly shuffle the order of the data set similar to the architectural style of the research area and the labeled orthophoto, to establish a model training data set by fusion, and divide the model training data set into training data sets according to a preset ratio set, validation set, and test set. 5.如权利要求4所述的基于正射影像边界检测的倾斜摄影模型单体化方法,其特征在于,步骤S2中,利用训练好的神经网络模型对正射影像进行边界检测,得到建筑正射投影轮廓边界,包括:5. The oblique photography model singulation method based on orthophoto boundary detection as claimed in claim 4, is characterized in that, in step S2, utilizes trained neural network model to carry out boundary detection to orthophoto, obtains building orthophoto. Projective contour boundaries, including: 步骤S25,基于训练好的神经网络模型可接收影像尺寸确定切割尺寸,基于所述切割尺寸对所述正射影像进行均匀影像切割,以获得正射影像预测集;Step S25, determining a cutting size based on the image size that the trained neural network model can receive, and performing uniform image cutting on the orthophoto based on the cutting size to obtain an orthophoto prediction set; 步骤S26,利用所述训练好的神经网络模型对所有正射影像预测集进行检测,以得到所有正射影像预测图集的建筑物区块二值图。Step S26, using the trained neural network model to detect all orthophoto prediction sets to obtain building block binary maps of all orthophoto prediction atlases. 6.如权利要求1所述的基于正射影像边界检测的倾斜摄影模型单体化方法,其特征在于,步骤S3,将所述建筑正射投影轮廓边界进行规则化,以得到正射影像规则化后的建筑正射投影轮廓边界,包括:6. 
The oblique photography model singulation method based on orthophoto boundary detection as claimed in claim 1, it is characterized in that, in step S3, described building orthographic projection outline boundary is regularized, to obtain orthophoto rule The transformed building orthographic outline boundary, including: 步骤S301,基于任意多边形种子填充方法,对正射影像预测图集的建筑物区块二值图进行孔洞填充,以得到填充后的建筑物区块二值图;Step S301, based on the arbitrary polygon seed filling method, fill holes in the binary image of the building block in the orthophoto prediction atlas, so as to obtain the filled binary image of the building block; 步骤S302,利用分水岭算法,在填充后的建筑物区块二值图中分割存在局部连接的建筑区块,以得到分割二值图;Step S302, using the watershed algorithm, segment the building blocks with partial connections in the filled building block binary graph to obtain the segmented binary graph; 步骤S303,利用腐蚀算法,在所述分割二值图中扩大建筑区块之间的缝隙,以得优化后的建筑区块二值图;Step S303, using an erosion algorithm to expand the gaps between the building blocks in the segmented binary graph to obtain an optimized binary graph of the building blocks; 步骤S304,基于二值图轮廓提取算法,对步优化后的建筑区块二值图进行各建筑轮廓的提取;Step S304, based on the binary image outline extraction algorithm, extract the outline of each building on the binary image of the building block after step optimization; 步骤S305,利用多边形拟合曲线方法,对提取到的各建筑轮廓边界进行近似拟合处理,以得到建筑轮廓边界多边形;Step S305, using the polygon fitting curve method, perform approximate fitting processing on the extracted building outline boundaries to obtain building outline boundary polygons; 步骤S306,获取所述建筑轮廓边界多边形中的每条边的长度和方位角;Step S306, obtaining the length and azimuth of each side in the building outline boundary polygon; 步骤S307,比较建筑轮廓边界多边形的每条边的长度,选择最长的边作为主方向;Step S307, compare the length of each side of the building outline boundary polygon, and select the longest side as the main direction; 步骤S308,将建筑轮廓边界绕中心点旋转至与上述主方向垂直或者平行的位置,以得到旋转后的建筑轮廓边界;Step S308, rotate the building outline boundary around the center point to a position perpendicular or parallel to the above-mentioned main 
direction, to obtain the rotated building outline boundary; Step S309, correcting the adjacent edges of the rotated building outline boundary: when adjacent edges are perpendicular, taking their intersection point; when adjacent edges are parallel, based on a distance threshold between the adjacent edges, translating the shorter edge onto the longer edge or inserting a connecting line segment between them, so as to finally generate the regularized building orthographic-projection outline boundary of the orthophoto.

7. The oblique photography model singulation method based on orthophoto boundary detection according to claim 6, wherein in step S3, performing real-scene model coordinate conversion on the regularized building orthographic-projection outline boundary to obtain the geographic coordinate value of each corner point of the building outline comprises:

Step S321, extracting the affine matrix of the orthophoto via the GDAL raster spatial data conversion library, wherein the affine matrix contains the geographic coordinates (X, Y) of the upper-left corner point of the image and the scaling ratio α for converting pixel coordinates into actual geographic coordinates;

Step S322, constructing the conversion function from the pixel coordinates (x, y) of any corner point of a regularized building orthographic-projection outline boundary in the orthophoto to the corresponding geographic coordinates (X_COOR, Y_COOR) as:

X_COOR = X + x·α;

Y_COOR = Y + y·α;

Step S323, according to the conversion function, converting the pixel coordinates of each corner point of each regularized building orthographic-projection outline boundary into the corresponding geographic coordinates.

8. The oblique photography model singulation method based on orthophoto boundary detection according to claim 7, wherein in step S3, generating the real-scene model building boundary plane vector map based on the geographic coordinate value of each corner point of the building outline comprises:

Step S331, based on the geographic coordinate value of each corner point of the building outline, generating the real-scene model building boundary plane vector map, the real-scene model building boundary plane vector map and the oblique photography three-dimensional model being in the same geographic coordinate system.

9. The oblique photography model singulation method based on orthophoto boundary detection according to claim 1, wherein in step S4, building the building bounding-box model based on the real-scene model building boundary plane vector map comprises:

Step S401, obtaining the three-dimensional geographic coordinate information of the aerial-triangulation tie points according to the aerial triangulation result;

Step S402, according to the geographic coordinate value of each corner point of the building outline and the three-dimensional geographic coordinates of the aerial-triangulation tie points, determining the containment relation between the tie points and the building outlines based on the ray-casting principle, and filtering out the tie points located inside each building outline;

Step S403, comparing the elevations of the tie points inside each building outline to obtain the lowest elevation and the highest elevation within that outline;

Step S404, subtracting the lowest elevation from the highest elevation within each building outline to obtain the bounding-box height of that building;

Step S405, taking the vector geometric polygon in the real-scene model building boundary plane vector map as the bottom face of the building's bounding-box model and the bounding-box height of the building as the height of the bounding-box model, creating the building bounding-box polyhedron model.

10. The oblique photography model singulation method based on orthophoto boundary detection according to claim 9, wherein superimposing the bounding-box model onto the oblique photography three-dimensional model to obtain a superimposed three-dimensional model, and rendering the triangular patches in the superimposed three-dimensional model and overlaying a specified color, so as to realize automatic singulation of the oblique photography real-scene model, comprises:

superimposing the building bounding-box model onto the oblique photography three-dimensional model at the lowest elevation position within the building outline to obtain a composite model, and rendering the composite model and overlaying a specified color on it, so as to realize highlighted display of the singulated model and thereby achieve dynamic singulation of the oblique photography real-scene model.
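The coordinate conversion of claim 7 (steps S321–S323) can be sketched as follows. This is an illustrative simplification rather than the patented implementation: the function names and the single scaling ratio `alpha` are assumptions, whereas a full GDAL geotransform carries six coefficients (origin, two pixel sizes, and two rotation terms).

```python
def pixel_to_geo(x, y, origin_x, origin_y, alpha):
    """Convert a corner point's orthophoto pixel coordinates (x, y) into
    geographic coordinates, given the geographic coordinates of the image's
    upper-left corner and the pixel-to-ground scaling ratio alpha.

    Mirrors the claim's conversion function X_COOR = X + x*alpha,
    Y_COOR = Y + y*alpha.  (For a north-up image, the y-axis scale in a
    real GDAL geotransform is typically negative.)
    """
    return origin_x + x * alpha, origin_y + y * alpha


def outline_to_geo(corners_px, origin_x, origin_y, alpha):
    """Apply the conversion to every corner point of a regularized
    building outline boundary (step S323)."""
    return [pixel_to_geo(x, y, origin_x, origin_y, alpha)
            for x, y in corners_px]
```

For example, with an assumed image origin of (500000.0, 3400000.0) and a 5 cm ground pixel, the corner at pixel (100, 40) maps to (500005.0, 3400002.0).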
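The point-in-outline test that step S402 of claim 9 bases on the ray method is the classic even–odd ray-casting rule: cast a horizontal ray from the tie point and count edge crossings. A minimal sketch (the function name and data layout are assumptions, not the patent's code):

```python
def point_in_polygon(px, py, polygon):
    """Ray-casting containment test (step S402): cast a horizontal ray
    from (px, py) toward +x and count how many polygon edges it crosses;
    an odd count means the point lies inside the building outline.
    `polygon` is a list of (x, y) corner coordinates."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the ray's y level?
        if (y1 > py) != (y2 > py):
            # x coordinate where the edge crosses the horizontal ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside
```

Tie points for which this returns True for a given outline are the ones kept when computing that building's elevation range.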
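Steps S403–S405 then reduce to taking the elevation range of the tie points inside each outline and extruding the footprint polygon by that height. A hedged sketch, with illustrative names and a plain dictionary standing in for whatever polyhedron structure the patented method actually builds:

```python
def bounding_box_from_tie_points(footprint, tie_point_elevations):
    """Describe a building bounding-box model (steps S403-S405): the
    footprint polygon from the boundary vector map is the bottom face,
    and the box height is the elevation range of the aerial-triangulation
    tie points that fall inside the outline."""
    z_min = min(tie_point_elevations)   # lowest elevation (step S403)
    z_max = max(tie_point_elevations)   # highest elevation (step S403)
    return {
        "bottom_face": footprint,        # polygon corner coordinates
        "base_elevation": z_min,         # placement height used in claim 10
        "height": z_max - z_min,         # bounding-box height (step S404)
    }
```

Placing the resulting box at `base_elevation` corresponds to claim 10's superposition of the bounding box at the lowest elevation within the building outline.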
CN202111373225.8A 2021-11-19 2021-11-19 A method of singulation of oblique photographic model based on orthophoto boundary detection Pending CN114219819A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111373225.8A CN114219819A (en) 2021-11-19 2021-11-19 A method of singulation of oblique photographic model based on orthophoto boundary detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111373225.8A CN114219819A (en) 2021-11-19 2021-11-19 A method of singulation of oblique photographic model based on orthophoto boundary detection

Publications (1)

Publication Number Publication Date
CN114219819A true CN114219819A (en) 2022-03-22

Family

ID=80697550

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111373225.8A Pending CN114219819A (en) 2021-11-19 2021-11-19 A method of singulation of oblique photographic model based on orthophoto boundary detection

Country Status (1)

Country Link
CN (1) CN114219819A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903304A (en) * 2019-02-25 2019-06-18 武汉大学 An Algorithm for Automatically Extracting Building Outlines Based on Convolutional Neural Network and Polygon Regularization
CN110310355A (en) * 2019-06-21 2019-10-08 永州电力勘测设计院有限公司 Oblique photograph model monomerization approach based on multitexture mapping
CN110379004A (en) * 2019-07-22 2019-10-25 泰瑞数创科技(北京)有限公司 The method that a kind of pair of oblique photograph achievement carries out terrain classification and singulation is extracted
CN110838129A (en) * 2019-11-18 2020-02-25 四川视慧智图空间信息技术有限公司 Three-dimensional building model contour characteristic line extraction method based on oblique photogrammetry
CN111325684A (en) * 2020-02-01 2020-06-23 武汉大学 A semi-automatic high spatial resolution remote sensing image extraction method for buildings of different shapes
CN112184908A (en) * 2020-09-07 2021-01-05 山西省工业设备安装集团有限公司 3D Tiles format model bounding box data generation method for realizing oblique photography model based on Cesum

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GENG ZHONGYUAN; WANG FENG; LIU FEI; WANG TAO; TANG TINGTING; HU CHENXI: "Research and application of real-scene 3D model technology for oblique aerial photography", Beijing Surveying and Mapping, no. 06, 25 December 2017 (2017-12-25), page 2 *
GUO LIGANG; GUO LIKAI: "Research on automated generation of point-pricked photos for Pixel Factory", Geomatics & Spatial Information Technology, vol. 41, no. 08, 25 August 2018 (2018-08-25), pages 216-218 *
CHEN HANG: "A singulation technique for oblique photography models based on multiple texture mapping", Geomatics & Spatial Information Technology, no. 10, 25 October 2018 (2018-10-25)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114417489A (en) * 2022-03-30 2022-04-29 宝略科技(浙江)有限公司 Building base contour refinement extraction method based on real-scene three-dimensional model
CN114429530A (en) * 2022-04-06 2022-05-03 武汉峰岭科技有限公司 Method, system, storage medium and device for automatically extracting three-dimensional model of building
CN114429530B (en) * 2022-04-06 2022-06-24 武汉峰岭科技有限公司 Method, system, storage medium and device for automatically extracting three-dimensional model of building
CN115063551A (en) * 2022-08-18 2022-09-16 北京山维科技股份有限公司 Method and device for generating slice orthoimage based on oblique photography three-dimensional model
CN115063551B (en) * 2022-08-18 2022-11-22 北京山维科技股份有限公司 Method and device for generating slice orthoimage based on oblique photography three-dimensional model
CN115994987A (en) * 2023-03-21 2023-04-21 天津市勘察设计院集团有限公司 Rural building extraction and vectorization method based on inclined three-dimensional model
CN116597150A (en) * 2023-07-14 2023-08-15 University of Science and Technology Beijing Method and device for all-element singulation of oblique photography model based on deep learning
CN116597150B (en) * 2023-07-14 2023-09-22 University of Science and Technology Beijing Deep learning-based oblique photography model full-element singulation method and device
CN116664581A (en) * 2023-08-02 2023-08-29 山东翰林科技有限公司 Oblique photography model quality verification and optimization method
CN116664581B (en) * 2023-08-02 2023-11-10 山东翰林科技有限公司 Oblique photography model quality verification and optimization method
CN117173341A (en) * 2023-10-15 2023-12-05 广东优创合影文化传播股份有限公司 3D modeling projection method and system based on digitization
CN117173341B (en) * 2023-10-15 2024-07-05 广东优创合影文化传播股份有限公司 3D modeling projection method and system based on digitization
CN117454495A (en) * 2023-12-25 2024-01-26 北京飞渡科技股份有限公司 CAD vector model generation method and device based on building sketch outline sequence
CN117454495B (en) * 2023-12-25 2024-03-15 北京飞渡科技股份有限公司 CAD vector model generation method and device based on building sketch outline sequence
CN119004643A (en) * 2024-10-25 2024-11-22 上海建工四建集团有限公司 Intelligent drawing method and system for picture frame based on computer vision and pixel clustering
CN119004643B (en) * 2024-10-25 2025-01-03 上海建工四建集团有限公司 Intelligent drawing method and system for picture frame based on computer vision and pixel clustering

Similar Documents

Publication Publication Date Title
CN114219819A (en) A method of singulation of oblique photographic model based on orthophoto boundary detection
US7133551B2 (en) Semi-automatic reconstruction method of 3-D building models using building outline segments
CN102506824B (en) Method for generating digital orthophoto map (DOM) by urban low altitude unmanned aerial vehicle
WO2018061010A1 (en) Point cloud transforming in large-scale urban modelling
CN110866531A (en) Building feature extraction method and system based on three-dimensional modeling and storage medium
CN113192193A (en) High-voltage transmission line corridor three-dimensional reconstruction method based on Cesium three-dimensional earth frame
CN113192200B (en) Method for constructing urban real scene three-dimensional model based on space-three parallel computing algorithm
CN116543117B (en) A high-precision three-dimensional modeling method for large scenes from drone images
US20240005599A1 (en) Data normalization of aerial images
CN104157011A (en) Modeling method for three-dimensional terrain
RU2612571C1 (en) Method and system for recognizing urban facilities
Zhao et al. Completing point clouds using structural constraints for large-scale points absence in 3D building reconstruction
JP2002092658A (en) Three-dimensional digital map creation device and storage medium storing three-dimensional digital map creation program
KR101079475B1 (en) 3D Urban Spatial Information Construction System Using Point Cloud Filtering
CN110163962A (en) Method for outputting actual terrain contour line based on Smart 3D oblique photography technology
JP2014126537A (en) Coordinate correction device, coordinate correction program, and coordinate correction method
KR101079531B1 (en) Road layer generation system using point cloud data
CN116543116A (en) Method, system, equipment and terminal for three-dimensional virtual visual modeling of outcrop in field
CN116310756A (en) Remains identification method, remains identification device, electronic equipment and computer storage medium
JP7204087B2 (en) Object recognition device
Gao et al. CUS3D: A New Comprehensive Urban-Scale Semantic-Segmentation 3D Benchmark Dataset
KR101114904B1 (en) Urban Spatial Information Construction System and Method Using Dohwawon and Aeronautical Laser Survey Data
CN113192204A (en) Three-dimensional reconstruction method of building in single inclined remote sensing image
Kim et al. Automatic Method for Generating 3d Building Models with Texture from Uav Images
Chio et al. The establishment of 3D LOD2 objectivization building models based on data fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination