CN104484668A - Unmanned aerial vehicle multi-overlapped-remote-sensing-image method for extracting building contour line


Info

Publication number
CN104484668A
CN104484668A (application CN201510025503.9A)
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN201510025503.9A
Other languages
Chinese (zh)
Other versions
CN104484668B (en)
Inventor
眭海刚
涂继辉
贾曲
宋志娜
陈光
徐川
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201510025503.9A priority Critical patent/CN104484668B/en
Publication of CN104484668A publication Critical patent/CN104484668A/en
Application granted granted Critical
Publication of CN104484668B publication Critical patent/CN104484668B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Measurement Of Radiation (AREA)

Abstract

The invention discloses a method for extracting building contour lines from multi-overlap unmanned aerial vehicle (UAV) remote sensing images. The method comprises the following steps: a three-dimensional point cloud is generated by combining aerial triangulation with dense matching and then filtered, and buildings are detected in the point cloud; after the walls of each detected building are removed, a rough building contour is extracted from the roof information; the rough contour is superimposed on the stitched images as a buffer area and also serves as shape prior information, a level set algorithm evolves the contour within the buffer area, and an accurate building contour is finally obtained. Because the method makes full use of the three-dimensional point cloud generated from the multiple overlapping images while jointly exploiting the high-accuracy geometric information of the high-resolution remote sensing images, the accuracy of building contour extraction is markedly improved and the complexity of the method is reduced.

Description

Building contour line extraction method for unmanned aerial vehicle multi-overlapping remote sensing images
Technical Field
The invention relates to the technical field of remote sensing image applications, and in particular to a method for extracting building contour lines from unmanned aerial vehicle remote sensing images.
Background
Buildings are important geospatial elements of a city and are significant for application fields such as urban planning and management, monitoring urban development and change, and disaster detection and assessment. Building contour extraction is an important step in establishing and updating urban basic geographic information systems. As a novel remote sensing platform, the unmanned aerial vehicle offers highly automated flight operation, can fly and photograph autonomously along a preset route, provides remote sensing data and low-altitude video monitoring in real time, and is characterized by strong maneuverability, convenience, and low cost.
High-resolution remote sensing images contain a large amount of rich information, and building contour extraction is often disturbed by various other ground objects, such as the distinction between buildings and non-buildings, occlusion by trees around buildings, and the influence of road edges. Extracting building contours from a single image alone is therefore technically difficult. Contour extraction should rely not only on the extraction and analysis of two-dimensional remote sensing information but also on three-dimensional building information, so that the two kinds of information fuse and complement each other, which is more conducive to extracting building contours from remote sensing images. Typical methods for extracting building contours from high-resolution remote sensing images include the following. 1) Contour extraction based on a single high-resolution image. Although such images show clear building contour information, man-made buildings are difficult to distinguish from non-buildings, and trees around a building also interfere with its contour, so this approach has limitations. 2) Shadow-assisted contour extraction. Although shadows indirectly provide building height information, shadow extraction lacks universality and many parameters are needed to derive building height from shadows, so this approach is difficult to apply in practice. 3) Contour extraction based on Lidar combined with remote sensing images. This approach uses the three-dimensional information of Lidar together with the high-precision geometric contour information of images, letting the strengths of the two data types compensate for each other's weaknesses. However, high-precision registration of Lidar with remote sensing images is difficult, and Lidar data acquisition is expensive. 4) Contour extraction based on aerial stereo images. This approach obtains three-dimensional information by stereo matching while using the high-precision two-dimensional information of the images, extracting contours through the complementarity of the two. However, the stereo coverage is limited in extent, which affects contour extraction for buildings over large areas. A method whose data are easy to acquire, whose extraction is highly automated, and whose results are accurate enough to meet actual production needs is therefore urgently sought.
Disclosure of Invention
The invention makes full use of the three-dimensional point cloud generated from multiple overlapping images while jointly exploiting the high-precision information of the remote sensing images, thereby markedly improving the accuracy of building contour extraction.
The technical scheme of the invention is a method for extracting building contour lines from multi-overlap unmanned aerial vehicle remote sensing images, comprising the following steps:
step one, performing aerial triangulation adjustment on the unmanned aerial vehicle remote sensing images, and meanwhile densely matching the images with a GPU-accelerated PMVS algorithm, finally obtaining high-precision dense color point clouds;
step two, stitching the adjusted unmanned aerial vehicle remote sensing images;
step three, filtering the color point cloud; firstly, an improved morphological filtering algorithm is utilized to separate the ground from the non-ground, then, color invariants are utilized to filter vegetation in ground points, and finally, elevations and areas are utilized as thresholds to filter non-buildings;
step four, detecting buildings in the point cloud by using a region growing algorithm;
step five, deleting the walls of the building and fitting the roof boundary, finally obtaining the rough contour information of the building;
step six, the rough building contour obtained in step five is used as a buffer area for building contour extraction and is superimposed on the stitched image;
and step seven, using the shape of the rough building contour as prior information, evolving the precise building contour within the buffer area with a level set algorithm.
The first step comprises the following steps:
(1.1) preprocessing the remote sensing image of the multi-view overlapping unmanned aerial vehicle by using prior information:
(1.2) performing aerial triangulation on the basis of step (1.1), solving the exterior orientation elements of each image by means of the aerial triangulation network, and carrying out overall bundle adjustment;
and (1.3) according to the image groups, on the basis of step (1.2), performing rapid dense matching with a prior-art GPU (graphics processing unit) accelerated PMVS (patch-based multi-view stereo) algorithm to generate a dense three-dimensional point cloud, and using the reconstructed point cloud as three-dimensional elevation data.
The second step comprises the following steps:
(2.1) feature extraction: extracting the features of the image by using SIFT;
(2.2) image registration: first perform coarse registration, searching for matched feature points with a k-d tree; then perform fine registration, using the RANSAC algorithm to eliminate the wrong matches that coarse registration often produces; the registration yields a transformation matrix between the images;
(2.3) image stitching: splicing images through the transformation matrix obtained in the step (2.2);
(2.4) image fusion: after stitching, fuse the images with a bilinear interpolation algorithm, as sketched below.
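The following minimal sketch in Python with OpenCV illustrates steps (2.1)-(2.4); the SIFT, FLANN k-d tree, and RANSAC calls follow the text, while the ratio-test threshold, output canvas size, and simple overwrite blending are illustrative assumptions rather than the patent's exact implementation.

```python
import cv2
import numpy as np

def stitch_pair(img1, img2):
    # (2.1) feature extraction with SIFT
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)

    # (2.2) coarse registration: k-d tree (FLANN) matching, then RANSAC fine registration
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    good = [m for m, n in flann.knnMatch(d1, d2, k=2)
            if m.distance < 0.7 * n.distance]            # ratio test (assumed threshold)
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # transformation matrix

    # (2.3)-(2.4) stitch with the matrix; bilinear interpolation during warping
    h, w = img2.shape[:2]
    mosaic = cv2.warpPerspective(img1, H, (2 * w, h), flags=cv2.INTER_LINEAR)
    mosaic[:h, :w] = img2                                 # naive overwrite fusion
    return mosaic
```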
The third step comprises the following steps:
(3.1) separating the ground points and the non-ground points of the point cloud by using improved morphological filtering;
(3.2) filtering the vegetation by using the color invariant in the ground points in the step (3.1);
and (3.3) filtering out non-building target points by using a threshold value based on the characteristics of the building.
The step (3.1) comprises the following processes:
firstly, a window of fixed size is formed from any point and its neighborhood points, and the lowest point within the window is detected by a morphological opening operation; if the difference between a point's elevation and the elevation of the lowest point is within a threshold range, the point is marked as a ground point, and all points of the point cloud are processed in this way for filtering;
then, according to $w_{k+1} = 2 \times w_k + 1$, the window size required for the next filtering pass is obtained, provided it is smaller than the preset maximum filtering window, and morphological filtering is performed again; finally, when the window exceeds the preset maximum, filtering ends; here $k$ is the iteration number and $w_k$ is the size of the previous filtering window; a sketch on a rasterized elevation grid follows.
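A minimal sketch of this progressive morphological filter, assuming the point cloud has been rasterized into a 2-D elevation grid; the elevation-difference threshold and the concrete window sizes are assumptions not fixed by the text, and the window-growth rule follows the formula as reconstructed above.

```python
import numpy as np
from scipy.ndimage import grey_opening

def ground_mask(dem, dh=0.5, w0=3, w_max=31):
    """dem: 2-D elevation grid; dh: elevation-difference threshold (assumed)."""
    ground = np.ones(dem.shape, dtype=bool)
    w = w0
    while w <= w_max:                             # stop when window exceeds the preset maximum
        opened = grey_opening(dem, size=(w, w))   # opening surface tracks the lowest points
        ground &= (dem - opened) < dh             # small residual => ground point
        w = 2 * w + 1                             # reconstructed window-growth rule
    return ground
```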
The step (3.2) comprises the following processes:
because the point cloud generated by dense image matching carries color information, green vegetation is filtered out using the color invariant theory; let the coordinates of each point in the point cloud be (x, y, z), the three color channels be (R, G, B), and the vegetation threshold of the color invariant be $T_g$; the color invariant defined from the green and blue channels is:
$$\psi_g(x,y,z) = \frac{4}{\pi} \times \arctan\!\left(\frac{I_g(x,y,z) - I_b(x,y,z)}{I_g(x,y,z) + I_b(x,y,z)}\right) \qquad (1)$$
where $I_g(x,y,z)$ and $I_b(x,y,z)$ denote the green and blue component values of the point cloud at point (x, y, z), and $\psi_g(x,y,z)$ denotes the color invariant value at that point; when $\psi_g < T_g$, the point is a vegetation point; when $\psi_g > T_g$, the point is a non-vegetation point. A sketch of this test follows.
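A minimal sketch of the vegetation test of formula (1), assuming the point colors are held in an (N, 3) RGB array; the small epsilon guarding division by zero is an implementation assumption.

```python
import numpy as np

def vegetation_mask(rgb, t_g):
    """rgb: (N, 3) point colors; t_g: vegetation threshold T_g (data dependent)."""
    g, b = rgb[:, 1].astype(float), rgb[:, 2].astype(float)
    psi = (4.0 / np.pi) * np.arctan((g - b) / (g + b + 1e-12))  # formula (1)
    return psi < t_g   # True for vegetation points, following the text's convention
```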
In step four, the point cloud of step three is segmented with a region growing method based on plane fitting to obtain the point cloud region $V_{Bi}$ of each building, specifically as follows:
(4.1) the points of the point cloud are divided into two classes: if any point in a point's neighborhood is missing, the point is a boundary point; otherwise it is an interior point;
(4.2) let $p_0(x_0, y_0, z_0)$ be an interior point with eight neighborhood points $\{p_1, p_2, \dots, p_8\}$; a plane equation (2) is fitted to these 9 points by least squares, realized as follows:
$$z = ax + by + c \qquad (2)$$
for point $p_0$ and the 8 points in its neighborhood $(x_i, y_i, z_i)$, $i = 0, 1, 2, \dots, 7$, the plane equation (2) is fitted so that equation (3) is minimized;
$$S = \sum_{i=0}^{7} (ax_i + by_i + c - z_i)^2 \qquad (3)$$
to minimize $S$, the conditions $\partial S/\partial a = 0$, $\partial S/\partial b = 0$, $\partial S/\partial c = 0$ should be satisfied, from which the following normal system of equations is derived:
$$\begin{pmatrix} \sum x_i^2 & \sum x_i y_i & \sum x_i \\ \sum x_i y_i & \sum y_i^2 & \sum y_i \\ \sum x_i & \sum y_i & n \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} \sum x_i z_i \\ \sum y_i z_i \\ \sum z_i \end{pmatrix}, \quad i = 0, 1, \dots, 7;\ n = 8 \qquad (4)$$
(4.3) using the plane solved from (4), the SSD and variance of any interior point over its 8-neighborhood are found according to formula (5):
$$SSD = \sum_{p_k \in M} (z_k - h_k)^2 \qquad (5)$$
where $M$ is the set consisting of $p_0$ and its eight neighborhood points, and $h_k$ and $z_k$ are the observed elevation value and the fitted-plane elevation value, respectively;
(4.4) sorting the points of the point cloud in ascending order of SSD value and taking the point with the smallest SSD value as the seed point;
(4.5) for each point in the neighborhood of the seed point, calculating its distance $h$ to the plane of the seed point; if $h < h_T$, where $h_T$ is an elevation threshold, merging the point into the same region and marking the region; otherwise the point is a non-patch point;
and (4.6) when all neighborhood points have been traversed, taking the unprocessed point with the smallest SSD value as the new seed point and repeating operation (4.5) until all points are traversed; a sketch of this procedure follows.
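The sketch below illustrates steps (4.2)-(4.6): the normal-equation plane fit of formula (4), the SSD ranking of formula (5), and threshold-based region growing; the k-d-tree neighbour lookup and the default threshold value are implementation assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_plane(pts):
    """Solve the normal equations (4) for the plane z = a*x + b*y + c."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.array([[(x * x).sum(), (x * y).sum(), x.sum()],
                  [(x * y).sum(), (y * y).sum(), y.sum()],
                  [x.sum(),       y.sum(),       len(pts)]])
    return np.linalg.solve(A, np.array([(x * z).sum(), (y * z).sum(), z.sum()]))

def region_grow(points, h_T=0.2, k=9):
    """points: (N, 3) array; h_T: elevation threshold of step (4.5) (assumed value)."""
    _, nbrs = cKDTree(points[:, :2]).query(points[:, :2], k=k)  # point + 8 neighbours
    planes = [fit_plane(points[n]) for n in nbrs]
    ssd = np.array([((points[n][:, 2]
                      - (p[0] * points[n][:, 0] + p[1] * points[n][:, 1] + p[2])) ** 2).sum()
                    for n, p in zip(nbrs, planes)])             # formula (5)
    labels = np.full(len(points), -1)
    region = 0
    for seed in np.argsort(ssd):                                # smallest SSD first
        if labels[seed] != -1:
            continue
        a, b, c = planes[seed]
        stack = [seed]
        while stack:                                            # steps (4.5)-(4.6)
            i = stack.pop()
            for j in nbrs[i]:
                h = abs(points[j, 2] - (a * points[j, 0] + b * points[j, 1] + c))
                if labels[j] == -1 and h < h_T:
                    labels[j] = region
                    stack.append(j)
        region += 1
    return labels
```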
In step five, wall points are deleted from the point cloud region $V_{Bi}$ of each building obtained in step four, and the rough contour information of the building is finally obtained by solving for the roof boundary points, specifically as follows:
(5.1) according to the patches segmented in step four, a patch whose normal vector is parallel to the ground is a wall patch, and a patch whose normal vector is perpendicular to the ground is a roof patch; the wall patches are therefore deleted according to their normal vectors;
(5.2) obtaining the boundary of the point cloud from the roof point cloud of step (5.1) with an alpha-shape algorithm, as follows:
a) take an arbitrary point p1 from the roof point cloud and, among the remaining points, search for the point set p2 whose distance to p1 is no greater than 2α, where α is the radius of the rolling circle; let p2 = {p21, p22, p23, …, p2n};
b) take any point p2i from p2 and find the circle center p0 through points p1 and p2i using formulas (6) and (7); given two points (x1, y1), (x2, y2) and the circle radius α, the equations for the center (x0, y0) are:
$$\begin{cases} (x_1 - x_0)^2 + (y_1 - y_0)^2 = \alpha^2 \\ (x_2 - x_0)^2 + (y_2 - y_0)^2 = \alpha^2 \end{cases} \qquad (6)$$
it is difficult to solve these equations directly, so the distance intersection algorithm from surveying and mapping is used to obtain:
$$x_0 = x_1 + \frac{x_2 - x_1}{2} + H(y_2 - y_1), \qquad y_0 = y_1 + \frac{y_2 - y_1}{2} - H(x_2 - x_1) \qquad (7)$$
where $H = \sqrt{\alpha^2/S^2 - 1/4}$ and $S^2 = (x_1 - x_2)^2 + (y_1 - y_2)^2$; the second candidate center is obtained by flipping the signs of the $H$ terms.
c) compute the distance l from every remaining point of the point set p2 to p0; if all l > α, then p1 and p2i are boundary points; if some l < α, proceed to step d);
d) repeat steps a), b), c) for the other points in p2 until all points in p2 have been judged; a sketch of this boundary test follows.
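A minimal sketch of the alpha-shape boundary test of steps a)-d), operating on an (N, 2) array of roof points; the brute-force pair enumeration is an assumption made for clarity, not an efficient implementation.

```python
import numpy as np

def circle_centres(p1, p2, alpha):
    """Both centres of radius-alpha circles through p1, p2 (formulas (6)-(7))."""
    S2 = ((p1 - p2) ** 2).sum()
    H = np.sqrt(max(alpha * alpha / S2 - 0.25, 0.0))
    mid, d = (p1 + p2) / 2.0, p2 - p1
    off = H * np.array([d[1], -d[0]])              # perpendicular offset
    return mid + off, mid - off

def boundary_points(pts, alpha):
    """pts: (N, 2) roof points; returns indices of alpha-shape boundary points."""
    boundary = set()
    n = len(pts)
    for i in range(n):
        for j in range(i + 1, n):
            if ((pts[i] - pts[j]) ** 2).sum() > (2 * alpha) ** 2:
                continue                            # step a): only pairs closer than 2*alpha
            others = np.ones(n, dtype=bool)
            others[[i, j]] = False
            for c in circle_centres(pts[i], pts[j], alpha):
                dist = np.sqrt(((pts - c) ** 2).sum(axis=1))
                if np.all(dist[others] > alpha):    # step c): empty circle => boundary pair
                    boundary.update((i, j))
    return sorted(boundary)
```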
In step six, the roof point cloud contour points of each building obtained in step five are superimposed on the image to obtain the building buffer region $\Omega_{Bi}$ on the image; the specific process is as follows:
let $P_{3\times4}$ denote the projection matrix of single-view geometry; with the interior and exterior orientation elements known, the $P_{3\times4}$ matrix is computed as:
$$P = \begin{pmatrix} -f & 0 & x_0 \\ 0 & f & y_0 \\ 0 & 0 & 1 \end{pmatrix} R^{T} \begin{pmatrix} 1 & 0 & 0 & -X_s \\ 0 & 1 & 0 & -Y_s \\ 0 & 0 & 1 & -Z_s \end{pmatrix} \qquad (8)$$
where $P$ is the projection matrix $P_{3\times4}$, $f$ is the focal length of the image, $x_0$ and $y_0$ are the offsets of the principal point from the optical center in the horizontal and vertical directions, $X_s, Y_s, Z_s$ are the coordinates of the camera center in the world coordinate system, and $R^{T}$ is a 3×3 rotation matrix;
the projection formula is:
$$\begin{pmatrix} xz \\ yz \\ z \end{pmatrix} = P \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \qquad (9)$$
the projection of each point of the point cloud onto the image is computed with formulas (8) and (9); here the lowercase x, y denote the image coordinates of the projected point (z being the projective depth), and the uppercase X, Y, Z denote the coordinates of the object point in the world coordinate system, as in the sketch below.
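The sketch below assembles the projection matrix of formula (8) and reprojects world points per formula (9); the rotation matrix R and the orientation elements are assumed given.

```python
import numpy as np

def projection_matrix(f, x0, y0, R, Xs, Ys, Zs):
    """Formula (8): interior matrix, rotation R (3x3), camera-centre translation."""
    K = np.array([[-f, 0.0, x0],
                  [0.0,  f, y0],
                  [0.0, 0.0, 1.0]])
    T = np.array([[1.0, 0.0, 0.0, -Xs],
                  [0.0, 1.0, 0.0, -Ys],
                  [0.0, 0.0, 1.0, -Zs]])
    return K @ R.T @ T                                  # 3x4 projection matrix P

def project(P, world_pts):
    """Formula (9): project (N, 3) world points to (N, 2) image coordinates."""
    homo = np.hstack([world_pts, np.ones((len(world_pts), 1))])
    uvw = (P @ homo.T).T                                # rows are (xz, yz, z)
    return uvw[:, :2] / uvw[:, 2:3]
```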
In step seven, the points superimposed on the image in step six are connected along the boundary, the resulting rough building contour is superimposed on the stitched image as a buffer area, and meanwhile the shape of the rough contour is used as prior information while a level set algorithm evolves the precise building contour within the buffer area; specifically:
(7.1) boundary tracking is performed on the points superimposed on the image to obtain a closed contour, which is expanded to obtain the buffer region $\Omega_{Bi}$ for building contour extraction on the image;
(7.2) with the contour of the image region obtained in step (7.1) as the initial level set and its shape as prior information, local level set evolution is performed on the remote sensing image inside the buffer region to obtain the fine building contour, realized as follows:
first, let the preset buffer region of each building on the image be $\Omega_{Bi}$, let the contour shape obtained in (7.1) be $\phi_{Pi}$, serving as the prior shape, and let $\phi_{Si}$ be the target segmentation contour; the shape-constrained energy functional is defined as follows:
$$E_{to} = E(c_1, c_2, \phi_{Si}) + E_{sh}(\phi_{Si}, \phi_{Pi}) \qquad (10)$$
the shape energy between the evolving contour and the prior shape of the target is expressed over the target region:
$$E_{sh}(\phi_{Si}, \phi_{Pi}) = \int_{\Omega_{Bi}} \left(\phi_{Si}(x,y) - \phi_{Pi}(x,y)\right)^2 dx\,dy \qquad (11)$$
the energy function of the prior-art C-V level set model is:
$$\begin{aligned} E(\phi, c_1, c_2) ={}& \lambda_1 \int_{\Omega} (u_0(x,y) - c_1)^2 H(\phi)\, dx\,dy \\ &+ \lambda_2 \int_{\Omega} (u_0(x,y) - c_2)^2 (1 - H(\phi))\, dx\,dy \\ &+ \mu \int_{\Omega} H(\phi)\, dx\,dy + \nu \int_{\Omega} \delta(\phi)\, |\nabla \phi|\, dx\,dy \end{aligned} \qquad (12)$$
where $H(\phi)$ is the Heaviside function $H(\phi) = \begin{cases} 1, & \phi \ge 0 \\ 0, & \phi < 0 \end{cases}$,
$\delta(\phi)$ is the Dirac function, $\delta(\phi) = \frac{d H(\phi)}{d\phi}$; $u_0(x,y)$ is the gray value of a point in the image region to be processed; $|\nabla \phi|$ is the modulus of the gradient at the current point; the coefficients $\lambda_1, \lambda_2 > 0$ and $\mu, \nu \ge 0$ are fixed parameters, generally chosen as $\lambda_1 = \lambda_2 = 1$, $\mu = 0$, $\nu = 1$; minimizing (12) yields the corresponding partial differential equation:
$$\frac{\partial \phi}{\partial t} = \delta(\phi)\left[\mu \nabla \cdot \frac{\nabla \phi}{|\nabla \phi|} - \nu - \lambda_1 (u_0(x,y) - c_1)^2 + \lambda_2 (u_0(x,y) - c_2)^2\right] \qquad (13)$$
where the parameters $c_1$ and $c_2$ are obtained from
$$c_1 = \frac{\int_{\Omega} u_0(x,y)\, H(\phi)\, dx\,dy}{\int_{\Omega} H(\phi)\, dx\,dy}, \qquad c_2 = \frac{\int_{\Omega} u_0(x,y)\,[1 - H(\phi)]\, dx\,dy}{\int_{\Omega} [1 - H(\phi)]\, dx\,dy}$$
According to formula (13), the C-V level set evolution method is used to segment and extract the building region, which serves as the building segmentation target $v_{iSeg}$ on the image; the C-V level set is prior art. A sketch of one evolution step follows.
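A minimal sketch of one evolution step of the shape-constrained C-V model of formulas (10)-(13); the smoothed Heaviside and Dirac functions, the shape-prior weight beta, and the time step are standard level-set assumptions not fixed by the text.

```python
import numpy as np

def cv_step(phi, u0, phi_prior, dt=0.5, lam1=1.0, lam2=1.0,
            mu=0.0, nu=1.0, beta=0.2, eps=1.0):
    """One update of phi on grayscale image u0 inside the buffer region."""
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))   # smoothed Heaviside
    delta = (eps / np.pi) / (eps ** 2 + phi ** 2)            # smoothed Dirac
    c1 = (u0 * H).sum() / (H.sum() + 1e-12)                  # mean inside the contour
    c2 = (u0 * (1 - H)).sum() / ((1 - H).sum() + 1e-12)      # mean outside the contour
    gy, gx = np.gradient(phi)
    mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-12
    curv = np.gradient(gx / mag, axis=1) + np.gradient(gy / mag, axis=0)
    force = mu * curv - nu - lam1 * (u0 - c1) ** 2 + lam2 * (u0 - c2) ** 2  # formula (13)
    shape = -2.0 * beta * (phi - phi_prior)                  # gradient of E_sh in (11)
    return phi + dt * (delta * force + shape)
```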
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) Although only a single data source is used, two types of data information can be generated from it, achieving the effect of extracting building contours from two data sources. This reduces the complexity of the method and also saves production cost.
(2) The rough building contour is obtained from the three-dimensional information of the image-derived point cloud while the high-precision contour information of the high-resolution images is exploited; the two complement and fuse with each other, improving the automation and accuracy of building contour extraction.
(3) The rough building contour extracted from the three-dimensional point cloud serves both as the prior shape for level set evolution and as the buffer area for contour extraction, which ensures the speed and accuracy of evolving the building contour with the level set.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 shows the image group of image A in the flight strip diagram of step one.
Detailed Description
The invention provides a method for extracting building contour lines from unmanned aerial vehicle remote sensing images. First, an aerial triangulation algorithm from photogrammetry is combined with multi-view geometric stereo reconstruction from computer vision to rapidly generate a three-dimensional point cloud with geographic coordinates; rough building contour information is then extracted by segmenting the point cloud; a buffer area built from the rough contour is superimposed on the high-resolution image; finally, with the rough contour as prior shape information, a level set evolves iteratively within the buffer area to obtain the precise geometric contour of the building. Since the three-dimensional point cloud is generated by dense image matching, there is no registration difficulty between the point cloud and the image. The technical solution of the invention is described in detail below with reference to the accompanying drawings and embodiments; the flow chart is shown in FIG. 1, and the flow of the embodiment comprises the following steps:
the method comprises the steps of firstly, carrying out adjustment on remote sensing images of the unmanned aerial vehicle by utilizing the air-to-three pairs, and carrying out dense matching on the images by utilizing a PMVS algorithm after GPU acceleration to finally obtain dense color point clouds with high precision and geographic coordinates. The specific implementation is as follows:
(1) preprocessing the remote sensing image of the multi-view overlapping unmanned aerial vehicle by using prior information:
there is certain overlap degree between the adjacent image of unmanned aerial vehicle aerial photograph. Because the data volume of taking photo by plane is very big, directly carry out three-dimensional reconstruction, on the one hand can not obtain better reconstruction effect, on the other hand can make the calculated amount of rebuilding big, the reconstruction time is longer on the other hand. Therefore, the images are grouped by using the existing POS information and the air belt prior information. Since the course overlapping degree of the unmanned aerial vehicle images is 80% in implementation and the side overlapping degree is 35%, 4 continuous images of the same flight band and two continuous images between the flight bands are divided into a group for a certain image. As shown in fig. 2, the image of point a is grouped in the space band diagram, and the black rectangular dotted line frame part is the image grouped in the same group as the image a.
(2) Aerial triangulation is performed on the basis of step (1): the exterior orientation elements of each image are solved by means of the aerial triangulation network, and overall bundle adjustment is carried out. This step can be implemented with the prior art and is not described in detail here.
(3) According to the image grouping, rapid dense matching is performed with a prior-art GPU-accelerated PMVS algorithm on the basis of step (2), generating a dense three-dimensional point cloud; the reconstructed point cloud serves as three-dimensional elevation data. In step two, the adjusted unmanned aerial vehicle remote sensing images are stitched. The specific process is as follows:
(1) Feature extraction: SIFT is used to extract the features of the images.
(2) Image registration: coarse registration is performed first, searching for matched feature points with a k-d tree; fine registration follows, using the RANSAC algorithm to remove the wrong matches that coarse registration often produces. Registration yields the transformation matrix between the images.
(3) Image stitching: the images are stitched with the transformation matrix obtained in step (2).
(4) Image fusion: after stitching, the images are fused with a bilinear interpolation algorithm.
In step three, the color point cloud is filtered. The specific implementation is as follows:
(1) Ground points and non-ground points of the point cloud are separated with improved morphological filtering. The procedure is as follows: first, a window of fixed size is formed from any point and its neighborhood points, and the lowest point within the window is detected by a morphological opening operation; if the difference between a point's elevation and the lowest elevation is within a threshold range, the point is marked as a ground point, and all points of the point cloud are processed in this way for filtering; then, according to $w_{k+1} = 2 \times w_k + 1$ (where $k$ is the iteration number and $w_k$ is the size of the previous filtering window), the window size for the next filtering pass is obtained, provided it is smaller than the preset maximum filtering window, and morphological filtering is performed once more; finally, when the window exceeds the preset maximum, filtering ends.
(2) Vegetation among the ground points of step (1) is filtered out with the color invariant. Because the point cloud generated by dense image matching carries color information, green vegetation can be filtered using the color invariant theory. Let the coordinates of each point in the point cloud be (x, y, z), the three color channels be (R, G, B), and the vegetation threshold of the color invariant be $T_g$; the color invariant defined from the green and blue channels is:
$$\psi_g(x,y,z) = \frac{4}{\pi} \times \arctan\!\left(\frac{I_g(x,y,z) - I_b(x,y,z)}{I_g(x,y,z) + I_b(x,y,z)}\right) \qquad (1)$$
where $I_g(x,y,z)$ and $I_b(x,y,z)$ denote the green and blue component values of the point cloud at point (x, y, z), and $\psi_g(x,y,z)$ denotes the color invariant value at that point. When $\psi_g < T_g$, the point is a vegetation point; when $\psi_g > T_g$, the point is a non-vegetation point.
(3) Non-building target points are filtered out with thresholds based on the characteristics of buildings; for example, the height should not be less than 2 meters and the area not less than 35 square meters, as in the sketch below.
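A minimal sketch of this threshold test; the per-segment statistics (height above the filtered ground, footprint area) are assumed to be precomputed elsewhere, and the example segments are hypothetical.

```python
def is_building(height_above_ground, footprint_area,
                min_height=2.0, min_area=35.0):
    """Thresholds from the text: height >= 2 m and area >= 35 m^2."""
    return height_above_ground >= min_height and footprint_area >= min_area

# Hypothetical segments: name -> (height above ground in m, footprint area in m^2)
segments = {"seg1": (6.5, 120.0), "seg2": (1.2, 50.0), "seg3": (4.0, 20.0)}
buildings = [name for name, (h, a) in segments.items() if is_building(h, a)]
```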
In step four, the point cloud of step three is segmented with a region growing method based on plane fitting to obtain the point cloud region $V_{Bi}$ of each building. The specific process is as follows:
(1) The points of the point cloud are divided into two classes: if any point in a point's neighborhood is missing, the point is a boundary point; otherwise it is an interior point.
(2) Let $p_0(x_0, y_0, z_0)$ be an interior point with eight neighborhood points $\{p_1, p_2, \dots, p_8\}$. A plane equation (2) is fitted to these 9 points by least squares, realized as follows:
z=ax+by+c (2)
where a, b and c are the parameters of the plane equation (2); for point $p_0$ and the 8 points in its neighborhood $(x_i, y_i, z_i)$, $i = 0, 1, 2, \dots, 7$, the plane equation (2) is fitted so that equation (3) is minimized.
$$S = \sum_{i=0}^{7} (ax_i + by_i + c - z_i)^2 \qquad (3)$$
To minimize $S$, the conditions $\partial S/\partial a = 0$, $\partial S/\partial b = 0$, $\partial S/\partial c = 0$ should be satisfied, from which the following normal system of equations is derived:
$$\begin{pmatrix} \sum x_i^2 & \sum x_i y_i & \sum x_i \\ \sum x_i y_i & \sum y_i^2 & \sum y_i \\ \sum x_i & \sum y_i & n \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} \sum x_i z_i \\ \sum y_i z_i \\ \sum z_i \end{pmatrix}, \quad i = 0, 1, \dots, 7;\ n = 8 \qquad (4)$$
(3) Using the plane solved from (4), the SSD and variance of any interior point over its 8-neighborhood are found according to formula (5):
$$SSD = \sum_{p_k \in M} (z_k - h_k)^2 \qquad (5)$$
where $M$ is the set consisting of $p_0$ and its eight neighborhood points, and $h_k$ and $z_k$ are the observed elevation value and the fitted-plane elevation value, respectively.
(4) The points of the point cloud are sorted in ascending order of SSD value, and the point with the smallest SSD value is taken as the seed point.
(5) For each point in the neighborhood of the seed point, its distance $h$ to the plane of the seed point is calculated; if $h < h_T$, the point is merged into the same region and the region is marked; otherwise the point is a non-patch point. The threshold $h_T$ can be preset by a person skilled in the art.
(6) When all neighborhood points have been traversed, the unprocessed point with the smallest SSD value is taken as the new seed point, and operation (5) is repeated until all points are traversed.
In step five, wall points are deleted from the point cloud region $V_{Bi}$ of each building obtained in step four, and the rough contour information of the building is finally obtained by solving for the roof boundary points. The specific implementation process is as follows:
(1) According to the patches segmented in step four, a patch whose normal vector is parallel to the ground is a wall patch, and a patch whose normal vector is perpendicular to the ground is a roof patch. The wall patches are deleted according to their normal vectors, as in the sketch below.
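A minimal sketch of this normal-vector classification for a patch fitted as z = ax + by + c; the angle tolerance is an illustrative assumption.

```python
import numpy as np

def patch_type(a, b, deg_tol=15.0):
    """Classify a patch fitted as z = a*x + b*y + c by its normal (a, b, -1)."""
    n = np.array([a, b, -1.0])
    n /= np.linalg.norm(n)
    tilt = np.degrees(np.arccos(abs(n[2])))   # angle between normal and the vertical
    if tilt < deg_tol:
        return "roof"      # normal perpendicular to the ground plane
    if tilt > 90.0 - deg_tol:
        return "wall"      # normal parallel to the ground plane
    return "other"
```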
(2) The boundary of the point cloud is obtained from the roof point cloud of step (1) with an alpha-shape algorithm. The specific method is: a) take an arbitrary point p1 from the roof point cloud and, among the remaining points, search for the point set p2 whose distance to p1 is no greater than 2α (α being the radius of the rolling circle); let p2 = {p21, p22, p23, …, p2n}; b) take any point p2i from p2 and find the circle center p0 through points p1 and p2i using formulas (6) and (7); given two points (x1, y1), (x2, y2) and the circle radius α, the equations for the center (x0, y0) are:
$$\begin{cases} (x_1 - x_0)^2 + (y_1 - y_0)^2 = \alpha^2 \\ (x_2 - x_0)^2 + (y_2 - y_0)^2 = \alpha^2 \end{cases} \qquad (6)$$
it is difficult to solve these equations directly, so the distance intersection algorithm from surveying and mapping is used to obtain:
$$x_0 = x_1 + \frac{x_2 - x_1}{2} + H(y_2 - y_1), \qquad y_0 = y_1 + \frac{y_2 - y_1}{2} - H(x_2 - x_1) \qquad (7)$$
where $H = \sqrt{\alpha^2/S^2 - 1/4}$ and $S^2 = (x_1 - x_2)^2 + (y_1 - y_2)^2$.
c) compute the distance l from every remaining point of the point set p2 to p0; if all l > α, then p1 and p2i are boundary points; if some l < α, proceed to step d); d) repeat steps a), b), c) for the other points in p2 until all points in p2 have been judged.
In step six, the roof point cloud contour points of each building obtained in step five are superimposed on the image to obtain the building buffer region $\Omega_{Bi}$ on the image. Since the roof boundary points of step five were reconstructed by multi-view stereo matching of the images, the points only need to be re-projected back onto the image using the three-dimensional-to-two-dimensional projection matrix. The specific process is as follows:
Let $P_{3\times4}$ denote the projection matrix of single-view geometry; with the interior and exterior orientation elements known, the $P_{3\times4}$ matrix is computed as:
$$P = \begin{pmatrix} -f & 0 & x_0 \\ 0 & f & y_0 \\ 0 & 0 & 1 \end{pmatrix} R^{T} \begin{pmatrix} 1 & 0 & 0 & -X_s \\ 0 & 1 & 0 & -Y_s \\ 0 & 0 & 1 & -Z_s \end{pmatrix} \qquad (8)$$
where $P$ is the projection matrix $P_{3\times4}$, $f$ is the focal length of the image, $x_0$ and $y_0$ are the offsets of the principal point from the optical center in the horizontal and vertical directions, $X_s, Y_s, Z_s$ are the coordinates of the camera center in the world coordinate system, and $R^{T}$ is a 3×3 rotation matrix.
The projection formula is:
$$\begin{pmatrix} xz \\ yz \\ z \end{pmatrix} = P \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \qquad (9)$$
The projection of each point of the point cloud onto the image is calculated with equations (8) and (9); here the lowercase x, y denote the image coordinates of the projected point (z being the projective depth), and the uppercase X, Y, Z denote the coordinates of the object point in the world coordinate system.
In step seven, the points superimposed on the image in step six are connected along the boundary, the resulting rough building contour is superimposed on the stitched image as a buffer area, and meanwhile the shape of the rough contour is used as prior information while a level set algorithm evolves the precise building contour within the buffer area. The specific process is as follows:
(1) Boundary tracking is performed on the points superimposed on the image to obtain a closed contour, which is expanded to obtain the buffer region $\Omega_{Bi}$ for building contour extraction on the image, as in the sketch below.
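A minimal sketch of this step using OpenCV: the projected points are assumed already rasterized into a binary mask, and the dilation radius is an illustrative assumption.

```python
import cv2
import numpy as np

def buffer_region(mask, radius=15):
    """mask: uint8 image, 255 where projected roof-contour points were rasterized."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    closed = np.zeros_like(mask)
    cv2.drawContours(closed, contours, -1, 255, thickness=cv2.FILLED)  # closed contour
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (2 * radius + 1, 2 * radius + 1))
    return cv2.dilate(closed, kernel)   # expanded contour = buffer region Omega_Bi
```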
(2) With the contour of the image region obtained in step (1) as the initial level set and its shape as prior information, local level set evolution is performed on the remote sensing image inside the buffer region to obtain the fine building contour. The specific implementation is as follows:
first, let the preset buffer region of each building on the image be $\Omega_{Bi}$, let the contour shape obtained in (1) be $\phi_{Pi}$, serving as the prior shape, and let $\phi_{Si}$ be the target segmentation contour; the shape-constrained energy functional is defined as follows:
$$E_{to} = E(c_1, c_2, \phi_{Si}) + E_{sh}(\phi_{Si}, \phi_{Pi}) \qquad (10)$$
the shape energy between the initial contour and the prior shape of the target can be expressed over the target region:
$$E_{sh}(\phi_{Si}, \phi_{Pi}) = \int_{\Omega_{Bi}} \left(\phi_{Si}(x,y) - \phi_{Pi}(x,y)\right)^2 dx\,dy \qquad (11)$$
the energy function of the prior-art C-V level set model is:
$$\begin{aligned} E(\phi, c_1, c_2) ={}& \lambda_1 \int_{\Omega} (u_0(x,y) - c_1)^2 H(\phi)\, dx\,dy \\ &+ \lambda_2 \int_{\Omega} (u_0(x,y) - c_2)^2 (1 - H(\phi))\, dx\,dy \\ &+ \mu \int_{\Omega} H(\phi)\, dx\,dy + \nu \int_{\Omega} \delta(\phi)\, |\nabla \phi|\, dx\,dy \end{aligned} \qquad (12)$$
where $H(\phi)$ is the Heaviside function $H(\phi) = \begin{cases} 1, & \phi \ge 0 \\ 0, & \phi < 0 \end{cases}$,
$\delta(\phi)$ is the Dirac function, $\delta(\phi) = \frac{d H(\phi)}{d\phi}$; $u_0(x,y)$ is the gray value of a point in the image region to be processed; $|\nabla \phi|$ is the modulus of the gradient at the current point; the coefficients $\lambda_1, \lambda_2 > 0$ and $\mu, \nu \ge 0$ are fixed parameters, and it is generally recommended to take $\lambda_1 = \lambda_2 = 1$, $\mu = 0$, $\nu = 1$; minimizing (12) yields the corresponding partial differential equation:
$$\frac{\partial \phi}{\partial t} = \delta(\phi)\left[\mu \nabla \cdot \frac{\nabla \phi}{|\nabla \phi|} - \nu - \lambda_1 (u_0(x,y) - c_1)^2 + \lambda_2 (u_0(x,y) - c_2)^2\right] \qquad (13)$$
where the parameters $c_1$ and $c_2$ are obtained from
$$c_1 = \frac{\int_{\Omega} u_0(x,y)\, H(\phi)\, dx\,dy}{\int_{\Omega} H(\phi)\, dx\,dy}, \qquad c_2 = \frac{\int_{\Omega} u_0(x,y)\,[1 - H(\phi)]\, dx\,dy}{\int_{\Omega} [1 - H(\phi)]\, dx\,dy}$$
According to formula (13), the C-V level set evolution method is used to segment and extract the building region, which serves as the building segmentation target $v_{iSeg}$ on the image; the C-V level set is prior art and is not described in detail herein.
A buffer distance $d_{Buffer1}$ is set; for the target $v_i$ at the corresponding position on the remote sensing image, a local buffer $v_{iBuf}$ is established, and the image area within the contour of the buffer $v_{iBuf}$ is the current region $\Omega$ to be processed. $d_{Buffer1}$ can be preset by a person skilled in the art according to the situation.

Claims (10)

1. A method for extracting building contour lines from unmanned aerial vehicle multi-overlap remote sensing images, characterized by comprising the following steps:
step one, performing aerial triangulation adjustment on the unmanned aerial vehicle remote sensing images, and meanwhile densely matching the images with a GPU-accelerated PMVS algorithm, finally obtaining high-precision dense color point clouds;
step two, stitching the adjusted unmanned aerial vehicle remote sensing images;
step three, filtering the color point cloud;
firstly, an improved morphological filtering algorithm is utilized to separate the ground from the non-ground, then, color invariants are utilized to filter vegetation in ground points, and finally, elevations and areas are utilized as thresholds to filter non-buildings;
step four, detecting buildings in the point cloud by using a region growing algorithm;
step five, deleting the walls of the building and fitting the roof boundary, finally obtaining the rough contour information of the building;
step six, the rough building contour obtained in step five is used as a buffer area for building contour extraction and is superimposed on the stitched image;
and step seven, using the shape of the rough building contour as prior information, evolving the precise building contour within the buffer area with a level set algorithm.
2. The method for extracting the building contour line of the unmanned aerial vehicle multi-overlapping remote sensing image according to claim 1, wherein the first step comprises the following steps:
(1.1) preprocessing the remote sensing image of the multi-view overlapping unmanned aerial vehicle by using prior information:
(1.2) performing aerial triangulation on the basis of step (1.1), solving the exterior orientation elements of each image by means of the aerial triangulation network, and carrying out overall bundle adjustment;
and (1.3) according to the image groups, on the basis of step (1.2), performing rapid dense matching with a prior-art GPU (graphics processing unit) accelerated PMVS (patch-based multi-view stereo) algorithm to generate a dense three-dimensional point cloud, and using the reconstructed point cloud as three-dimensional elevation data.
3. The method for extracting the building contour line of the unmanned aerial vehicle multi-overlapping remote sensing image according to claim 1, wherein the second step comprises the following steps:
(2.1) feature extraction: extracting the features of the image by using SIFT;
(2.2) image registration: first perform coarse registration, searching for matched feature points with a k-d tree; then perform fine registration, using the RANSAC algorithm to eliminate the wrong matches that coarse registration often produces; the registration yields a transformation matrix between the images;
(2.3) image stitching: splicing images through the transformation matrix obtained in the step (2.2);
(2.4) fusion of images: and after splicing, fusing the images by using a bilinear interpolation algorithm.
4. The method for extracting the building contour line of the unmanned aerial vehicle multi-overlapping remote sensing image according to claim 1, wherein the third step comprises the following steps:
(3.1) separating the ground points and the non-ground points of the point cloud by using improved morphological filtering;
(3.2) filtering the vegetation among the ground points of step (3.1) by using the color invariant;
(3.3) filtering out non-building target points by using thresholds based on the characteristics of buildings.
5. The method for extracting the building contour line of the unmanned aerial vehicle multi-overlapped remote sensing image according to claim 4, wherein the step (3.1) comprises the following processes:
firstly, taking any point and its neighborhood points to form a window of fixed size, and detecting the lowest point within the window through a morphological opening operation; if the difference between the elevation of a point in the window and the elevation of the lowest point is within a threshold range, the point is a ground point; all points in the point cloud are processed in this way for filtering;
then obtaining the window size required for the next filtering pass according to w_{k+1} = 2w_k + 1, where k is the iteration number and w_k is the size of the previous filtering window, and performing morphological filtering again while the window size remains smaller than the preset maximum filtering window; finally, when the window becomes larger than the preset maximum, the filtering ends.
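A rough sketch of this progressive filtering on a rasterised elevation grid is given below; the initial window, maximum window and elevation threshold are assumed values, and the window schedule follows the w_{k+1} = 2w_k + 1 reading of the claim above.

```python
import numpy as np
from scipy.ndimage import grey_opening

def ground_mask(dem, w0=3, w_max=21, dz=0.5):
    """Boolean mask of ground cells in a 2D elevation grid."""
    mask = np.ones(dem.shape, dtype=bool)
    w = w0
    while w <= w_max:                             # stop once window exceeds the preset maximum
        opened = grey_opening(dem, size=(w, w))   # lowest surface within each window
        mask &= (dem - opened) <= dz              # close to the lowest point -> ground
        w = 2 * w + 1                             # next window size
    return mask

dem = np.random.rand(64, 64) * 0.2
dem[20:30, 20:30] += 8.0                          # a synthetic "building" block
print(int(ground_mask(dem).sum()), "ground cells")
```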
6. The method for extracting the building contour line of the unmanned aerial vehicle multi-overlapped remote sensing image according to claim 4, wherein the step (3.2) comprises the following processes:
because the point cloud generated by image dense matching carries color information, green vegetation is filtered by using the color invariant theory; let the coordinates of each point in the point cloud be (x, y, z), the three color channels be (R, G, B), and the vegetation threshold of the color invariant be T_g; the color invariant defined from the green and blue channels is:

\psi_g(x, y, z) = \frac{4}{\pi} \arctan\!\left( \frac{I_g(x, y, z) - I_b(x, y, z)}{I_g(x, y, z) + I_b(x, y, z)} \right)    (1)

wherein I_g(x, y, z) and I_b(x, y, z) are the green and blue component values of the point cloud at point (x, y, z), and ψ_g(x, y, z) is the color invariant value at that point; when ψ_g < T_g, the point is a vegetation point; when ψ_g > T_g, the point is a non-vegetation point.
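Equation (1) and the threshold test transcribe directly into numpy; the threshold value used below is illustrative, and the small epsilon guarding the division is a numerical convenience not present in the claim.

```python
import numpy as np

def vegetation_mask(G, B, T_g=0.5):
    """psi_g = (4/pi) arctan((G - B)/(G + B)); psi_g < T_g marks vegetation,
    following the threshold direction stated in the claim."""
    G = np.asarray(G, dtype=np.float64)
    B = np.asarray(B, dtype=np.float64)
    psi_g = (4.0 / np.pi) * np.arctan((G - B) / (G + B + 1e-12))
    return psi_g < T_g

g = np.array([180.0, 110.0])   # green components of two points
b = np.array([40.0, 130.0])    # blue components
print(vegetation_mask(g, b))   # [False  True] with the assumed T_g = 0.5
```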
7. The method for extracting the building contour line of the unmanned aerial vehicle multi-overlapping remote sensing image according to claim 1, wherein in step four the point cloud of step three is segmented by a region growing method based on plane fitting to obtain the point cloud region V_Bi of each building, specifically comprising:
(4.1) the points in the point cloud are divided into two types: if any point in the neighborhood of a point is missing, the point is a boundary point; otherwise it is an interior point;
(4.2) for an interior point p0(x0, y0, z0) with eight neighborhood points {p1, p2, …, p8}, a plane of equation (2) is fitted to these 9 points by the least square method, implemented as follows:
z = ax + by + c    (2)

for point p0 and the 8 points in its neighborhood, (x_i, y_i, z_i), i = 0, 1, …, 8, fit equation (2) so that expression (3) is minimized:

S = \sum_{i=0}^{8} (a x_i + b y_i + c - z_i)^2    (3)

to minimize S, the conditions \partial S / \partial a = \partial S / \partial b = \partial S / \partial c = 0 must hold, from which the following normal equations are derived:

\begin{pmatrix} \sum x_i^2 & \sum x_i y_i & \sum x_i \\ \sum x_i y_i & \sum y_i^2 & \sum y_i \\ \sum x_i & \sum y_i & n \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} \sum x_i z_i \\ \sum y_i z_i \\ \sum z_i \end{pmatrix}, \quad i = 0, 1, \ldots, 8; \; n = 9    (4)
(4.3) using the plane obtained from equation (4), find for any interior point the SSD over its 8-neighborhood according to equation (5):

\mathrm{SSD} = \sum_{p_k \in M} (z_k - h_k)^2    (5)

wherein M is the set consisting of p0 and its eight neighborhood points, and h_k and z_k are the observed elevation value and the fitted plane elevation value, respectively;
(4.4) sorting the points in the point cloud in ascending order of SSD value, and taking the point with the smallest SSD value as the seed point;
(4.5) calculating, within the neighborhood of the seed point, the distance h from each point to the plane of the seed point; if h < h_T, h_T being the elevation threshold, the point is merged into the same region and the region is marked; otherwise the point is a non-patch point;
(4.6) when all neighborhood points have been traversed, selecting from the unprocessed points the one with the smallest SSD value as the new seed point, and repeating operation (4.5) until all points have been traversed.
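The plane fit of equations (2) to (4) and the SSD of equation (5) can be transcribed as follows; solving the 3 x 3 normal system with np.linalg.solve replaces the hand derivation, and the sample patch is synthetic.

```python
import numpy as np

def fit_plane(pts):
    """pts: (n, 3) array of (x, y, z); returns plane coefficients (a, b, c)."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.array([[np.sum(x * x), np.sum(x * y), np.sum(x)],
                  [np.sum(x * y), np.sum(y * y), np.sum(y)],
                  [np.sum(x),     np.sum(y),     len(pts)]])  # equation (4)
    rhs = np.array([np.sum(x * z), np.sum(y * z), np.sum(z)])
    return np.linalg.solve(A, rhs)

def ssd(pts):
    """Equation (5): sum of squared fitted-minus-observed elevations."""
    a, b, c = fit_plane(pts)
    fitted = a * pts[:, 0] + b * pts[:, 1] + c
    return np.sum((fitted - pts[:, 2]) ** 2)

# p0 plus its eight neighbours on a nearly flat, slightly tilted roof patch:
patch = np.array([[i, j, 5.0 + 0.01 * i] for i in range(3) for j in range(3)])
print(fit_plane(patch), ssd(patch))   # SSD ~ 0 for a planar patch
```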
8. The method for extracting the building contour line of the unmanned aerial vehicle multi-overlapping remote sensing image according to claim 1, wherein in step five, for the point cloud region V_Bi of each building from step four, the wall points of the building are deleted, and the rough contour information of the building is finally obtained by solving the top-surface boundary points, specifically comprising:
(5.1) according to the patches segmented in step four, if the normal vector of a patch is parallel to the ground, the patch is a wall patch; if the normal vector is perpendicular to the ground, the patch is a roof patch; the wall patches are therefore deleted according to the normal vectors;
(5.2) obtaining the boundary of the point cloud by utilizing an alpha-shape algorithm according to the top surface point cloud obtained in the step (5.1); the specific method comprises the following steps:
a) taking an arbitrary point p1 from the top-surface point cloud, and searching among the remaining points for the point set p2 of points at a distance not larger than 2α from p1, where α is the radius of the rolling circle; let p2 = {p21, p22, p23, …, p2n};
b) taking any point p2i from p2, and finding the circle center p0 of points p1 and p2i using equations (6) and (7); knowing two points (x1, y1), (x2, y2) and the circle radius α, the center (x0, y0) satisfies:

\begin{cases} (x_1 - x_0)^2 + (y_1 - y_0)^2 = \alpha^2 \\ (x_2 - x_0)^2 + (y_2 - y_0)^2 = \alpha^2 \end{cases}    (6)

since it is difficult to solve these equations directly, the distance intersection algorithm from surveying and mapping is used to obtain:

\begin{cases} x_0 = x_1 + \dfrac{x_2 - x_1}{2} - H (y_2 - y_1) \\ y_0 = y_1 + \dfrac{y_2 - y_1}{2} + H (x_2 - x_1) \end{cases}    (7)

wherein H = \sqrt{ \dfrac{\alpha^2}{S^2} - \dfrac{1}{4} } and S^2 = (x_1 - x_2)^2 + (y_1 - y_2)^2; the opposite sign of H gives the second candidate center;
c) finding the distance l from every other point of p2 to p0: if l > α for all of them, p1 and p2i are boundary points; if some l < α, proceeding to the next step d);
d) repeating the three steps a), b), c) for the other points in p2 until all points in p2 have been judged.
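The distance-intersection computation of equation (7) is sketched below; it returns both candidate centres (the two signs of H), and the sample points and radius are illustrative.

```python
import numpy as np

def circle_centers(p1, p2, alpha):
    """Both centres of a circle of radius alpha through points p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    s2 = (x1 - x2) ** 2 + (y1 - y2) ** 2          # squared chord length S^2
    if s2 > 4 * alpha ** 2:
        return []                                 # points farther apart than 2*alpha
    h = np.sqrt(alpha ** 2 / s2 - 0.25)           # H in equation (7)
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2
    # the +/- pair gives the two centres on the perpendicular bisector
    return [(mx - h * (y2 - y1), my + h * (x2 - x1)),
            (mx + h * (y2 - y1), my - h * (x2 - x1))]

for c in circle_centers((0, 0), (2, 0), alpha=2.0):
    print(c)   # each centre is at distance 2 from both (0,0) and (2,0)
```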
9. The method for extracting the building contour line of the unmanned aerial vehicle multi-overlapping remote sensing image according to claim 1, wherein in step six the top-surface point cloud contour points of each building obtained in step five are superposed on the image to obtain the building buffer Ω_Bi on the image; the specific process is as follows:
let the projection matrix P_{3×4} represent the single-view geometry; with the interior and exterior orientation elements known, the matrix P_{3×4} is computed as:

P = \begin{pmatrix} -f & 0 & x_0 \\ 0 & f & y_0 \\ 0 & 0 & 1 \end{pmatrix} R^{T} \begin{pmatrix} 1 & 0 & 0 & -X_s \\ 0 & 1 & 0 & -Y_s \\ 0 & 0 & 1 & -Z_s \end{pmatrix}    (8)

wherein P is the projection matrix P_{3×4}, f is the focal length of the image, x_0 and y_0 are the offsets of the principal point from the optical center in the horizontal and vertical directions, X_s, Y_s, Z_s are the coordinates of the camera center in the world coordinate system, and R^T is a 3 × 3 rotation matrix;
the projection formula is:
\begin{pmatrix} xz \\ yz \\ z \end{pmatrix} = P \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}    (9)

the projection of each point of the point cloud onto the image is calculated with equations (8) and (9); x and y are the image coordinates of the projected point (z being the projective depth), and X, Y, Z are the coordinates of the object point in the world coordinate system.
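Equations (8) and (9) compose and apply the projection matrix as follows; the focal length, rotation and camera centre in the example are arbitrary stand-ins for the recovered orientation elements.

```python
import numpy as np

def projection_matrix(f, x0, y0, R, C):
    """Equation (8): P = K * R^T * [I | -C]."""
    K = np.array([[-f, 0.0, x0],
                  [0.0,  f,  y0],
                  [0.0, 0.0, 1.0]])
    T = np.hstack([np.eye(3), -C.reshape(3, 1)])   # [I | -(Xs, Ys, Zs)]
    return K @ R.T @ T                             # 3x4 projection matrix

def project(P, X):
    """Equation (9): homogeneous projection of a world point X = (X, Y, Z)."""
    xz, yz, z = P @ np.append(X, 1.0)
    return xz / z, yz / z                          # image coordinates (x, y)

R = np.eye(3)                                      # camera aligned with world axes
C = np.array([0.0, 0.0, 100.0])                    # camera centre 100 m above ground
P = projection_matrix(f=50.0, x0=0.0, y0=0.0, R=R, C=C)
print(project(P, np.array([10.0, 5.0, 0.0])))      # a ground point below the camera
```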
10. The method for extracting the building contour line of the unmanned aerial vehicle multi-overlapping remote sensing image according to claim 1, wherein in step seven the points superposed on the image in step six are connected into a boundary, the resulting rough building contour is superposed on the stitched image as a buffer area, and meanwhile, using the shape of the rough building contour as prior information, the precise contour of the building is evolved within the buffer area with a level set algorithm; the specific steps are:
(7.1) performing boundary tracking on the points superposed on the image to obtain a closed contour, and expanding the contour to obtain the buffer region Ω_Bi for building contour extraction on the image;
(7.2) taking the contour of the image region obtained in step (7.1) as the initial level set and its shape as prior information, performing local level set evolution in the remote sensing image within the buffer area to obtain the fine contour of the building; the specific implementation is as follows:
first, let the buffer area of each building on the image be Ω_Bi and the shape of the contour obtained in (7.1) be Φ_Pi; with Φ_Pi as the prior shape and Φ_Si as the target segmentation contour, an energy functional based on shape constraint is defined as follows:
E_{to} = E(c_1, c_2, \Phi_{Si}) + E_{sh}(\Phi_{Si}, \Phi_{Pi})    (10)

the shape-constraint term measures the discrepancy between the evolving contour and the prior shape over the buffer region:

E_{sh}(\phi_{Si}, \phi_{Pi}) = \int_{\Omega_{Bi}} \left( \phi_{Si}(x, y) - \phi_{Pi}(x, y) \right)^2 dx\, dy    (11)
the energy functional of the prior-art level set C-V model is:

E(\phi, c_1, c_2) = \lambda_1 \int_{\Omega} (u_0(x, y) - c_1)^2 H(\phi)\, dx\, dy + \lambda_2 \int_{\Omega} (u_0(x, y) - c_2)^2 (1 - H(\phi))\, dx\, dy + \mu \int_{\Omega} H(\phi)\, dx\, dy + \nu \int_{\Omega} \delta(\phi)\, |\nabla \phi|\, dx\, dy    (12)

where H(\phi) is the Heaviside function, H(\phi) = 1 for \phi \ge 0 and H(\phi) = 0 for \phi < 0, and \delta(\phi) = dH(\phi)/d\phi is the Dirac function; u_0(x, y) is the gray value at a point of the image region to be processed, |\nabla \phi| is the modulus of the gradient at the current point, and the coefficients \lambda_1, \lambda_2 > 0 and \mu, \nu \ge 0 are fixed parameters, generally taken as \lambda_1 = \lambda_2 = 1, \mu = 0, \nu = 1; the partial differential equation corresponding to (12) is:

\frac{\partial \phi}{\partial t} = \delta(\phi) \left[ \mu\, \nabla \cdot \frac{\nabla \phi}{|\nabla \phi|} - \nu - \lambda_1 (u_0(x, y) - c_1)^2 + \lambda_2 (u_0(x, y) - c_2)^2 \right]    (13)
wherein the parameters c_1 and c_2 are obtained from:

c_1 = \frac{\int_{\Omega} u_0(x, y) H(\phi)\, dx\, dy}{\int_{\Omega} H(\phi)\, dx\, dy}, \qquad c_2 = \frac{\int_{\Omega} u_0(x, y)\, [1 - H(\phi)]\, dx\, dy}{\int_{\Omega} [1 - H(\phi)]\, dx\, dy}
according to equation (13), the building region is segmented and extracted by C-V level set evolution, the image region being taken as the building segmentation object V_{iSeg}; the C-V level set itself is prior art.
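A compact numerical sketch of one evolution step is given below; it combines the C-V terms of equation (13) with the gradient flow of the shape term (11), using a smoothed Heaviside/Dirac pair for stability. All weights, the smoothing width and the synthetic image are assumptions, not values from the patent.

```python
import numpy as np

def cv_shape_step(phi, u0, phi_prior, lam1=1.0, lam2=1.0, mu=1.0, nu=0.0,
                  w_shape=0.1, dt=0.1, eps=1.0):
    """One explicit update of phi under the C-V terms plus the shape prior."""
    dirac = (eps / np.pi) / (eps ** 2 + phi ** 2)        # smoothed delta(phi)
    H = 0.5 * (1 + (2 / np.pi) * np.arctan(phi / eps))   # smoothed Heaviside

    c1 = (u0 * H).sum() / (H.sum() + 1e-12)              # mean inside the contour
    c2 = (u0 * (1 - H)).sum() / ((1 - H).sum() + 1e-12)  # mean outside

    gy, gx = np.gradient(phi)                            # curvature term of (13)
    norm = np.hypot(gx, gy) + 1e-12
    curv = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)

    force = mu * curv - nu - lam1 * (u0 - c1) ** 2 + lam2 * (u0 - c2) ** 2
    return phi + dt * (dirac * force
                       - w_shape * (phi - phi_prior))    # gradient flow of (11)

yy, xx = np.indices((64, 64))
u0 = (np.hypot(yy - 32, xx - 32) < 12).astype(float)     # bright "roof" on dark ground
phi_prior = 14.0 - np.hypot(yy - 32, xx - 32)            # prior from the rough contour
phi = phi_prior.copy()
for _ in range(100):
    phi = cv_shape_step(phi, u0, phi_prior)
print("pixels inside the evolved contour:", int((phi > 0).sum()))
```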
CN201510025503.9A 2015-01-19 2015-01-19 A building contour line extraction method for multi-overlapping unmanned aerial vehicle remote sensing images Active CN104484668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510025503.9A CN104484668B (en) 2015-01-19 2015-01-19 A building contour line extraction method for multi-overlapping unmanned aerial vehicle remote sensing images

Publications (2)

Publication Number Publication Date
CN104484668A true CN104484668A (en) 2015-04-01
CN104484668B CN104484668B (en) 2017-11-10

Family

ID=52759209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510025503.9A Active CN104484668B (en) A building contour line extraction method for multi-overlapping unmanned aerial vehicle remote sensing images

Country Status (1)

Country Link
CN (1) CN104484668B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009131108A1 (en) * 2008-04-23 2009-10-29 株式会社パスコ Building roof outline recognizing device, building roof outline recognizing method, and building roof outline recognizing program
CN102930540A (en) * 2012-10-26 2013-02-13 中国地质大学(武汉) Method and system for detecting contour of urban building
CN103020342A (en) * 2012-12-04 2013-04-03 南京大学 Method for extracting contour and corner of building from ground LiDAR data
CN104091369A (en) * 2014-07-23 2014-10-08 武汉大学 Unmanned aerial vehicle remote-sensing image building three-dimensional damage detection method

Cited By (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104809689A (en) * 2015-05-15 2015-07-29 北京理工大学深圳研究院 Building point cloud model and base map aligned method based on outline
CN104809689B (en) * 2015-05-15 2018-03-30 北京理工大学深圳研究院 A kind of building point cloud model base map method for registering based on profile
CN105046201A (en) * 2015-06-19 2015-11-11 东南大学 Method for fast identifying building image on the basis of morphology
CN105046201B (en) * 2015-06-19 2018-05-08 东南大学 One kind quickly knows method for distinguishing based on morphology to building object image
CN105184854A (en) * 2015-08-24 2015-12-23 北京麦格天宝科技股份有限公司 Quick modeling method for cloud achievement data of underground space scanning point
CN105184854B (en) * 2015-08-24 2018-01-16 北京麦格天宝科技股份有限公司 For the fast modeling method of underground space scanning element cloud performance data
CN105808926A (en) * 2016-03-02 2016-07-27 中国地质大学(武汉) Precondition conjugate gradient block adjustment method based on GPU (Graphics Processing Unit) parallel acceleration
CN105844707B (en) * 2016-03-15 2019-04-16 南京大学 The method for automatically extracting Old City Wall data from LiDAR point cloud data based on city wall section
CN105844707A (en) * 2016-03-15 2016-08-10 南京大学 Method for automatically extracting ancient city wall data from LiDAR point cloud data based on cross section of city wall
CN106023312A (en) * 2016-05-13 2016-10-12 南京大学 Automatic 3D building model reconstruction method based on aviation LiDAR data
CN106023312B (en) * 2016-05-13 2019-02-22 南京大学 Three-dimensional building object model automatic reconstruction method based on aviation LiDAR data
CN106023224A (en) * 2016-05-30 2016-10-12 天水师范学院 PCNN automatic segmentation method for microscopic image of traditional Chinese medicine
CN106056614A (en) * 2016-06-03 2016-10-26 武汉大学 Building segmentation and contour line extraction method of ground laser point cloud data
CN106203335A (en) * 2016-07-11 2016-12-07 厦门大学 Sign board visibility evaluation methodology based on three-dimensional point cloud
CN106203335B (en) * 2016-07-11 2019-05-28 厦门大学 Sign board visibility evaluation method based on three-dimensional point cloud
CN107633523A (en) * 2016-07-18 2018-01-26 巧夺天宫(深圳)科技有限公司 Extraction construction characteristic line method and system based on a cloud
CN107633523B (en) * 2016-07-18 2021-04-16 巧夺天宫(深圳)科技有限公司 Method and system for extracting building characteristic line based on point cloud
CN106709897A (en) * 2016-12-28 2017-05-24 武汉大学 Method and system for searching optimal splicing lines among orthographic images based on gradient domain
CN106709897B (en) * 2016-12-28 2019-11-26 武汉大学 Optimal splicing line finding method and system between orthography based on gradient field
CN107240128B (en) * 2017-05-09 2020-09-11 北京理工大学 X-ray and color photo registration method based on contour features
CN107240128A (en) * 2017-05-09 2017-10-10 北京理工大学 A kind of X-ray film and photochrome method for registering based on contour feature
CN107103603A (en) * 2017-05-18 2017-08-29 北京道亨时代科技有限公司 A kind of Objects extraction method of inclination measurement scene
CN107393004A (en) * 2017-07-17 2017-11-24 北京数字绿土科技有限公司 A kind of method and device for obtaining building amount of demolition in power transmission line corridor
CN109426697A (en) * 2017-08-30 2019-03-05 达索系统公司 For calculate meet the building of shade duration requirement building enclosure by the method implemented by computer
CN109426697B (en) * 2017-08-30 2024-03-22 达索系统公司 Computer-implemented method for calculating an envelope of a building
CN107514993A (en) * 2017-09-25 2017-12-26 同济大学 The collecting method and system towards single building modeling based on unmanned plane
CN107514993B (en) * 2017-09-25 2019-11-05 同济大学 The collecting method and system towards single building modeling based on unmanned plane
CN107741233A (en) * 2017-11-10 2018-02-27 邦鼓思电子科技(上海)有限公司 A kind of construction method of the outdoor map of three-dimensional
CN108088422A (en) * 2018-01-24 2018-05-29 成都纵横自动化技术有限公司 A kind of method for determining true Duplication
CN108571930A (en) * 2018-04-09 2018-09-25 刘默然 A kind of house measurement method using unmanned plane
CN108627112A (en) * 2018-05-09 2018-10-09 广州市杜格科技有限公司 Vehicle axis pin is away from dynamic measurement method
CN109242855A (en) * 2018-07-19 2019-01-18 中国科学院自动化研究所 Roof dividing method, system and equipment based on Three-dimensional Multi-resolution statistical information
CN109242855B (en) * 2018-07-19 2020-08-11 中国科学院自动化研究所 Multi-resolution three-dimensional statistical information-based roof segmentation method, system and equipment
CN109146990A (en) * 2018-08-08 2019-01-04 广州市城市规划勘测设计研究院 A kind of calculation method for building profile
CN109146990B (en) * 2018-08-08 2023-02-24 广州市城市规划勘测设计研究院 Building outline calculation method
CN109584262A (en) * 2018-12-15 2019-04-05 中国科学院深圳先进技术研究院 Cloud detection method of optic, device and electronic equipment based on remote sensing image
CN109671109A (en) * 2018-12-25 2019-04-23 中国人民解放军61540部队 Point off density cloud generation method and system
CN110097588A (en) * 2019-04-22 2019-08-06 西安交通大学 A kind of repairing type edge extracting method of boat hair blade ceramic core point cloud model
CN110111349A (en) * 2019-04-22 2019-08-09 西安交通大学 A kind of non-rigid complex component high-precision edge extracting method based on cloud
CN112146564B (en) * 2019-06-28 2022-04-15 先临三维科技股份有限公司 Three-dimensional scanning method, three-dimensional scanning device, computer equipment and computer readable storage medium
CN112146564A (en) * 2019-06-28 2020-12-29 先临三维科技股份有限公司 Three-dimensional scanning method, three-dimensional scanning device, computer equipment and computer readable storage medium
CN110415337A (en) * 2019-07-12 2019-11-05 广州大学 A kind of building roof face method for reconstructing and device based on image linear feature
CN110415337B (en) * 2019-07-12 2021-03-23 广州大学 Building roof reconstruction method and device based on image straight line features
CN112241661B (en) * 2019-07-17 2024-08-23 临沂大学 Urban ground object refined classification method combining airborne LiDAR point cloud data and aerial image
CN112241661A (en) * 2019-07-17 2021-01-19 临沂大学 Urban ground feature fine classification method combining airborne LiDAR point cloud data and aerial image
CN110570428A (en) * 2019-08-09 2019-12-13 浙江合信地理信息技术有限公司 method and system for segmenting roof surface patch of building from large-scale image dense matching point cloud
CN110570428B (en) * 2019-08-09 2023-07-07 浙江合信地理信息技术有限公司 Method and system for dividing building roof sheet from large-scale image dense matching point cloud
CN110543872A (en) * 2019-09-12 2019-12-06 云南省水利水电勘测设计研究院 unmanned aerial vehicle image building roof extraction method based on full convolution neural network
CN110543872B (en) * 2019-09-12 2023-04-18 云南省水利水电勘测设计研究院 Unmanned aerial vehicle image building roof extraction method based on full convolution neural network
CN110910387B (en) * 2019-10-09 2022-03-04 西安理工大学 Point cloud building facade window extraction method based on significance analysis
CN110910387A (en) * 2019-10-09 2020-03-24 西安理工大学 Point cloud building facade window extraction method based on significance analysis
CN110879978A (en) * 2019-11-13 2020-03-13 广西中煤地质有限责任公司 Building contour line automatic extraction method of unmanned aerial vehicle oblique photography three-dimensional model
CN111209920A (en) * 2020-01-06 2020-05-29 桂林电子科技大学 Airplane detection method under complex dynamic background
CN111209920B (en) * 2020-01-06 2022-09-23 桂林电子科技大学 Airplane detection method under complex dynamic background
CN111652241A (en) * 2020-02-17 2020-09-11 中国测绘科学研究院 Building contour extraction method fusing image features and dense matching point cloud features
CN111523391A (en) * 2020-03-26 2020-08-11 上海刻羽信息科技有限公司 Building identification method, system, electronic device and readable storage medium
CN111383355A (en) * 2020-04-03 2020-07-07 贝壳技术有限公司 Three-dimensional point cloud completion method and device and computer readable storage medium
CN111322950B (en) * 2020-04-17 2021-08-17 易思维(杭州)科技有限公司 Method for positioning cylinder position by line structured light sensor and its use
CN111322950A (en) * 2020-04-17 2020-06-23 易思维(杭州)科技有限公司 Method for positioning cylinder position by line structured light sensor and its use
CN111814715B (en) * 2020-07-16 2023-07-21 武汉大势智慧科技有限公司 Ground object classification method and device
CN111814715A (en) * 2020-07-16 2020-10-23 武汉大势智慧科技有限公司 Ground object classification method and device
CN112132029A (en) * 2020-09-23 2020-12-25 中国地震局地震预测研究所 Unmanned aerial vehicle remote sensing image rapid positioning method for earthquake emergency response
CN112132029B (en) * 2020-09-23 2023-07-11 中国地震局地震预测研究所 Unmanned aerial vehicle remote sensing image rapid positioning method for earthquake emergency response
CN112232248A (en) * 2020-10-22 2021-01-15 中国人民解放军战略支援部队信息工程大学 Method and device for extracting plane features of multi-line LiDAR point cloud data
CN112748700A (en) * 2020-12-18 2021-05-04 深圳市显控科技股份有限公司 Numerical control code generation method and device, electronic equipment and storage medium
CN113296543B (en) * 2021-07-27 2021-09-28 成都睿铂科技有限责任公司 Method and system for planning aerial route
CN113296543A (en) * 2021-07-27 2021-08-24 成都睿铂科技有限责任公司 Method and system for planning aerial route
CN114782826A (en) * 2022-06-20 2022-07-22 绵阳天仪空间科技有限公司 Safety monitoring system and method for post-disaster building
CN115546421B (en) * 2022-12-02 2023-03-24 中南大学 Point-line double-domain mutual enhancement building geometric contour line reconstruction method
CN115546421A (en) * 2022-12-02 2022-12-30 中南大学 Point-line double-domain mutual enhancement building geometric contour line reconstruction method
CN116229339A (en) * 2023-02-06 2023-06-06 南京航空航天大学 Ship lock water level detection method based on semantic segmentation and rough-to-fine strategy
CN117274605A (en) * 2023-11-20 2023-12-22 北京飞渡科技股份有限公司 Method and device for extracting water area outline from photo shot by unmanned aerial vehicle
CN117274605B (en) * 2023-11-20 2024-03-01 北京飞渡科技股份有限公司 Method and device for extracting water area outline from photo shot by unmanned aerial vehicle
CN118212366A (en) * 2024-05-21 2024-06-18 中国科学院空天信息创新研究院 Moving target three-dimensional reconstruction method and device based on multiple remote sensing images
CN118212366B (en) * 2024-05-21 2024-08-13 中国科学院空天信息创新研究院 Moving target three-dimensional reconstruction method and device based on multiple remote sensing images

Also Published As

Publication number Publication date
CN104484668B (en) 2017-11-10

Similar Documents

Publication Publication Date Title
CN104484668B (en) A building contour line extraction method for multi-overlapping unmanned aerial vehicle remote sensing images
CN110264567B (en) Real-time three-dimensional modeling method based on mark points
Zhou et al. Seamless fusion of LiDAR and aerial imagery for building extraction
Zhang et al. LOAM: Lidar odometry and mapping in real-time.
Chiabrando et al. UAV and RPV systems for photogrammetric surveys in archaelogical areas: two tests in the Piedmont region (Italy)
CN104156536B (en) The visualization quantitatively calibrating and analysis method of a kind of shield machine cutter abrasion
CN113436260A (en) Mobile robot pose estimation method and system based on multi-sensor tight coupling
CN111060924B (en) SLAM and target tracking method
CN109255808B (en) Building texture extraction method and device based on oblique images
CN104091369A (en) Unmanned aerial vehicle remote-sensing image building three-dimensional damage detection method
Qian et al. Robust visual-lidar simultaneous localization and mapping system for UAV
Cosido et al. Hybridization of convergent photogrammetry, computer vision, and artificial intelligence for digital documentation of cultural heritage-a case study: the magdalena palace
CN103839286B (en) The true orthophoto of a kind of Object Semanteme constraint optimizes the method for sampling
CN112465849B (en) Registration method for laser point cloud and sequence image of unmanned aerial vehicle
KR101549155B1 (en) Method of automatic extraction of building boundary from lidar data
CN112767461A (en) Automatic registration method for laser point cloud and sequence panoramic image
Zakharov et al. Automatic building detection from satellite images using spectral graph theory
Bertram et al. Generation the 3D model building by using the quadcopter
Chaloeivoot et al. Building detection from terrestrial images
CN112767459A (en) Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion
Brunet et al. AI4GEO: A Path From 3D Model to Digital Twin
CN107784666B (en) Three-dimensional change detection and updating method for terrain and ground features based on three-dimensional images
Wang et al. Automated mosaicking of UAV images based on SFM method
Verhoeven Getting computer vision airborne: using structure from motion for accurate orthophoto production
Chen et al. True orthophoto generation using multi-view aerial images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant