CN104484668B - A building contour line extraction method for multi-overlap UAV remote sensing images - Google Patents

A building contour line extraction method for multi-overlap UAV remote sensing images Download PDF

Info

Publication number
CN104484668B
CN104484668B CN201510025503.9A
Authority
CN
China
Prior art keywords
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510025503.9A
Other languages
Chinese (zh)
Other versions
CN104484668A (en)
Inventor
眭海刚
涂继辉
贾曲
宋志娜
陈�光
徐川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201510025503.9A priority Critical patent/CN104484668B/en
Publication of CN104484668A publication Critical patent/CN104484668A/en
Application granted granted Critical
Publication of CN104484668B publication Critical patent/CN104484668B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

A building contour line extraction method for multi-overlap UAV remote sensing images. The method combines aerial triangulation with dense matching to generate a three-dimensional point cloud, filters the point cloud, and detects buildings from it. After the walls of a detected building are deleted, a coarse building contour is extracted from the building's top-surface boundary information. The coarse contour is superimposed on the stitched image as a buffer zone, and, using the coarse contour as a shape prior, a level-set algorithm is evolved within the buffer zone to finally obtain the precise building contour. The present invention makes full use of the three-dimensional information of the point cloud generated from multi-overlap images, combined with the high-precision geometric information of high-resolution remote sensing imagery; it not only significantly improves the accuracy of building contour extraction but also reduces the complexity of the method.

Description

A building contour line extraction method for multi-overlap UAV remote sensing images
Technical field
The present invention relates to the technical field of remote sensing image applications, and in particular to a building contour line extraction method for UAV remote sensing images.
Background technology
Buildings are an important geospatial element of cities and are of great significance in application fields such as urban planning and management, monitoring of urban development and change, and disaster detection and assessment. Building contour line extraction is an important step in establishing and updating urban fundamental geographic information systems. As a new remote sensing monitoring platform, the unmanned aerial vehicle (UAV) offers highly automated flight operation, can fly autonomously along a preplanned route to provide remote sensing monitoring data and low-altitude video monitoring in real time, and is flexible, convenient, and low-cost. The high-resolution, highly overlapping remote sensing data it acquires have strong anti-interference capability and large imaging coverage, making them one of the effective data sources for building contour line extraction.
High-resolution remote sensing images contain abundant information, but building contour extraction suffers from various kinds of interference, such as distinguishing buildings from non-buildings, occlusion by trees around buildings, and the influence of road edges. Extracting building contours from a single image alone is therefore technically very difficult. Building contour extraction should not rely only on the extraction and analysis of two-dimensional remote sensing information; it also needs the three-dimensional information of buildings, so fusing two-dimensional and three-dimensional information so that they complement each other is more favorable for extracting building contours from remote sensing images. Typical current methods for building contour extraction from high-resolution remote sensing imagery include the following: 1) Building contour line extraction based on a single high-resolution remote sensing image. Although high-resolution images contain clear building contour information, man-made buildings and non-buildings are difficult to distinguish, and occlusion by surrounding trees also interferes with a building's contour, so this kind of method has clear limitations. 2) Shadow-assisted building contour line extraction. Although shadow-assisted extraction uses the height information of buildings indirectly, shadow extraction is not universally applicable, and deriving building height from shadows requires many related parameters, so such methods can hardly meet practical needs. 3) Building contour line extraction based on LiDAR and remote sensing images. Such methods use both the three-dimensional information of LiDAR and the high-precision geometric contour information of the image, extracting building contour information by letting the two data types compensate for each other's weaknesses. However, high-accuracy registration of LiDAR and remote sensing images is difficult, and LiDAR data acquisition is costly. 4) Building contour line extraction based on stereo aerial images. Such methods obtain three-dimensional information by stereo matching while using the high-precision two-dimensional information of the image, and extract building contour information through the complementarity of the two kinds of information. However, the swath of a stereo pair is small, which affects the extraction of building contours over large urban areas. There is therefore an urgent need for a method whose data are easy to acquire, whose extraction is highly automated, whose results are relatively accurate, and which meets production needs.
Summary of the invention
The present invention makes full use of the three-dimensional information of the point cloud generated from multi-overlap images, combined with the high-precision information of the remote sensing images themselves, and significantly improves the accuracy of building contour extraction.
The technical scheme of the present invention is a building contour line extraction method for multi-overlap UAV remote sensing images, comprising the following steps:
Step 1, performing adjustment on the UAV remote sensing images by aerial triangulation, while performing dense matching on the images with a GPU-accelerated PMVS algorithm, finally obtaining a high-precision dense colored point cloud;
Step 2, stitching the adjusted UAV remote sensing images;
Step 3, filtering the colored point cloud: first separating ground from non-ground with an improved morphological filtering algorithm, then filtering out the vegetation among the ground points using color invariants, and finally filtering out non-buildings using height and area as thresholds;
Step 4, detecting buildings in the point cloud with a region growing algorithm;
Step 5, deleting the walls of each building and finally obtaining the coarse contour information of the building by fitting the top-surface boundary;
Step 6, superimposing the building coarse contour obtained in step 5 on the stitched image to form the buffer zone for building contour extraction;
Step 7, using the shape of the building coarse contour as prior information, evolving a level-set algorithm within the buffer zone to obtain the precise building contour.
Step 1 comprises the following steps:
(1.1) preprocessing the multi-overlap UAV remote sensing images using prior information;
(1.2) performing aerial triangulation on the basis of step (1.1), building the aerial triangulation network to obtain the exterior orientation elements of each image, and carrying out overall bundle block adjustment;
(1.3) grouping the images, performing fast dense matching on the basis of step (1.2) with the prior-art GPU-accelerated PMVS algorithm, and generating a dense three-dimensional point cloud; the reconstructed point cloud serves as three-dimensional elevation data.
Step 2 comprises the following steps:
(2.1) feature extraction: extracting image features with SIFT;
(2.2) image registration: first performing coarse registration, searching for matching feature points with a k-d tree; then performing fine registration, rejecting the mismatches that coarse registration inevitably produces with the RANSAC algorithm; the transformation matrix between images is obtained through this registration (see the sketch after this list);
(2.3) image stitching: stitching the images with the transformation matrix obtained in (2.2);
(2.4) image fusion: after stitching, fusing the images with a bilinear interpolation algorithm.
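The registration pipeline of sub-steps (2.1)-(2.3) can be illustrated with a short Python sketch using OpenCV; the matcher settings, the ratio test, and the RANSAC threshold below are illustrative assumptions, not values fixed by the patent:

```python
import cv2
import numpy as np

def estimate_transform(img1, img2, ratio=0.75):
    """Coarse-to-fine registration: SIFT features + k-d tree matching + RANSAC."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Coarse registration: approximate nearest neighbours via a k-d tree index
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    matches = flann.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]

    # Fine registration: RANSAC rejects the remaining mismatches
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # transformation matrix then used for stitching
```

The returned matrix plays the role of the transformation matrix of sub-step (2.3); warping and bilinear fusion would follow.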
Step 3 comprises the following steps:
(3.1) separating the ground points and non-ground points of the point cloud with improved morphological filtering;
(3.2) filtering vegetation out of the ground points of step (3.1) using color invariants;
(3.3) filtering out non-building target points with thresholds based on the characteristics of buildings.
Step (3.1) includes the following process:
First, take any point and its neighborhood points to form a window of fixed size, and detect the lowest point within the window by a morphological opening operation; if the difference between the height value of a point in the window and the height of the lowest point is within a threshold range, the point is a ground point; all points of the point cloud are traversed and filtered in this way.
Second, obtain the window size needed for the next filtering pass according to $y = 2 \times w^k + 1$; as long as this size is smaller than the preset maximum filter window, perform another pass of morphological filtering; finally, when the window exceeds the preset maximum, the filtering ends. Here k is the iteration number and w is the size of the previous filter window.
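A minimal sketch of this progressive filtering loop is given below (Python with SciPy), assuming the point cloud has been rasterized to a height grid and using a single fixed height threshold; both simplifications are assumptions made for clarity:

```python
import numpy as np
from scipy.ndimage import grey_opening

def progressive_morphological_filter(dsm, w=2, max_window=33, height_threshold=0.5):
    """Label ground cells of a rasterized height grid.

    dsm: 2-D array of point-cloud heights.
    The window grows as y = 2 * w**k + 1 until it exceeds max_window.
    """
    ground = np.ones(dsm.shape, dtype=bool)
    surface = dsm.copy()
    k = 1
    window = 2 * w ** k + 1
    while window <= max_window:
        opened = grey_opening(surface, size=(window, window))  # morphological opening
        # cells far above the opened (minimum-based) surface are non-ground
        ground &= (surface - opened) <= height_threshold
        surface = opened
        k += 1
        window = 2 * w ** k + 1
    return ground
```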
Step (3.2) includes the following process:
Because the point cloud generated by dense image matching carries color information, green vegetation is filtered out using color invariant theory. Let the coordinates of each point in the cloud be (x, y, z) and its three color channels be (R, G, B), and let the color invariant threshold for vegetation be $T_g$. The color invariant defined from the green and blue channels is:

$$\psi_g(x,y,z) = \frac{4}{\pi} \times \arctan\left(\frac{I_g(x,y,z) - I_b(x,y,z)}{I_g(x,y,z) + I_b(x,y,z)}\right) \qquad (1)$$

where $I_g(x,y,z)$ and $I_b(x,y,z)$ are the green and blue component values of the point cloud at the point (x, y, z), and $\psi_g(x,y,z)$ is the color invariant value at (x, y, z). When $\psi_g < T_g$, the point is a vegetation point; when $\psi_g > T_g$, the point is a non-vegetation point.
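A short sketch of this color-invariant vegetation filter over a colored point cloud (NumPy; the array layout and the default threshold are assumptions):

```python
import numpy as np

def filter_vegetation(points, colors, t_g=0.0):
    """Split points into non-vegetation / vegetation by the color invariant.

    points: (N, 3) array of x, y, z; colors: (N, 3) array of R, G, B.
    Implements psi_g = (4/pi) * arctan((G - B) / (G + B)), formula (1).
    """
    g = colors[:, 1].astype(float)
    b = colors[:, 2].astype(float)
    psi = (4.0 / np.pi) * np.arctan((g - b) / (g + b + 1e-9))
    vegetation = psi < t_g  # per the patent, psi_g < T_g marks a vegetation point
    return points[~vegetation], points[vegetation]
```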
Step 4 segments the point cloud of step 3 by a region growing method based on plane fitting, obtaining the point cloud region $V_{Bi}$ of each building, and specifically comprises the following steps:
(4.1) dividing the points of the cloud into two classes: if the neighborhood of a point contains fewer than the preset number of points, the point is a boundary point; otherwise it is an internal point;
(4.2) letting $p_0(x_0, y_0, z_0)$ be an internal point with eight-neighborhood points $\{p_1, p_2, \ldots, p_8\}$, fitting the plane equation (2) to these points by least squares, implemented as follows:

$$z = ax + by + c \qquad (2)$$

For the point $p_0$ and the points of its neighborhood, $(x_i, y_i, z_i)$, $i = 0, 1, 2, \ldots, 7$, the plane of formula (2) is computed by making expression (3) minimal:

$$S = \sum_{i=0}^{7} \left(a x_i + b y_i + c - z_i\right)^2 \qquad (3)$$

For S to be minimal, the conditions $\frac{\partial S}{\partial a} = 0$, $\frac{\partial S}{\partial b} = 0$, $\frac{\partial S}{\partial c} = 0$ must hold, which yields the following normal equation system:

$$\begin{bmatrix} \sum x_i^2 & \sum x_i y_i & \sum x_i \\ \sum x_i y_i & \sum y_i^2 & \sum y_i \\ \sum x_i & \sum y_i & n \end{bmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} \sum x_i z_i \\ \sum y_i z_i \\ \sum z_i \end{pmatrix}, \quad i = 0, 1, \ldots, 7;\ n = 8 \qquad (4)$$

(4.3) computing the sum of squared differences SSD of any internal point over its 8-neighborhood according to formula (5), as sketched after this list:

$$SSD = \sum_{p_k \in M} \left(z_k - h_k\right)^2 \qquad (5)$$

where M is the set consisting of $p_0$ and its eight-neighborhood points, and $h_k$ and $z_k$ are the observed height value and the fitted-plane height value, respectively;
(4.4) sorting the points of the cloud in ascending order of SSD, and taking the point with the smallest SSD as the seed point;
(4.5) computing, for each point in the seed point's neighborhood, the distance h to the plane of the seed point; if $h < h_T$, $h_T$ being the height threshold, merging the point into the same region and marking the region; otherwise the point is a non-facet point;
(4.6) when all neighborhood points have been traversed, selecting an unprocessed point with the smallest SSD as the new seed point and repeating operation (4.5) until all points have been traversed.
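The plane fit of formulas (2)-(5) reduces to a small least-squares solve per neighborhood; the following sketch (NumPy; helper names are assumptions) computes the fitted plane and the SSD used to rank seed points:

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane z = a*x + b*y + c through pts (N, 3), formulas (2)-(4)."""
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    (a, b, c), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return a, b, c

def ssd(pts):
    """Sum of squared residuals between observed and fitted heights, formula (5)."""
    a, b, c = fit_plane(pts)
    fitted = a * pts[:, 0] + b * pts[:, 1] + c
    return float(np.sum((pts[:, 2] - fitted) ** 2))
```

Region growing then repeatedly takes the unprocessed point of smallest SSD as the seed and merges neighborhood points whose distance to the seed plane is below the height threshold $h_T$.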
Step 5 deletes the wall points of the building from the point cloud region $V_{Bi}$ of each building of step 4 and finally obtains the coarse contour information of the building from the top-surface boundary points, and specifically comprises the following steps:
(5.1) for the facets segmented in step 4: if the normal vector of a facet is parallel to the ground, it is a wall facet; if the normal vector is perpendicular to the ground, it is a roof facet; wall facets are therefore deleted according to their normal vectors;
(5.2) obtaining the boundary of the top-surface point cloud of (5.1) with the alpha-shape algorithm. The specific method is:
a) take any point p1 from the top-surface point cloud and detect, among the remaining points, the point set p2 whose distance from p1 is less than or equal to 2α, α being the radius of the circle; let p2 = {p21, p22, p23, ..., p2n};
b) take any point p2i from p2 and obtain the center p0 of the circle through p1 and p2i using formulas (6) and (7); given the two points (x1, y1), (x2, y2) and the circle radius α, the equations for the circle center (x0, y0) are:

$$\begin{cases} (x1 - x0)^2 + (y1 - y0)^2 = \alpha^2 \\ (x2 - x0)^2 + (y2 - y0)^2 = \alpha^2 \end{cases} \qquad (6)$$

Solving this system directly is relatively difficult, so the center is obtained with the distance intersection algorithm of surveying and mapping:

$$\begin{cases} x0 = x1 + \dfrac{x2 - x1}{2} + H(y2 - y1) \\[4pt] y0 = y1 + \dfrac{y2 - y1}{2} + H(x2 - x1) \end{cases} \qquad (7)$$

where $H = \sqrt{\dfrac{\alpha^2}{S^2} - \dfrac{1}{4}}$ and $S^2 = (x1 - x2)^2 + (y1 - y2)^2$;
c) compute the distance l from each point of the point set p2 to p0; if l > α, then p1 and p2i are boundary points; if l < α, go to the next step d);
d) repeat the three steps a) b) c) for the other points in p2 until all points in p2 have been judged.
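A compact sketch of this alpha-shape boundary test (NumPy; the brute-force neighbor search is an assumption for clarity, a k-d tree would normally replace it, and the perpendicular offset follows the standard distance-intersection construction of formulas (6)-(7)):

```python
import numpy as np

def alpha_shape_boundary(pts, alpha):
    """Return indices of boundary points of a 2-D point set pts (N, 2):
    p1 and p2i lie on the boundary if some circle of radius alpha through
    both of them contains no other point of the set."""
    boundary = set()
    for i, p1 in enumerate(pts):
        d = np.linalg.norm(pts - p1, axis=1)
        near = np.where((d > 0) & (d <= 2 * alpha))[0]  # point set p2
        for j in near:
            p2 = pts[j]
            s2 = np.sum((p1 - p2) ** 2)
            h = np.sqrt(max(alpha**2 / s2 - 0.25, 0.0))   # H of formula (7)
            mid = (p1 + p2) / 2.0
            perp = np.array([p2[1] - p1[1], p1[0] - p2[0]])  # perpendicular, length S
            for c in (mid + h * perp, mid - h * perp):       # two candidate centers p0
                dist = np.linalg.norm(pts - c, axis=1)
                if not np.any(dist < alpha * (1 - 1e-6)):    # circle is empty
                    boundary.update((i, j))
    return sorted(boundary)
```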
Step 6 superimposes the top-surface point cloud contour points of each building obtained in step 5 onto the image, obtaining the building buffer zone $\Omega_{Bi}$ on the image. The detailed process is as follows:
Let the projection matrix $P_{3\times4}$ represent the known interior and exterior orientation elements of single-view geometry, and compute the matrix $P_{3\times4}$:

$$P = \begin{bmatrix} -f & 0 & x_0 \\ 0 & f & y_0 \\ 0 & 0 & 1 \end{bmatrix} R^T \begin{bmatrix} 1 & 0 & 0 & -X_s \\ 0 & 1 & 0 & -Y_s \\ 0 & 0 & 1 & -Z_s \end{bmatrix} \qquad (8)$$

where P is the projection matrix $P_{3\times4}$, f is the focal length of the image, $x_0$ and $y_0$ are the horizontal and vertical offsets of the optical axis from the principal point, $X_s, Y_s, Z_s$ are the coordinates of the projection center in the world coordinate system, and $R^T$ is the 3 × 3 rotation matrix.
The projection formula is:

$$\begin{bmatrix} xz \\ yz \\ z \end{bmatrix} = P \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \qquad (9)$$

The projection of a point of the point cloud onto the image is computed with formulas (8) and (9); x, y, z are the coordinates of the object point in the camera coordinate system, and X, Y, Z are the coordinates of the object point in the world coordinate system.
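A sketch of this projection from world coordinates to image coordinates per formulas (8) and (9) (NumPy; the function names are assumptions):

```python
import numpy as np

def projection_matrix(f, x0, y0, R, Xs, Ys, Zs):
    """Build the 3x4 projection matrix of formula (8)."""
    K = np.array([[-f, 0.0, x0],
                  [0.0,  f, y0],
                  [0.0, 0.0, 1.0]])
    T = np.array([[1.0, 0.0, 0.0, -Xs],
                  [0.0, 1.0, 0.0, -Ys],
                  [0.0, 0.0, 1.0, -Zs]])
    return K @ R.T @ T

def project(P, XYZ):
    """Project world points (N, 3) to image coordinates per formula (9)."""
    XYZ1 = np.hstack([XYZ, np.ones((len(XYZ), 1))])
    xz_yz_z = (P @ XYZ1.T).T
    return xz_yz_z[:, :2] / xz_yz_z[:, 2:3]  # divide xz, yz by z to get (x, y)
```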
Step 7 connects the boundary of the points superimposed on the image in step 6, superimposes the resulting building coarse contour on the stitched image as the buffer zone, and, using the shape of the building coarse contour as prior information, evolves a level-set algorithm within the buffer zone to obtain the precise building contour. The specific steps are as follows:
(7.1) tracking the boundary of the points superimposed on the image to obtain a closed contour; enlarging the contour to obtain the buffer zone $\Omega_{Bi}$ for building contour extraction on the image;
(7.2) taking the contour of the image region obtained in (7.1) as the initial level set and its shape as prior information, carrying out local level-set evolution in the remote sensing image inside the buffer zone to obtain the fine building contour of the remote sensing image. The specific implementation is as follows:
First, let the preset buffer zone of each building on the image be $\Omega_{Bi}$, and let the contour shape obtained in (7.1) be $\Phi_{Pi}$; $\Phi_{Pi}$ serves as the prior shape and $\Phi_{Si}$ as the target segmentation contour. The energy functional based on the shape constraint is defined as:

$$E_{to} = E(c_1, c_2, \Phi_{Si}) + E_{sh}(\Phi_{Si}, \Phi_{Pi}) \qquad (10)$$

The initial contour and prior shape of the target are expressed over the motion target region:

$$E_{sh}(\phi_{Si}, \phi_{Pi}) = \int_{\Omega_{Bi}} \left(\phi_{Si}(x,y) - \phi_{Pi}(x,y)\right)^2 dx\, dy \qquad (11)$$

The prior-art energy function based on the level-set C-V model is:

$$\begin{aligned} E(\phi, c_1, c_2) ={}& \lambda_1 \int_\Omega (u_0(x,y) - c_1)^2 H(\phi)\, dx\, dy + \lambda_2 \int_\Omega (u_0(x,y) - c_2)^2 (1 - H(\phi))\, dx\, dy \\ &+ \mu \int_\Omega H(\phi)\, dx\, dy + \nu \int_\Omega \delta(\phi)\,|\nabla\phi|\, dx\, dy \end{aligned} \qquad (12)$$

In the formula, H(φ) is the Heaviside function, of the form $H(\phi)=\begin{cases}1, & \phi \ge 0\\ 0, & \phi < 0\end{cases}$; δ(φ) is the Dirac function, of the form $\delta(\phi) = \frac{d H(\phi)}{d\phi}$; $u_0(x,y)$ is the gray value of a point of the image region to be processed, and $|\nabla\phi|$ is the modulus of the gradient at the current point. The coefficients $\lambda_1, \lambda_2 > 0$ and $\mu, \nu \ge 0$ are preset parameters, typically $\lambda_1 = \lambda_2 = 1$, $\mu = 0$, $\nu = 1$. The partial differential equation corresponding to formula (12) is:

$$\frac{\partial\phi}{\partial t} = \delta(\phi)\left[\mu\,\nabla\cdot\frac{\nabla\phi}{|\nabla\phi|} - \nu - \lambda_1\left(u_0(x,y) - c_1\right)^2 + \lambda_2\left(u_0(x,y) - c_2\right)^2\right] \qquad (13)$$

where the parameters $c_1$ and $c_2$ are obtained from:

$$c_1 = \frac{\int_\Omega u_0(x,y)\,H(\phi)\,dx\,dy}{\int_\Omega H(\phi)\,dx\,dy}, \qquad c_2 = \frac{\int_\Omega u_0(x,y)\,[1 - H(\phi)]\,dx\,dy}{\int_\Omega [1 - H(\phi)]\,dx\,dy} \qquad (14)$$

According to formula (13), the building region is extracted by segmentation with the C-V level-set evolution method; the building segmentation object of the image is $v_{iSeg}$. The C-V level set is prior art.
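A minimal numerical sketch of one C-V evolution step per formulas (13) and (14) (NumPy); the smoothed Heaviside and Dirac functions and the time step are standard regularization choices assumed here, not specified in the patent, and μ = 0 drops the curvature term per the typical parameters above:

```python
import numpy as np

def cv_step(phi, u0, lam1=1.0, lam2=1.0, nu=1.0, eps=1.0, dt=0.5):
    """One Chan-Vese update of the level set phi over grayscale image u0."""
    H = 0.5 * (1 + (2 / np.pi) * np.arctan(phi / eps))  # smoothed Heaviside
    delta = (eps / np.pi) / (eps**2 + phi**2)           # smoothed Dirac

    # Region means c1, c2 -- formula (14)
    c1 = (u0 * H).sum() / (H.sum() + 1e-9)
    c2 = (u0 * (1 - H)).sum() / ((1 - H).sum() + 1e-9)

    # Evolution force -- formula (13) with mu = 0 (no curvature term)
    force = -nu - lam1 * (u0 - c1) ** 2 + lam2 * (u0 - c2) ** 2
    return phi + dt * delta * force
```

Iterating this update inside the buffer zone, with phi initialized from the coarse contour, drives the contour toward the precise building boundary.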
Compared with the prior art, the present invention has the following advantages and beneficial effects:
(1) Although only a single data source is used, two classes of data information can be generated, achieving the effect of extracting building contours from two classes of data sources; the complexity of the method is reduced and production cost is saved.
(2) The coarse building contour is obtained from the three-dimensional information of the point cloud generated from the images, while the high-precision contour information of the high-resolution images is also used; the two complement and fuse with each other, improving both the automation level and the accuracy of building contour extraction.
(3) The building coarse contour extracted from the three-dimensional point cloud information serves as the prior shape information of the level-set evolution and builds the buffer zone for contour extraction, which guarantees both the speed and the accuracy of extracting building contours with level-set evolution.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is the grouping diagram of the image at point A in the flight-strip map of step 1.
Embodiment
The present invention proposes a building contour line extraction method for UAV remote sensing images. The method first combines the aerial triangulation algorithm of photogrammetry with multi-view stereo reconstruction from computer vision to quickly generate a three-dimensional point cloud with geographic coordinates; it then extracts the coarse contour information of buildings by segmenting the three-dimensional point cloud, builds the coarse contour information into a buffer zone superimposed on the high-resolution image, and, using the coarse contour as prior shape information, performs level-set evolution iterations within the buffer zone to obtain the precise geometric contour of the building. Because the three-dimensional point cloud is generated by dense image matching, there is no registration difficulty between the point cloud and the image. The technical solution of the present invention is described in detail below with reference to the drawings and embodiments. The flow chart is shown in Fig. 1, and the technical flow of the embodiment comprises the following steps:
Step 1, performing adjustment on the UAV remote sensing images by aerial triangulation while performing dense matching with the GPU-accelerated PMVS algorithm, finally obtaining a high-precision dense colored point cloud with geographic coordinates. The specific implementation is as follows:
(1) preprocessing the multi-overlap UAV remote sensing images using prior information:
There is a certain degree of overlap between adjacent UAV images. Because the volume of aerial data is very large, direct three-dimensional reconstruction would, on the one hand, fail to obtain a good reconstruction effect and, on the other hand, make the reconstruction computationally heavy and time-consuming. The images are therefore grouped using the existing POS information and flight-strip prior information. In this implementation the forward overlap of the UAV images is 80% and the side overlap is 35%, so a given image should be grouped with the 4 consecutive images of the same strip and the 2 consecutive images of the adjacent strip, as sketched below. The grouping of the image at point A in the flight-strip map is shown in Fig. 2, framed by the black rectangular dotted line.
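A toy sketch of this grouping rule (Python); the strip data structure and the exact neighbor counts are assumptions based on the 80%/35% overlap example above:

```python
def group_images(strips):
    """strips: list of flight strips, each a list of image ids in strip order.

    Groups each image with the next images of its own strip (4 consecutive
    images total) and the 2 nearest images of the adjacent strip."""
    groups = {}
    for s, strip in enumerate(strips):
        for i, img in enumerate(strip):
            group = strip[i:i + 4]  # same-strip consecutive images
            if s + 1 < len(strips) and len(strips[s + 1]) >= 2:
                j = max(0, min(i, len(strips[s + 1]) - 2))
                group += strips[s + 1][j:j + 2]  # adjacent-strip pair
            groups[img] = group
    return groups

# Usage: groups = group_images([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
```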
(2) performing aerial triangulation on the basis of step (1), building the aerial triangulation network to obtain the exterior orientation elements of each image, and carrying out overall bundle block adjustment. This step can be realized with the prior art and is not detailed by the present invention.
(3) grouping the images and performing fast dense matching on the basis of step (2) with the prior-art GPU-accelerated PMVS algorithm, generating a dense three-dimensional point cloud; the reconstructed point cloud serves as three-dimensional elevation data.
Step 2, stitching the adjusted UAV remote sensing images. The specific flow is as follows:
(1) feature extraction: extracting image features with SIFT.
(2) image registration: first performing coarse registration, searching for matching feature points with a k-d tree; then performing fine registration, rejecting the mismatches that coarse registration inevitably produces with the RANSAC algorithm. The transformation matrix between images is obtained through this registration.
(3) image stitching: stitching the images with the transformation matrix obtained in (2).
(4) image fusion: after stitching, fusing the images with a bilinear interpolation algorithm.
Step 3, filtering the colored point cloud. The specific implementation is as follows:
(1) separating the ground points and non-ground points of the point cloud with improved morphological filtering. The calculation process is: first, take any point and its neighborhood points to form a window of fixed size, and detect the lowest point within the window by a morphological opening operation; if the difference between the height value of a point in the window and the height of the lowest point is within a threshold range, the point is a ground point; all points of the point cloud are traversed and filtered in this way. Second, obtain the window size needed for the next filtering pass according to $y = 2 \times w^k + 1$ (where k is the iteration number and w is the size of the previous filter window); as long as this size is smaller than the preset maximum filter window, perform another pass of morphological filtering. Finally, when the window exceeds the preset maximum, the filtering ends.
(2) filtering vegetation out of the ground points of step (1) using color invariants. Because the point cloud generated by dense image matching carries color information, green vegetation can be filtered with color invariant theory. Let the coordinates of each point in the cloud be (x, y, z) and its three color channels be (R, G, B), and let the color invariant threshold for vegetation be $T_g$; the color invariant defined from the green and blue channels is formula (1) above, where $I_g(x,y,z)$ and $I_b(x,y,z)$ are the green and blue component values of the point cloud at (x, y, z) and $\psi_g(x,y,z)$ is the color invariant value at (x, y, z). When $\psi_g < T_g$, the point is a vegetation point; when $\psi_g > T_g$, the point is a non-vegetation point.
(3) filtering out non-building target points with thresholds based on the characteristics of buildings, for example a height of no less than 2 meters and an area of no less than 35 square meters.
Step 4, segmenting the point cloud of step 3 by the region growing method based on plane fitting, obtaining the point cloud region $V_{Bi}$ of each building. The specific flow is as follows:
(1) dividing the points of the cloud into two classes: if the neighborhood of a point contains fewer than the preset number of points, the point is a boundary point; otherwise it is an internal point.
(2) letting $p_0(x_0, y_0, z_0)$ be an internal point and $\{p_1, p_2, \ldots, p_8\}$ its eight-neighborhood points, fitting the plane equation (2) above to these points by least squares, where a, b, c are the parameters of the plane equation; for the point $p_0$ and its neighborhood points $(x_i, y_i, z_i)$, $i = 0, 1, 2, \ldots, 7$, the plane of formula (2) is computed by making expression (3) minimal, which yields the normal equation system (4) as above.
(3) computing the SSD of any internal point over its 8-neighborhood according to formula (5), where M is the set of $p_0$ and its eight-neighborhood points and $h_k$, $z_k$ are the observed and fitted-plane height values, respectively.
(4) sorting the points of the cloud in ascending order of SSD and taking the point with the smallest SSD as the seed point.
(5) computing, for each point in the seed point's neighborhood, the distance h to the plane of the seed point; if $h < h_T$, merging the point into the same region and marking the region; otherwise the point is a non-facet point. Those skilled in the art can preset the corresponding threshold.
(6) when all neighborhood points have been traversed, selecting an unprocessed point with the smallest SSD as the new seed point and repeating operation (5) until all points have been traversed.
Step 5, deleting the wall points of the building from the point cloud region $V_{Bi}$ of each building of step 4 and finally obtaining the coarse contour information of the building from the top-surface boundary points. The specific implementation process is as follows:
(1) for the facets segmented in step 4: if the normal vector of a facet is parallel to the ground, it is a wall facet; if the normal vector is perpendicular to the ground, it is a roof facet. Wall facets are therefore deleted according to their normal vectors.
(2) obtaining the boundary of the top-surface point cloud of (1) with the alpha-shape algorithm. The specific method is: a) take any point p1 from the top-surface point cloud and detect, among the remaining points, the point set p2 whose distance from p1 is less than or equal to 2α (α being the radius of the circle); let p2 = {p21, p22, p23, ..., p2n}; b) take any point p2i from p2 and obtain the center p0 of the circle through p1 and p2i using formulas (6) and (7) above, where $H = \sqrt{\alpha^2/S^2 - 1/4}$ and $S^2 = (x1 - x2)^2 + (y1 - y2)^2$; solving system (6) directly is relatively difficult, so the center is obtained with the distance intersection algorithm of surveying and mapping; c) compute the distance l from each point of p2 to p0; if l > α, then p1 and p2i are boundary points; if l < α, go to the next step d); d) repeat the three steps a) b) c) for the other points in p2 until all points in p2 have been judged.
Step 6, superimposing the top-surface point cloud contour points of each building obtained in step 5 onto the image to obtain the building buffer zone $\Omega_{Bi}$ on the image. Because the building top-surface boundary points of step 5 are reconstructed from multi-view stereo matching of the images, projecting these points back onto the image only requires the inverse operation of the 2D-to-3D projection matrix. The specific flow follows formulas (8) and (9) above: let the projection matrix $P_{3\times4}$ represent the known interior and exterior orientation elements of single-view geometry and compute $P_{3\times4}$ by formula (8), where P is the projection matrix, f is the focal length of the image, $x_0$ and $y_0$ are the horizontal and vertical offsets of the optical axis from the principal point, $X_s, Y_s, Z_s$ are the coordinates of the projection center in the world coordinate system, and $R^T$ is the 3 × 3 rotation matrix. The projection of a point of the point cloud onto the image is computed with formulas (8) and (9); x, y, z are the coordinates of the object point in the camera coordinate system and X, Y, Z its coordinates in the world coordinate system.
Step 7, connecting the boundary of the points superimposed on the image in step 6, superimposing the resulting building coarse contour on the stitched image as the buffer zone, and, using the shape of the building coarse contour as prior information, evolving the level-set algorithm within the buffer zone to obtain the precise building contour. The detailed process is as follows:
(1) tracking the boundary of the points superimposed on the image to obtain a closed contour, and enlarging the contour to obtain the buffer zone $\Omega_{Bi}$ for building contour extraction on the image.
(2) taking the contour of the image region obtained in (1) as the initial level set and its shape as prior information, carrying out local level-set evolution in the remote sensing image inside the buffer zone to obtain the fine building contour of the remote sensing image. The specific implementation follows formulas (10)-(14) above: the preset buffer zone of each building on the image is $\Omega_{Bi}$; the contour shape obtained in (1) is $\Phi_{Pi}$, which serves as the prior shape, and $\Phi_{Si}$ is the target segmentation contour. The shape-constrained energy functional is formula (10), the prior-shape term over the motion target region is formula (11), and the prior-art C-V level-set energy function is formula (12), with Heaviside function H(φ) and Dirac function δ(φ) as defined above, $u_0(x,y)$ the gray value of a point of the image region to be processed, and $|\nabla\phi|$ the modulus of the gradient at the current point. The coefficients $\lambda_1, \lambda_2 > 0$ and $\mu, \nu \ge 0$ are preset parameters; it is generally recommended to take $\lambda_1 = \lambda_2 = 1$, $\mu = 0$, $\nu = 1$. The partial differential equation corresponding to formula (12) is formula (13), with the parameters $c_1$ and $c_2$ given by formula (14). According to formula (13), the building region is extracted by segmentation with the C-V level-set evolution method; the building segmentation object of the image is $v_{iSeg}$. The C-V level set is prior art and is not detailed by the present invention.
For a distance $d_{Buffer1}$, a local buffer zone $v_{iBuf}$ is established at the corresponding position $v_i$ on the post-disaster remote sensing image; the image region within the contour of the buffer zone $v_{iBuf}$ is the current region to be processed Ω. $d_{Buffer1}$ can be preset by those skilled in the art as appropriate.

Claims (3)

1. A building contour line extraction method for multi-overlap UAV remote sensing images, characterized by comprising the following steps:
Step 1, performing adjustment on the UAV remote sensing images by aerial triangulation while performing dense matching on the images with a GPU-accelerated PMVS algorithm, finally obtaining a high-precision dense colored point cloud;
Step 2, stitching the adjusted UAV remote sensing images;
Step 3, filtering the colored point cloud:
first separating ground from non-ground with an improved morphological filtering algorithm, then filtering out the vegetation among the ground points using color invariants, and finally filtering out non-buildings using height and area as thresholds;
Step 4, detecting buildings in the point cloud with a region growing algorithm;
Step 5, deleting the walls of the building and finally obtaining the coarse contour information of the building by fitting the top-surface boundary;
Step 6, superimposing the building coarse contour obtained in step 5 on the stitched image to form the buffer zone for building contour extraction;
Step 7, using the shape of the building coarse contour as prior information, evolving a level-set algorithm within the buffer zone to obtain the precise building contour;
step 3 comprises the following steps:
(3.1) separating the ground points and non-ground points of the point cloud with improved morphological filtering;
(3.2) filtering vegetation out of the ground points of step (3.1) using color invariants;
(3.3) filtering out non-building target points with thresholds based on the characteristics of buildings;
step (3.1) includes the following process:
first, taking any point and its neighborhood points to form a window of fixed size, and detecting the lowest point within the window by a morphological opening operation; if the difference between the height value of a point in the window and the height of the lowest point is within a threshold range, the point is a ground point; all points of the point cloud are traversed and filtered in this way;
second, obtaining the window size needed for the next filtering pass according to $y = 2 \times w^k + 1$; as long as this size is smaller than the preset maximum filter window, performing another pass of morphological filtering; finally, when the window exceeds the preset maximum, ending the filtering;
where k is the iteration number and w is the size of the previous filter window;
step (3.2) includes the following process:
because the point cloud generated by dense image matching carries color information, green vegetation is filtered out using color invariant theory; letting the coordinates of each point in the cloud be (x, y, z), its three color channels be (R, G, B), and the color invariant threshold for vegetation be $T_g$, the color invariant defined from the green and blue channels is:
$$\psi_g(x,y,z) = \frac{4}{\pi} \times \arctan\left(\frac{I_g(x,y,z) - I_b(x,y,z)}{I_g(x,y,z) + I_b(x,y,z)}\right) \qquad (1)$$
where $I_g(x,y,z)$ and $I_b(x,y,z)$ are the green and blue component values of the point cloud at the point (x, y, z), and $\psi_g(x,y,z)$ is the color invariant value at (x, y, z); when $\psi_g < T_g$, the point is a vegetation point; when $\psi_g > T_g$, the point is a non-vegetation point;
step 4 segments the point cloud of step 3 by a region growing method based on plane fitting, obtaining the point cloud region $V_{Bi}$ of each building, and specifically comprises the following steps:
(4.1) dividing the points of the cloud into two classes: if the neighborhood of a point contains fewer than the preset number of points, the point is a boundary point; otherwise it is an internal point;
(4.2) letting $p_0(x_0, y_0, z_0)$ be an internal point with eight-neighborhood points $\{p_1, p_2, \ldots, p_8\}$, fitting the plane equation (2) to these points by least squares, implemented as follows:
$$z = ax + by + c \qquad (2)$$
for the point $p_0$ and the points of its neighborhood, $(x_i, y_i, z_i)$, $i = 0, 1, 2, \ldots, 7$, the plane of formula (2) is computed by making expression (3) minimal:
$$S = \sum_{i=0}^{7} \left(a x_i + b y_i + c - z_i\right)^2 \qquad (3)$$
for S to be minimal, the conditions $\frac{\partial S}{\partial a} = 0$, $\frac{\partial S}{\partial b} = 0$ and $\frac{\partial S}{\partial c} = 0$ must hold, where a, b, c are the parameters of the plane equation (2) and the derivatives are those of S with respect to a, b and c; this yields the following normal equation system:
$$\begin{bmatrix} \sum x_i^2 & \sum x_i y_i & \sum x_i \\ \sum x_i y_i & \sum y_i^2 & \sum y_i \\ \sum x_i & \sum y_i & n \end{bmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} \sum x_i z_i \\ \sum y_i z_i \\ \sum z_i \end{pmatrix}, \quad i = 0, 1, \ldots, 7;\ n = 8 \qquad (4)$$
(4.3) computing the sum of squared differences SSD of any internal point over its 8-neighborhood according to formula (5):
$$SSD = \sum_{p_k \in M} \left(z_k - h_k\right)^2 \qquad (5)$$
where M is the set consisting of $p_0$ and its eight-neighborhood points, and $h_k$ and $z_k$ are the observed height value and the fitted-plane height value, respectively;
(4.4) sorting the points of the cloud in ascending order of SSD and taking the point with the smallest SSD as the seed point;
(4.5) computing, for each point in the seed point's neighborhood, the distance h to the plane of the seed point; if $h < h_T$, $h_T$ being the height threshold, merging the point into the same region and marking the region; otherwise the point is a non-facet point;
(4.6) when all neighborhood points have been traversed, selecting an unprocessed point with the smallest SSD as the new seed point and repeating operation (4.5) until all points have been traversed;
step 5 deletes the wall points of the building from the point cloud region $V_{Bi}$ of each building of step 4 and finally obtains the coarse contour information of the building from the top-surface boundary points, and specifically comprises the following steps:
(5.1) for the facets segmented in step 4: if the normal vector of a facet is parallel to the ground, it is a wall facet; if the normal vector is perpendicular to the ground, it is a roof facet; wall facets are therefore deleted according to their normal vectors;
(5.2) obtaining the boundary of the top-surface point cloud of (5.1) with the alpha-shape algorithm; the specific method is:
a) taking any point p1 from the top-surface point cloud and detecting, among the remaining points, the point set p2 whose distance from p1 is less than or equal to 2α, α being the radius of the circle; letting p2 = {p21, p22, p23, ..., p2n};
b) taking any point p2i from p2 and obtaining the center p0 of the circle through p1 and p2i using formulas (6) and (7); given the two points (x1, y1), (x2, y2) and the circle radius α, the equations for the circle center (x0, y0) are:
$$\begin{cases} (x1 - x0)^2 + (y1 - y0)^2 = \alpha^2 \\ (x2 - x0)^2 + (y2 - y0)^2 = \alpha^2 \end{cases} \qquad (6)$$
solving this system directly is relatively difficult, so the center is obtained with the distance intersection algorithm of surveying and mapping:
$$\begin{cases} x0 = x1 + \dfrac{x2 - x1}{2} + H(y2 - y1) \\[4pt] y0 = y1 + \dfrac{y2 - y1}{2} + H(x2 - x1) \end{cases} \qquad (7)$$
where $H = \sqrt{\dfrac{\alpha^2}{S^2} - \dfrac{1}{4}}$ and $S^2 = (x1 - x2)^2 + (y1 - y2)^2$;
c) computing the distance l from each point of the point set p2 to p0; if l > α, then p1 and p2i are boundary points; if l < α, going to the next step d);
d) repeating the three steps a) b) c) for the other points in p2 until all points in p2 have been judged;
step 6 superimposes the top-surface point cloud contour points of each building obtained in step 5 onto the image, obtaining the buffer zone $\Omega_{Bi}$ for building contour extraction on the image; the detailed process is as follows:
letting the projection matrix $P_{3\times4}$ represent the known interior and exterior orientation elements of single-view geometry, computing the matrix $P_{3\times4}$:
$$P = \begin{bmatrix} -f & 0 & x_0 \\ 0 & f & y_0 \\ 0 & 0 & 1 \end{bmatrix} R^T \begin{bmatrix} 1 & 0 & 0 & -X_s \\ 0 & 1 & 0 & -Y_s \\ 0 & 0 & 1 & -Z_s \end{bmatrix} \qquad (8)$$
where P is the projection matrix $P_{3\times4}$, f is the focal length of the image, $x_0$ and $y_0$ are the horizontal and vertical offsets of the optical axis from the principal point, $X_s, Y_s, Z_s$ are the coordinates of the projection center in the world coordinate system, and $R^T$ is the 3 × 3 rotation matrix;
the projection formula is:
$$\begin{bmatrix} xz \\ yz \\ z \end{bmatrix} = P \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \qquad (9)$$
the projection of a point of the point cloud onto the image is computed with formulas (8) and (9); x, y, z are the coordinates of the object point in the camera coordinate system, and X, Y, Z are the coordinates of the object point in the world coordinate system;
step 7 connects the boundary of the points superimposed on the image in step 6, superimposes the resulting building coarse contour on the stitched image as the buffer zone, and, using the shape of the building coarse contour as prior information, evolves a level-set algorithm within the buffer zone to obtain the precise building contour; the specific steps are as follows:
(7.1) tracking the boundary of the points superimposed on the image to obtain a closed contour; enlarging the contour to obtain the buffer zone $\Omega_{Bi}$ for building contour extraction on the image;
(7.2) taking the contour of the image region obtained in (7.1) as the initial level set and its shape as prior information, carrying out local level-set evolution in the remote sensing image inside the buffer zone to obtain the fine building contour of the remote sensing image; the specific implementation is as follows:
first, letting the preset buffer zone of each building on the image be $\Omega_{Bi}$ and the contour shape obtained in (7.1) be $\Phi_{Pi}$, with $\Phi_{Pi}$ as the prior shape and $\Phi_{Si}$ as the target segmentation contour, the energy functional based on the shape constraint is defined as:
$$E_{to} = E(c_1, c_2, \Phi_{Si}) + E_{sh}(\Phi_{Si}, \Phi_{Pi}) \qquad (10)$$
the initial contour and prior shape of the target are expressed over the motion target region:
$$E_{sh}(\phi_{Si}, \phi_{Pi}) = \int_{\Omega_{Bi}} \left(\phi_{Si}(x,y) - \phi_{Pi}(x,y)\right)^2 dx\, dy \qquad (11)$$
the energy function based on the level-set C-V model is:
$$\begin{aligned} E(\phi, c_1, c_2) ={}& \lambda_1 \int_\Omega (u_0(x,y) - c_1)^2 H(\phi)\, dx\, dy + \lambda_2 \int_\Omega (u_0(x,y) - c_2)^2 (1 - H(\phi))\, dx\, dy \\ &+ \mu \int_\Omega H(\phi)\, dx\, dy + \nu \int_\Omega \delta(\phi)\,|\nabla\phi|\, dx\, dy \end{aligned} \qquad (12)$$
in the formula, H(φ) is the Heaviside function, of the form $H(\phi)=\begin{cases}1, & \phi \ge 0\\ 0, & \phi < 0\end{cases}$; δ(φ) is the Dirac function, of the form $\delta(\phi) = \frac{d H(\phi)}{d\phi}$; $u_0(x,y)$ is the gray value of a point of the image region to be processed, $|\nabla\phi|$ is the modulus of the gradient at the current point, and the coefficients $\lambda_1, \lambda_2 > 0$, $\mu, \nu \ge 0$ are preset parameters, typically $\lambda_1 = \lambda_2 = 1$, $\mu = 0$, $\nu = 1$; the partial differential equation corresponding to formula (12) is:
$$\frac{\partial\phi}{\partial t} = \delta(\phi)\left[\mu\,\nabla\cdot\frac{\nabla\phi}{|\nabla\phi|} - \nu - \lambda_1\left(u_0(x,y) - c_1\right)^2 + \lambda_2\left(u_0(x,y) - c_2\right)^2\right] \qquad (13)$$
where the parameters $c_1$ and $c_2$ are obtained from:
$$c_1 = \frac{\int_\Omega u_0(x,y)\,H(\phi)\,dx\,dy}{\int_\Omega H(\phi)\,dx\,dy}, \qquad c_2 = \frac{\int_\Omega u_0(x,y)\,[1 - H(\phi)]\,dx\,dy}{\int_\Omega [1 - H(\phi)]\,dx\,dy} \qquad (14)$$
according to formula (13), the building region is extracted by segmentation with the C-V level-set evolution method, the building segmentation object of the image being $v_{iSeg}$.
2. The building contour line extraction method for multi-overlap UAV remote sensing images according to claim 1, characterized in that step 1 comprises the following steps:
(1.1) preprocessing the multi-overlap UAV remote sensing images using prior information;
(1.2) performing aerial triangulation on the basis of step (1.1), building the aerial triangulation network to obtain the exterior orientation elements of each image, and carrying out overall bundle block adjustment;
(1.3) grouping the images, performing fast dense matching on the basis of step (1.2) with the GPU-accelerated PMVS algorithm, and generating a dense three-dimensional point cloud; the generated three-dimensional point cloud serves as three-dimensional elevation data.
3. The building contour line extraction method for multi-overlap UAV remote sensing images according to claim 1, characterized in that step 2 comprises the following steps:
(2.1) feature extraction: extracting image features with SIFT;
(2.2) image registration: first performing coarse registration, searching for matching feature points with a k-d tree; then performing fine registration, rejecting the mismatches that coarse registration inevitably produces with the RANSAC algorithm; the transformation matrix between images is obtained through this registration;
(2.3) image stitching: stitching the images with the transformation matrix obtained in (2.2);
(2.4) image fusion: after stitching, fusing the images with a bilinear interpolation algorithm.
CN201510025503.9A 2015-01-19 2015-01-19 A building contour line extraction method for multi-overlap UAV remote sensing images Active CN104484668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510025503.9A CN104484668B (en) 2015-01-19 2015-01-19 A building contour line extraction method for multi-overlap UAV remote sensing images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510025503.9A CN104484668B (en) 2015-01-19 2015-01-19 A building contour line extraction method for multi-overlap UAV remote sensing images

Publications (2)

Publication Number Publication Date
CN104484668A CN104484668A (en) 2015-04-01
CN104484668B true CN104484668B (en) 2017-11-10

Family

ID=52759209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510025503.9A Active CN104484668B (en) 2015-01-19 2015-01-19 A kind of contour of building line drawing method of the how overlapping remote sensing image of unmanned plane

Country Status (1)

Country Link
CN (1) CN104484668B (en)

Families Citing this family (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104809689B (en) * 2015-05-15 2018-03-30 北京理工大学深圳研究院 A kind of building point cloud model base map method for registering based on profile
CN105046201B (en) * 2015-06-19 2018-05-08 东南大学 One kind quickly knows method for distinguishing based on morphology to building object image
CN105184854B (en) * 2015-08-24 2018-01-16 北京麦格天宝科技股份有限公司 For the fast modeling method of underground space scanning element cloud performance data
CN105808926B (en) * 2016-03-02 2017-10-03 中国地质大学(武汉) A kind of pre-conditional conjugate gradient block adjustment method accelerated parallel based on GPU
CN105844707B (en) * 2016-03-15 2019-04-16 南京大学 The method for automatically extracting Old City Wall data from LiDAR point cloud data based on city wall section
CN106023312B (en) * 2016-05-13 2019-02-22 南京大学 Three-dimensional building object model automatic reconstruction method based on aviation LiDAR data
CN106023224A (en) * 2016-05-30 2016-10-12 天水师范学院 PCNN automatic segmentation method for microscopic image of traditional Chinese medicine
CN106056614A (en) * 2016-06-03 2016-10-26 武汉大学 Building segmentation and contour line extraction method of ground laser point cloud data
CN106203335B (en) * 2016-07-11 2019-05-28 厦门大学 Sign board visibility evaluation method based on three-dimensional point cloud
CN107633523B (en) * 2016-07-18 2021-04-16 巧夺天宫(深圳)科技有限公司 Method and system for extracting building characteristic line based on point cloud
CN106709897B (en) * 2016-12-28 2019-11-26 武汉大学 Optimal splicing line finding method and system between orthography based on gradient field
CN107240128B (en) * 2017-05-09 2020-09-11 北京理工大学 X-ray and color photo registration method based on contour features
CN107103603A (en) * 2017-05-18 2017-08-29 北京道亨时代科技有限公司 A kind of Objects extraction method of inclination measurement scene
CN107393004A (en) * 2017-07-17 2017-11-24 北京数字绿土科技有限公司 A kind of method and device for obtaining building amount of demolition in power transmission line corridor
EP3451203A1 (en) * 2017-08-30 2019-03-06 Dassault Systèmes Computer-implemented method for computing an envelope for a building complying with shadow duration requirements
CN107514993B (en) * 2017-09-25 2019-11-05 同济大学 The collecting method and system towards single building modeling based on unmanned plane
CN107741233A (en) * 2017-11-10 2018-02-27 邦鼓思电子科技(上海)有限公司 A kind of construction method of the outdoor map of three-dimensional
CN108088422B (en) * 2018-01-24 2020-12-15 成都纵横自动化技术股份有限公司 Method for determining real overlapping rate
CN108571930A (en) * 2018-04-09 2018-09-25 刘默然 A kind of house measurement method using unmanned plane
CN108627112A (en) * 2018-05-09 2018-10-09 广州市杜格科技有限公司 Vehicle axis pin is away from dynamic measurement method
CN109242855B (en) * 2018-07-19 2020-08-11 中国科学院自动化研究所 Multi-resolution three-dimensional statistical information-based roof segmentation method, system and equipment
CN109146990B (en) * 2018-08-08 2023-02-24 广州市城市规划勘测设计研究院 Building outline calculation method
CN109584262A (en) * 2018-12-15 2019-04-05 中国科学院深圳先进技术研究院 Cloud detection method of optic, device and electronic equipment based on remote sensing image
CN109671109B (en) * 2018-12-25 2021-05-07 中国人民解放军61540部队 Dense point cloud generation method and system
CN110111349B (en) * 2019-04-22 2020-12-08 西安交通大学 Non-rigid body complex component high-precision edge extraction method based on point cloud
CN110097588B (en) * 2019-04-22 2021-01-15 西安交通大学 Shaping edge extraction method for aviation blade ceramic core point cloud model
CN112146564B (en) * 2019-06-28 2022-04-15 先临三维科技股份有限公司 Three-dimensional scanning method, three-dimensional scanning device, computer equipment and computer readable storage medium
CN110415337B (en) * 2019-07-12 2021-03-23 广州大学 Building roof reconstruction method and device based on image straight line features
CN112241661A (en) * 2019-07-17 2021-01-19 临沂大学 Urban ground feature fine classification method combining airborne LiDAR point cloud data and aerial image
CN110570428B (en) * 2019-08-09 2023-07-07 浙江合信地理信息技术有限公司 Method and system for dividing building roof sheet from large-scale image dense matching point cloud
CN110543872B (en) * 2019-09-12 2023-04-18 云南省水利水电勘测设计研究院 Unmanned aerial vehicle image building roof extraction method based on full convolution neural network
CN110910387B (en) * 2019-10-09 2022-03-04 西安理工大学 Point cloud building facade window extraction method based on significance analysis
CN110879978A (en) * 2019-11-13 2020-03-13 广西中煤地质有限责任公司 Building contour line automatic extraction method of unmanned aerial vehicle oblique photography three-dimensional model
CN111209920B (en) * 2020-01-06 2022-09-23 桂林电子科技大学 Airplane detection method under complex dynamic background
CN111652241B (en) * 2020-02-17 2023-04-28 中国测绘科学研究院 Building contour extraction method integrating image features and densely matched point cloud features
CN111523391B (en) * 2020-03-26 2021-11-05 上海刻羽信息科技有限公司 Building identification method, system, electronic device and readable storage medium
CN111383355B (en) * 2020-04-03 2023-06-27 如你所视(北京)科技有限公司 Three-dimensional point cloud completion method, device and computer readable storage medium
CN111322950B (en) * 2020-04-17 2021-08-17 易思维(杭州)科技有限公司 Method for positioning cylinder position by line structured light sensor and its use
CN111814715B (en) * 2020-07-16 2023-07-21 武汉大势智慧科技有限公司 Ground object classification method and device
CN112132029B (en) * 2020-09-23 2023-07-11 中国地震局地震预测研究所 Unmanned aerial vehicle remote sensing image rapid positioning method for earthquake emergency response
CN112232248B (en) * 2020-10-22 2023-04-07 中国人民解放军战略支援部队信息工程大学 Method and device for extracting plane features of multi-line LiDAR point cloud data
CN112748700A (en) * 2020-12-18 2021-05-04 深圳市显控科技股份有限公司 Numerical control code generation method and device, electronic equipment and storage medium
CN113296543B (en) * 2021-07-27 2021-09-28 成都睿铂科技有限责任公司 Method and system for planning aerial route
CN114782826B (en) * 2022-06-20 2022-11-18 绵阳天仪空间科技有限公司 Safety monitoring system and method for post-disaster building
CN115546421B (en) * 2022-12-02 2023-03-24 中南大学 Point-line double-domain mutual enhancement building geometric contour line reconstruction method
CN117274605B (en) * 2023-11-20 2024-03-01 北京飞渡科技股份有限公司 Method and device for extracting water area outline from photo shot by unmanned aerial vehicle

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009131108A1 (en) * 2008-04-23 2009-10-29 株式会社パスコ Building roof outline recognizing device, building roof outline recognizing method, and building roof outline recognizing program
CN102930540A (en) * 2012-10-26 2013-02-13 中国地质大学(武汉) Method and system for detecting contour of urban building
CN103020342A (en) * 2012-12-04 2013-04-03 南京大学 Method for extracting contour and corner of building from ground LiDAR data
CN104091369A (en) * 2014-07-23 2014-10-08 武汉大学 Unmanned aerial vehicle remote-sensing image building three-dimensional damage detection method

Also Published As

Publication number Publication date
CN104484668A (en) 2015-04-01

Similar Documents

Publication Publication Date Title
CN104484668B (en) A kind of contour of building line drawing method of the how overlapping remote sensing image of unmanned plane
CN110264567B (en) Real-time three-dimensional modeling method based on mark points
CN109872397B (en) Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
CN107481315A (en) A kind of monocular vision three-dimensional environment method for reconstructing based on Harris SIFT BRIEF algorithms
CN104156536B (en) The visualization quantitatively calibrating and analysis method of a kind of shield machine cutter abrasion
CN107451982B (en) High-canopy-density forest stand crown area acquisition method based on unmanned aerial vehicle image
CN112686935B (en) Airborne sounding radar and multispectral satellite image registration method based on feature fusion
CN103426200B (en) Tree three-dimensional reconstruction method based on unmanned aerial vehicle aerial photo sequence image
CN106780592A (en) Kinect depth reconstruction algorithms based on camera motion and image light and shade
CN103218787B (en) Multi-source heterogeneous remote sensing image reference mark automatic acquiring method
CN107063228A (en) Targeted attitude calculation method based on binocular vision
WO2018061010A1 (en) Point cloud transforming in large-scale urban modelling
CN106530337A (en) Non local stereopair dense matching method based on image gray scale guiding
CN112465849B (en) Registration method for laser point cloud and sequence image of unmanned aerial vehicle
CN111998862B (en) BNN-based dense binocular SLAM method
CN103839286B (en) The true orthophoto of a kind of Object Semanteme constraint optimizes the method for sampling
CN113012293A (en) Stone carving model construction method, device, equipment and storage medium
CN113129449B (en) Vehicle pavement feature recognition and three-dimensional reconstruction method based on binocular vision
CN106355643A (en) Method for generating three-dimensional real scene road model of highway
Karsli et al. Automatic building extraction from very high-resolution image and LiDAR data with SVM algorithm
Özdemir et al. Classification of aerial point clouds with deep learning
Gomez-Donoso et al. Three-dimensional reconstruction using SFM for actual pedestrian classification
CN116740288B (en) Three-dimensional reconstruction method integrating laser radar and oblique photography
CN112767459A (en) Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion
CN107784666B (en) Three-dimensional change detection and updating method for terrain and ground features based on three-dimensional images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant