CN104751451A - Dense point cloud extracting method of low-altitude high resolution image based on UAV (Unmanned Aerial Vehicle) - Google Patents

Dense point cloud extracting method of low-altitude high resolution image based on UAV (Unmanned Aerial Vehicle)

Info

Publication number
CN104751451A
Authority
CN
China
Prior art keywords
point
image
cloud
daisy
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510098230.0A
Other languages
Chinese (zh)
Other versions
CN104751451B (en)
Inventor
张绍明
陈宏敏
桂坡坡
骆遥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN201510098230.0A priority Critical patent/CN104751451B/en
Publication of CN104751451A publication Critical patent/CN104751451A/en
Application granted granted Critical
Publication of CN104751451B publication Critical patent/CN104751451B/en
Status: Expired - Fee Related
Anticipated expiration


Abstract

The invention relates to a dense point cloud extraction method for low-altitude, high-resolution UAV (Unmanned Aerial Vehicle) images. The method comprises the following steps: pairing each of the N-1 pairing images with a reference image to form stereo image pairs; taking a stereo pair as the unit, traversing all pixels in the reference image and searching the corresponding epipolar-line space in the paired image with an unequal-interval depth sampling algorithm; computing, with the Daisy algorithm, the Daisy descriptors of each point in the epipolar-line space and of the corresponding reference-image pixel to obtain a set of homonymous points; extracting the dense three-dimensional point cloud corresponding to each stereo pair; traversing the pairing images to extract N-1 sets of three-dimensional point clouds; and obtaining the final dense three-dimensional point cloud with a forced continuity check algorithm. Compared with the prior art, the method is faster, more reliable and more efficient, is well suited to correlation measurement between dense point clouds, and can effectively extract a dense low-altitude three-dimensional point cloud based on a UAV.

Description

Dense point cloud extraction method based on low-altitude, high-resolution UAV images
Technical field
The present invention relates to a data extraction method, and in particular to a dense point cloud extraction method based on low-altitude, high-resolution UAV images.
Background art
Reconstructing the digital elevation model (DEM) and digital surface model (DSM) of an area of the Earth's surface from aerial or satellite imagery has always been a main task and research focus of aerial photogrammetry. With the development of UAV technology and the gradual relaxation of domestic low-altitude airspace control policies, the work pattern of using a UAV as a sensor platform and flying below one kilometre to acquire aerial data has been adopted by a growing number of domestic enterprises and research institutions. This pattern both exploits the high manoeuvrability of UAVs and compensates for the loss of ground-object detail caused by the high flying altitude of large aircraft and satellites.
However, no domestic dense point cloud extraction method currently targets low-altitude, high-resolution UAV images, and existing data extraction methods perform poorly when applied to this problem. The prevailing solution is therefore to mount a laser scanner on the UAV and obtain the dense point cloud by low-altitude ground scanning; for example, Chinese patent CN103426165A discloses a fine registration method between ground laser point clouds and point clouds reconstructed from UAV images, which extracts road vector lines from laser point cloud data and matches them against remote sensing imagery. But a laser scanner costs far more than an SLR camera and is ill-suited to wide adoption.
Summary of the invention
The object of the present invention is to overcome the above-mentioned defects of the prior art and to provide a low-cost, high-efficiency, highly reliable dense point cloud extraction method based on low-altitude, high-resolution UAV images.
The object of the present invention can be achieved through the following technical solutions:
A dense point cloud extraction method based on low-altitude, high-resolution UAV images comprises the following steps:
1) from a sequence of N UAV images, select one image as the reference image and take the remaining images as pairing images;
2) pair each of the N-1 pairing images with the reference image to form stereo pairs;
3) taking one stereo pair as the unit, traverse all pixels in the reference image and, for each pixel, search the corresponding epipolar-line space in the paired image using the unequal-interval depth sampling algorithm;
4) use the Daisy algorithm to compute the Daisy descriptor of each point in the epipolar-line space and of its corresponding reference-image pixel, and compute the correlation probability distribution between the two points according to formula (1) to obtain the set of homonymous points (see the sketch after this list):
P(d) = \frac{1}{Z} \exp\left( -\frac{\left\| D_{base} - D_{match} \right\|^2}{\sigma} \right) \qquad (1)
where Z is the normalisation coefficient, σ is the variance of the correlation probability distribution, D_base is the Daisy descriptor of the reference-image pixel, and D_match is the Daisy descriptor of a point on the corresponding epipolar line of the paired image;
5) use the interior and exterior orientation elements of the corresponding images and the set of homonymous points to extract the dense three-dimensional point cloud of each stereo pair;
6) traverse the pairing images, repeating steps 3)-5), to extract N-1 sets of three-dimensional point clouds;
7) reject outliers with the forced continuity check algorithm to obtain the final dense three-dimensional point cloud.
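To make step 4) concrete, here is a minimal Python sketch of formula (1): it scores every sampled epipolar-line point against the reference pixel and normalises the scores into a probability distribution. The function name, the σ value and the random stand-in descriptors are illustrative assumptions, not part of the patent.

```python
import numpy as np

def correlation_probability(d_base, d_match, sigma, z=1.0):
    # Formula (1): P(d) = (1/Z) * exp(-||D_base - D_match||^2 / sigma)
    diff = np.asarray(d_base, float) - np.asarray(d_match, float)
    return np.exp(-diff.dot(diff) / sigma) / z

# Stand-in 36-D Daisy descriptors (Ds = 36 per the parameter settings):
d_base = np.random.rand(36)           # reference-image pixel descriptor
candidates = np.random.rand(50, 36)   # sampled epipolar-line descriptors

scores = np.array([correlation_probability(d_base, c, sigma=0.2)
                   for c in candidates])
probs = scores / scores.sum()         # normalise so probabilities sum to 1
homonymous = int(np.argmax(probs))    # most probable homonymous point
```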
In step 3), the unequal-interval depth sampling algorithm is: from the pixel coordinates of the target point in the reference image, the interior and exterior orientation elements of each image of the stereo pair, the depth range of the ground region covered by the stereo pair, and the sampling interval, obtain the epipolar-line space in the paired image that corresponds to the target point.
In the Daisy algorithm, the keypoint direction is taken only along the direction perpendicular to the corresponding epipolar line.
In the Daisy algorithm, the parameters are set as follows:
descriptor radius R=8, number of subregion layers Q=2, number of subregions per layer T=4, number of gradient directions H=4, number of subregions participating in the computation S=9, and final descriptor dimension Ds=36.
In step 7), the concrete steps of the forced continuity check algorithm are as follows:
701) obtain the N-1 sets of three-dimensional point clouds, and set the tolerance threshold error_threshold and the continuity threshold c_threshold;
702) choose one set as the reference point cloud data set B;
703) choose another set, other than the reference point cloud, as the checking data set C_i;
704) traverse all points of the reference point cloud data set against the checking data set and compute the deviation e of each reference point according to formula (2); when e < error_threshold, increment the continuity count of that point by one;
e = \sqrt{ (x_B - x_{C_i})^2 + (y_B - y_{C_i})^2 + (z_B - z_{C_i})^2 } \qquad (2)
705) repeat steps 703) and 704) until all three-dimensional point cloud sets have been traversed;
706) retain the three-dimensional points whose continuity count is not less than c_threshold, and output the retained points as the final three-dimensional point cloud.
Compared with the prior art, the present invention has the following advantages:
1. The unequal-interval depth sampling algorithm increases the speed of the epipolar-line search.
2. The improved Daisy algorithm is used to compute the correlation between pixels; it offers high reliability and execution efficiency and is well suited to correlation measurement between dense point clouds.
3. Dense point clouds can be extracted with nothing more than an SLR camera; compared with methods that obtain dense point clouds by mounting a laser scanner on a UAV for low-altitude ground scanning, the cost is greatly reduced.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 shows, for the SIFT descriptor, the local pixel matrix after rotation to the principal direction;
In Fig. 2, the circle represents a circular Gaussian filter mask; each small square represents a pixel region; the direction and length of each arrow represent the gradient direction and gradient magnitude of the corresponding pixel;
Fig. 3 shows, for the SIFT descriptor, the weighted gradient accumulation histogram of each subregion in eight different directions;
In Fig. 3, each grid covers a 4×4 region; the arrow direction indicates the gradient direction, and the arrow length indicates the weighted sum of the gradients of all points of the subregion in that direction;
Fig. 4 shows the shape of the Daisy descriptor;
In Fig. 4, each circle represents a subregion in the keypoint neighbourhood, and + marks the centre of the current subregion. The radius of a subregion equals the standard deviation of the Gaussian convolution kernel acting on that region; the subregions around the keypoint can be divided into layers by radius, with subregions of the same layer drawn in the same colour and different layers in different colours. Subregion centres in the same layer are equidistant from the keypoint and have equal radii; the farther a subregion is from the keypoint, the larger its radius and the larger the standard deviation of its Gaussian kernel. The point marked direction-j is the keypoint position.
Embodiment
The present invention is described in detail below with reference to the drawings and a specific embodiment. The embodiment is implemented on the premise of the technical solution of the present invention and gives a detailed implementation and concrete operating process, but the protection scope of the present invention is not limited to the following embodiment.
As shown in Fig. 1, the present embodiment provides a dense point cloud extraction method based on low-altitude, high-resolution UAV images, comprising the following steps:
Step S01, obtain the interior and exterior orientation elements of the UAV image sequence;
Step S02, from the sequence of N UAV images, select one image as the reference image and take the rest as pairing images;
Step S03, pair each of the N-1 pairing images with the reference image to form stereo pairs;
Step S04, choose one stereo pair, traverse all pixels in the reference image and, for each pixel, search the corresponding epipolar-line space in the paired image using the unequal-interval depth sampling algorithm;
Step S05, use the Daisy algorithm to compute the Daisy descriptor of each point in the epipolar-line space and of its corresponding reference-image pixel, and compute the correlation probability distribution between the two points according to formula (1) to obtain the set of homonymous points:
P(d) = \frac{1}{Z} \exp\left( -\frac{\left\| D_{x_i} - D_{x'}(d) \right\|^2}{\sigma} \right) \qquad (1)
where Z is the normalisation coefficient, σ is the variance of the correlation probability distribution, D_{x_i} is the Daisy descriptor of the reference-image pixel, and D_{x'}(d) is the Daisy descriptor of the point at depth d on the corresponding epipolar line of the paired image;
Step S06, use the interior and exterior orientation elements of the corresponding images and the set of homonymous points to extract the dense three-dimensional point cloud of each stereo pair;
Step S07, check whether all stereo pairs have been traversed; if so, go to step S09, otherwise go to step S08;
Step S08, take the next stereo pair and return to step S04;
Step S09, extract the N-1 sets of three-dimensional point clouds and reject outliers with the forced continuity check algorithm;
Step S10, obtain the final dense three-dimensional point cloud.
The key steps of the above method are described in detail below:
1. Unequal-interval depth sampling algorithm
This method searches the epipolar-line space corresponding to a target point with the unequal-interval depth sampling algorithm, which is markedly faster than the traditional epipolar-line search. The algorithm is summarised as follows:
Function: within a stereo pair, given the pixel coordinates x of a point in the left image, the interior and exterior orientation elements of the two images, and the depth range of the target region, generate the epipolar-line space in the right image that corresponds to the point x of the left image.
Overview:
Let the interior orientation elements of the camera corresponding to an image be K, its exterior orientation (rotation) be R, and its centre in the world coordinate system be C. The correspondence between a point in the world coordinate system and its image point is then:
\lambda x = K R (X - C) \qquad (1)
where x represents the image coordinates of the object-space point X, and λ represents the depth of the point relative to the camera plane.
Inverting formula (1) yields the object-space three-dimensional point corresponding to the image point x at any depth λ, as shown in formula (2):
X(\lambda) = \lambda R^{T} K^{-1} x + C \qquad (2)
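As a numerical sanity check on formulas (1) and (2), the sketch below (with purely illustrative K, R and C values) projects an object-space point and then recovers it from its image point and depth:

```python
import numpy as np

K = np.array([[1000.0, 0.0, 320.0],   # interior orientation (illustrative)
              [0.0, 1000.0, 240.0],
              [0.0,    0.0,   1.0]])
R = np.eye(3)                         # camera aligned with the world frame
C = np.array([0.0, 0.0, -50.0])       # camera centre in world coordinates
X = np.array([2.0, 3.0, 10.0])        # an object-space point

# Formula (1): lambda * x = K R (X - C); x is homogeneous, lambda the depth.
v = K @ R @ (X - C)
lam, x = v[2], v / v[2]

# Formula (2): X(lambda) = lambda * R^T K^{-1} x + C recovers the point.
X_back = lam * (R.T @ np.linalg.inv(K) @ x) + C
assert np.allclose(X, X_back)
```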
Now suppose two cameras erected at different positions image the object point X, with parameters (K_0, R_0, C_0) and (K_1, R_1, C_1), where K denotes the interior orientation elements of a camera, R its rotation matrix, and C its centre in the world coordinate system. The image point of X in the left image is x_l. To obtain equally spaced points of the corresponding epipolar line in the right image within a given range, the left photographic ray must be sampled at unequal intervals. Suppose a given sample has left depth interval dλ, epipolar-line spacing r, and right depth interval dω. Combining this with formula (2) gives formula (3):
(\omega + d\omega) \begin{bmatrix} u + r\,du \\ v + r\,dv \\ 1 \end{bmatrix} = (\lambda + d\lambda)\, a + b \qquad (3)

where ω represents the depth of the object-space sample point from the right camera; [u v 1]^T are the image coordinates of the object-space sample point in the right image; and

a = [a_0\ a_1\ a_2]^{T} = K_1 R_1 R_0^{T} K_0^{-1} x, \qquad b = [b_0\ b_1\ b_2]^{T} = K_1 R_1 (C_0 - C_1).
If the depth range of the object-space point is [λ_min, λ_max], its start and end coordinates along the right epipolar line are (u_min, v_min) and (u_max, v_max), and the per-axis components of the epipolar sampling interval are du and dv, then relation (4) follows:
dl = \sqrt{(u_{max} - u_{min})^2 + (v_{max} - v_{min})^2}
du = (u_{max} - u_{min})/dl \qquad (4)
dv = (v_{max} - v_{min})/dl
Combining formulas (3) and (4) gives (5):

a\, d\lambda = \omega \begin{bmatrix} r\,du \\ r\,dv \\ 0 \end{bmatrix} + d\omega \begin{bmatrix} u + r\,du \\ v + r\,dv \\ 1 \end{bmatrix} \qquad (5)

Rearranging (5) gives (6):

d\lambda = \frac{\omega\, r\, du}{a_0 - a_2 (u + r\,du)} = \frac{\omega\, r\, dv}{a_1 - a_2 (v + r\,dv)} \qquad (6)

The epipolar-line space in the right image can then be sampled at unequal intervals according to formula (6).
Input:
1. the interior and exterior orientation elements (K_0, R_0, C_0) and (K_1, R_1, C_1) of the two images of the stereo pair;
2. the pixel coordinates L(x, y) of the target point in the left image;
3. the depth range (λ_min, λ_max) of the ground region covered by the stereo pair;
4. the sampling interval r.
Output: the epipolar-line space in the right image corresponding to the target point in the left image.
The pseudocode of the unequal-interval depth sampling algorithm is shown in Table 1:
Table 1
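Since the content of Table 1 does not survive in this text, the sketch below reconstructs the algorithm directly from formulas (3)-(6) above; the function name, the termination guard and the example camera values are the editor's assumptions.

```python
import numpy as np

def epipolar_depth_samples(K0, R0, C0, K1, R1, C1, x_l, lam_min, lam_max, r):
    """Sample the right-image epipolar line at a fixed pixel spacing r by
    taking unequal depth steps d_lambda in the left image, per (3)-(6)."""
    x = np.array([x_l[0], x_l[1], 1.0])
    a = K1 @ R1 @ R0.T @ np.linalg.inv(K0) @ x   # a = K1 R1 R0^T K0^-1 x
    b = K1 @ R1 @ (C0 - C1)                      # b = K1 R1 (C0 - C1)

    def project(lam):                # omega * [u, v, 1]^T = lam * a + b
        p = lam * a + b
        return p[0] / p[2], p[1] / p[2], p[2]

    # Epipolar direction components (du, dv) from the endpoints, formula (4).
    u0, v0, _ = project(lam_min)
    u1, v1, _ = project(lam_max)
    dl = np.hypot(u1 - u0, v1 - v0)
    du, dv = (u1 - u0) / dl, (v1 - v0) / dl

    lam, samples = lam_min, []
    while lam <= lam_max:
        u, v, omega = project(lam)
        samples.append((u, v))
        # Formula (6): the depth step that advances exactly r pixels.
        d_lam = omega * r * du / (a[0] - a[2] * (u + r * du))
        if not d_lam > 0:            # degenerate-geometry guard
            break
        lam += d_lam
    return samples

# Example: two parallel cameras 5 units apart (illustrative values).
K = np.array([[1000.0, 0.0, 320.0], [0.0, 1000.0, 240.0], [0.0, 0.0, 1.0]])
pts = epipolar_depth_samples(K, np.eye(3), np.zeros(3),
                             K, np.eye(3), np.array([5.0, 0.0, 0.0]),
                             x_l=(320.0, 240.0),
                             lam_min=40.0, lam_max=60.0, r=1.0)
```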
2. The improved Daisy algorithm
This method uses the improved Daisy algorithm to compute the correlation between pixels. Compared with other matching measures it offers higher reliability and execution efficiency and is well suited to correlation measurement between dense point clouds. A brief introduction to the improved Daisy operator follows:
● Overview of the Daisy algorithm
The Daisy descriptor evolved from the SIFT and GLOH descriptors; since GLOH is itself closely related to SIFT, Daisy can be regarded as an improvement of SIFT. Daisy's greatest advantage over SIFT is computation speed, which stems from its modified strategy for describing a local image region and from dropping two key steps: the construction of a scale space and the estimation of the keypoint principal direction. It is precisely these two steps that give the SIFT descriptor its invariance to rotation, scale and illumination, so in theory the Daisy operator possesses neither scale invariance nor rotational invariance. Because Daisy derives from SIFT, the principle of the SIFT descriptor is introduced first, and the corresponding improvement strategy of the Daisy descriptor is then derived.
● Summary of the SIFT descriptor principle
The defining feature of the SIFT descriptor is its invariance to scale and rotation. The original author, David G. Lowe, achieves scale invariance by constructing an image scale space; rotational invariance by computing a gradient histogram over a neighbourhood of the keypoint; and robustness to illumination change through the difference-of-Gaussians (DoG) images in the pyramid structure and the normalisation of the final description vector. Only when the 128-dimensional vector of a keypoint is computed on the image of the right scale and in the right direction are the two invariances mentioned above actually realised. Since the Daisy operator involves neither the image scale nor the principal direction of a point position, this section mainly introduces the generation of the 128-dimensional SIFT vector.
For a given keypoint in the image, locate its position in the image of the corresponding scale and open a local image window centred on the current point with its principal direction pointing due north; all pixel values in this window are obtained by bilinear interpolation of the raw image, as shown in Fig. 2, where the centre of the grid is the keypoint to be described.
The keypoint neighbourhood is divided into 2×2 subregions, each containing 4×4 pixels. The gradient of every pixel is computed in eight different directions, so each subregion yields eight corresponding gradient maps. Each subregion's eight gradient maps are smoothed with a Gaussian filter whose standard deviation is 1.5 times the current image blur level (this is equivalent to weighting the gradient values of the pixels in the subregion, the weight of each position being inversely related to its distance from the subregion centre), producing eight convolved gradient maps. Summing all weighted gradient values of each convolved gradient map gives the description values of the subregion in eight different directions, as shown in Fig. 3. In this illustration there are 4 subregions in total, each with description values in 8 directions; that is, a keypoint has 2×2×8 = 32 description values. In fact, the SIFT operator divides the surrounding neighbourhood of a keypoint into 4×4 subregions, finally producing a 4×4×8 = 128-dimensional description. To give the descriptor a certain robustness to illumination change, the 128-dimensional vector is also normalised, yielding the well-known 128-dimensional SIFT feature vector.
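The 128-dimensional vector generation just described condenses into the following sketch; it assumes a 16×16 patch already rotated to the principal direction and interpolated, and omits scale-space construction, trilinear bin interpolation and SIFT's second normalisation pass.

```python
import numpy as np

def sift_like_descriptor(patch):
    """4x4 subregions x 8 orientation bins = 128-D vector (illustrative)."""
    assert patch.shape == (16, 16)
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)   # gradient direction [0, 2pi)

    # Gaussian weighting centred on the keypoint (sigma = half window width).
    yy, xx = np.mgrid[0:16, 0:16] - 7.5
    mag = mag * np.exp(-(xx ** 2 + yy ** 2) / (2 * 8.0 ** 2))

    hist = np.zeros((4, 4, 8))                    # subregion row, col, bin
    bins = (ang / (2 * np.pi) * 8).astype(int) % 8
    for i in range(16):
        for j in range(16):
            hist[i // 4, j // 4, bins[i, j]] += mag[i, j]

    vec = hist.ravel()                            # 4 * 4 * 8 = 128 dimensions
    return vec / (np.linalg.norm(vec) + 1e-12)    # illumination normalisation

assert sift_like_descriptor(np.random.rand(16, 16)).shape == (128,)
```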
● Summary of the Daisy descriptor principle
Like SIFT, the Daisy descriptor divides the neighbourhood around the keypoint into subregions and uses the weighted gradient sums of each subregion in a number of different directions as description values; the differences lie in how the subregions are divided, how the description values are computed, and how the dimensionality of the final feature vector is set (the final dimension of a Daisy descriptor depends entirely on the initial parameters). Fig. 4 illustrates the general shape of the Daisy descriptor, and Table 2 lists the parameters of the Daisy descriptor and their meanings.
Table 2: Daisy operator parameters
R — radius of the descriptor (distance from the keypoint to the outermost subregion centres)
Q — number of subregion layers (rings) around the keypoint
T — number of subregions per layer
H — number of gradient directions
S — number of subregions participating in the computation (S = Q·T + 1, including the centre)
Ds — final dimension of the descriptor (Ds = S·H)
● Daisy algorithm flow
Let P denote the keypoint currently to be described, and open a circular window of radius R pixels centred on P. At an angular interval of 2π/H, compute the gradient value of every pixel in the window in each of the H directions and assign the gradient value to the current pixel, generating the gradient map G_h in that direction, computed as:
G_h = \left( \frac{\partial I}{\partial h} \right)^{+} \qquad (2)
where I represents the raw image, ∂I/∂h denotes the gradient of each image point in direction h, and (a)^+ = max(a, 0); each window thus yields H corresponding gradient maps.
Determining the subregion positions amounts to determining the subregion centre positions. Formula (3) takes the keypoint as origin and uses polar coordinates to compute the centre positions of the different subregions:
r_i = \frac{R\,(i+1)}{Q}, \qquad \theta_j = \frac{2\pi j}{T} \qquad (3)
where i denotes the layer index of the subregion; r_i denotes the distance from the centre of the current subregion to the keypoint; θ_j denotes the angle between the horizontal direction and the line joining the subregion centre to the keypoint, with 0 ≤ j < T, j ∈ ℕ; R, Q, T are Daisy operator parameters (see Table 2 for details).
The radius of each subregion equals the standard deviation of the Gaussian filter acting on that subregion, and its value is computed by formula (4):
\sigma_i = \frac{R\,(i+1)}{2Q} \qquad (4)
where i denotes the layer index of the current subregion; R and Q are Daisy operator parameters; σ_i denotes the radius of the current subregion.
Each subregion is smoothed with a Gaussian filter whose standard deviation equals the subregion radius, i.e. all gradients in the subregion are weighted. The weighted gradients within the same subregion are summed to obtain the statistic M. Since there are H gradient maps in total and each gradient map contains S subregions, the final feature vector has H×S dimensions.
The normalisation adopted in the original Daisy algorithm normalises the statistics of the eight directions of each subregion as one group and then assembles the per-group normalised description values into the feature vector; the main purpose is to ensure, as far as possible, the correct description of point positions near shadowed regions of the image. In practical applications the normalisation strategy should be adjusted to actual needs.
● Improvements of the Daisy algorithm in this method
The improvements to the Daisy operator in this method are reflected in the following four aspects:
1. Selection of the image scale
Because the Daisy operator does not describe a point position on the basis of a scale space, it has no scale invariance of its own. To minimise the effect of image scale change on the final description, the scale difference between the images must be assessed in advance, and the Daisy descriptors are computed on the images whose scale difference is smaller.
2. Determination of the keypoint direction
As noted above, the computation of the Daisy descriptor involves no estimation of the principal direction of a point position, so the plain Daisy operator possesses no rotational invariance either. To give the Daisy descriptor this property, the principal direction of the keypoint would have to be obtained and the local window rotated to it before recomputing the feature vector. In this method the feature vector only needs to be computed along the direction perpendicular to the epipolar line.
3. Adjustment of the descriptor normalisation strategy
In this method, to increase the stability of the descriptor, the descriptor is normalised globally rather than with the per-subregion local normalisation strategy of the original algorithm.
4. Adjustment of the parameters
In this method the parameters are set to R=8, Q=2, T=4, H=4, S=9, Ds=36; that is, the final descriptor has 36 dimensions.
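Combining the flow of formulas (2)-(4) with the four adjustments above, a simplified Daisy computation might look like the sketch below. The text does not specify how the centre subregion is smoothed, so using the first-ring σ for it is an assumption; the epipolar-perpendicular orientation handling and the prior scale assessment are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

R, Q, T, H = 8, 2, 4, 4                           # parameters of this method

def daisy_descriptor(image, keypoint):
    """Simplified Daisy: S = Q*T + 1 = 9 sample centres x H = 4 directions,
    globally normalised to Ds = 36 dimensions."""
    gy, gx = np.gradient(image.astype(float))

    # Formula (2): half-rectified orientation maps G_h = (dI/dh)^+.
    maps = []
    for h in range(H):
        theta = 2 * np.pi * h / H
        maps.append(np.maximum(gx * np.cos(theta) + gy * np.sin(theta), 0.0))

    # Formula (4): one smoothed stack per layer, sigma_i = R*(i+1)/(2Q).
    smoothed = [[gaussian_filter(m, R * (i + 1) / (2.0 * Q)) for m in maps]
                for i in range(Q)]

    y0, x0 = keypoint
    samples = [[g[y0, x0] for g in smoothed[0]]]  # centre (sigma_1: assumption)
    for i in range(Q):
        r_i = R * (i + 1) / Q                     # formula (3): ring radius
        for j in range(T):
            theta = 2 * np.pi * j / T             # formula (3): ring angle
            y = int(round(y0 + r_i * np.sin(theta)))
            x = int(round(x0 + r_i * np.cos(theta)))
            samples.append([g[y, x] for g in smoothed[i]])

    vec = np.asarray(samples).ravel()             # 9 * 4 = 36 dimensions
    return vec / (np.linalg.norm(vec) + 1e-12)    # global normalisation

assert daisy_descriptor(np.random.rand(64, 64), (32, 32)).shape == (36,)
```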
3. Forced continuity check algorithm
This method uses the forced continuity check algorithm to eliminate outliers and obtain the final dense three-dimensional point cloud. The algorithm is as follows:
Function: given several sets of three-dimensional point cloud data corresponding to the same ground area, take one set as the reference and reject outliers.
Input: several sets of three-dimensional point cloud data corresponding to the same area.
Output: the dense three-dimensional point cloud data after gross errors have been removed.
Steps:
1) input the point cloud data, and set the tolerance threshold error_threshold and the continuity threshold c_threshold.
2) choose one set as the reference point cloud data set B.
3) choose another set, other than the reference point cloud, as the checking data set C_i.
4) traverse all points of the reference data set against the checking data set and compute the deviation e of each point according to formula (5); when e < error_threshold, increment the continuity count of that point by one.
e = \sqrt{ (x_B - x_{C_i})^2 + (y_B - y_{C_i})^2 + (z_B - z_{C_i})^2 } \qquad (5)
5) repeat steps 3)-4) until all three-dimensional point cloud sets have been traversed.
6) finally retain the three-dimensional points whose continuity count is not less than c_threshold, and output the retained points as the final three-dimensional point cloud.
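A minimal sketch of steps 1)-6) follows. The text leaves the point-correspondence search unspecified, so the nearest-neighbour query via a KD-tree is an assumption, as are the example thresholds and synthetic data.

```python
import numpy as np
from scipy.spatial import cKDTree

def forced_continuity_check(clouds, error_threshold, c_threshold):
    """Keep the reference-cloud points whose deviation e (formula (5)) to
    some point of at least c_threshold checking clouds is below the
    tolerance; everything else is rejected as an outlier."""
    base = np.asarray(clouds[0])                  # reference cloud B
    counts = np.zeros(len(base), dtype=int)
    for check in clouds[1:]:                      # checking clouds C_i
        e, _ = cKDTree(np.asarray(check)).query(base)  # nearest-point e
        counts[e < error_threshold] += 1
    return base[counts >= c_threshold]

# Example: two consistent noisy copies of a synthetic cloud plus one outlier.
rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 10.0, size=(500, 3))
cloud[0] = (99.0, 99.0, 99.0)                     # an obvious outlier
clouds = [cloud,
          cloud[1:] + rng.normal(0.0, 0.01, (499, 3)),
          cloud[1:] + rng.normal(0.0, 0.01, (499, 3))]
kept = forced_continuity_check(clouds, error_threshold=0.1, c_threshold=2)
assert not (kept == cloud[0]).all(axis=1).any()   # the outlier is rejected
```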

Claims (5)

1. A dense point cloud extraction method based on low-altitude, high-resolution UAV images, characterised by comprising the following steps:
1) from a sequence of N UAV images, select one image as the reference image and take the remaining images as pairing images;
2) pair each of the N-1 pairing images with the reference image to form stereo pairs;
3) taking one stereo pair as the unit, traverse all pixels in the reference image and, for each pixel, search the corresponding epipolar-line space in the paired image using the unequal-interval depth sampling algorithm;
4) use the Daisy algorithm to compute the Daisy descriptor of each point in the epipolar-line space and of its corresponding reference-image pixel, and compute the correlation probability distribution between the two points according to formula (1) to obtain the set of homonymous points:
P(d) = \frac{1}{Z} \exp\left( -\frac{\left\| D_{base} - D_{match} \right\|^2}{\sigma} \right) \qquad (1)
where Z is the normalisation coefficient, σ is the variance of the correlation probability distribution, D_base is the Daisy descriptor of the reference-image pixel, and D_match is the Daisy descriptor of a point on the corresponding epipolar line of the paired image;
5) use the interior and exterior orientation elements of the corresponding images and the set of homonymous points to extract the dense three-dimensional point cloud of each stereo pair;
6) traverse the pairing images, repeating steps 3)-5), to extract N-1 sets of three-dimensional point clouds;
7) reject outliers with the forced continuity check algorithm to obtain the final dense three-dimensional point cloud.
2. The dense point cloud extraction method based on low-altitude, high-resolution UAV images according to claim 1, characterised in that in step 3) the unequal-interval depth sampling algorithm is: from the pixel coordinates of the target point in the reference image, the interior and exterior orientation elements of each image of the stereo pair, the depth range of the ground region covered by the stereo pair, and the sampling interval, obtain the epipolar-line space in the paired image that corresponds to the target point.
3. The dense point cloud extraction method based on low-altitude, high-resolution UAV images according to claim 1, characterised in that in the Daisy algorithm the keypoint direction is taken only along the direction perpendicular to the corresponding epipolar line.
4. The dense point cloud extraction method based on low-altitude, high-resolution UAV images according to claim 1, characterised in that in the Daisy algorithm the parameters are set as follows:
descriptor radius R=8, number of subregion layers Q=2, number of subregions per layer T=4, number of gradient directions H=4, number of subregions participating in the computation S=9, and final descriptor dimension Ds=36.
5. The dense point cloud extraction method based on low-altitude, high-resolution UAV images according to claim 1, characterised in that in step 7) the concrete steps of the forced continuity check algorithm are as follows:
701) obtain the N-1 sets of three-dimensional point clouds, and set the tolerance threshold error_threshold and the continuity threshold c_threshold;
702) choose one set as the reference point cloud data set B;
703) choose another set, other than the reference point cloud, as the checking data set C_i;
704) traverse all points of the reference point cloud data set against the checking data set and compute the deviation e of each reference point according to formula (2); when e < error_threshold, increment the continuity count of that point by one;
e = \sqrt{ (x_B - x_{C_i})^2 + (y_B - y_{C_i})^2 + (z_B - z_{C_i})^2 } \qquad (2)
705) repeat steps 703) and 704) until all three-dimensional point cloud sets have been traversed;
706) retain the three-dimensional points whose continuity count is not less than c_threshold, and output the retained points as the final three-dimensional point cloud.
CN201510098230.0A 2015-03-05 2015-03-05 Dense point cloud extraction method based on low-altitude high-resolution UAV images Expired - Fee Related CN104751451B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510098230.0A CN104751451B (en) 2015-03-05 2015-03-05 Dense point cloud extraction method based on low-altitude high-resolution UAV images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510098230.0A CN104751451B (en) 2015-03-05 2015-03-05 Dense point cloud extraction method based on low-altitude high-resolution UAV images

Publications (2)

Publication Number Publication Date
CN104751451A true CN104751451A (en) 2015-07-01
CN104751451B CN104751451B (en) 2017-07-28

Family

ID=53591070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510098230.0A Expired - Fee Related CN104751451B (en) 2015-03-05 2015-03-05 Dense point cloud extraction method based on low-altitude high-resolution UAV images

Country Status (1)

Country Link
CN (1) CN104751451B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100648882B1 (en) * 2005-09-28 2006-11-27 강민영 Apparatus and method for calculating inertia value in navigation of unmanned aerial vehicle
US20090021423A1 (en) * 2007-07-19 2009-01-22 Cheng Shirley N Method and apparatus for three dimensional tomographic image reconstruction of objects
CN101424530A (en) * 2008-12-09 2009-05-06 武汉大学 Method for generating approximate kernel line of satellite stereo image pairs based on projection reference surface
CN103426165A (en) * 2013-06-28 2013-12-04 吴立新 Precise registration method of ground laser-point clouds and unmanned aerial vehicle image reconstruction point clouds
CN103744086A (en) * 2013-12-23 2014-04-23 北京建筑大学 High-precision registration method for ground laser radar and close-range photography measurement data
CN104197898A (en) * 2014-09-15 2014-12-10 中国人民解放军总参谋部测绘研究所 Epipolar ray image generating method of linear array satellite remote sensing image based on projection track method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
周婷: "Research on Feature Matching Method for UAV Aerial Refueling Based on Binocular Vision", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069843A (en) * 2015-08-22 2015-11-18 浙江中测新图地理信息技术有限公司 Rapid extraction method for dense point cloud oriented toward city three-dimensional modeling
CN105701478A (en) * 2016-02-24 2016-06-22 腾讯科技(深圳)有限公司 Method and device for extraction of rod-shaped ground object
CN105701478B (en) * 2016-02-24 2019-03-26 腾讯科技(深圳)有限公司 The method and apparatus of rod-shaped Objects extraction
WO2020206903A1 (en) * 2019-04-08 2020-10-15 平安科技(深圳)有限公司 Image matching method and device, and computer readable storage medium
WO2021185322A1 (en) * 2020-03-18 2021-09-23 广州极飞科技有限公司 Image processing method and related device
CN117152040A (en) * 2023-10-26 2023-12-01 埃洛克航空科技(北京)有限公司 Point cloud fusion method and device based on depth map
CN117152040B (en) * 2023-10-26 2024-02-23 埃洛克航空科技(北京)有限公司 Point cloud fusion method and device based on depth map

Also Published As

Publication number Publication date
CN104751451B (en) 2017-07-28

Similar Documents

Publication Publication Date Title
CN103954283B (en) Inertia integrated navigation method based on scene matching aided navigation/vision mileage
Yahyanejad et al. A fast and mobile system for registration of low-altitude visual and thermal aerial images using multiple small-scale UAVs
CN103411609B (en) A kind of aircraft return route planing method based on online composition
CN104200523B (en) A kind of large scene three-dimensional rebuilding method for merging additional information
CN102435188B (en) Monocular vision/inertia autonomous navigation method for indoor environment
CN111462329A (en) Three-dimensional reconstruction method of unmanned aerial vehicle aerial image based on deep learning
CN104751451A (en) Dense point cloud extracting method of low-altitude high resolution image based on UAV (Unmanned Aerial Vehicle)
CN112767490B (en) Outdoor three-dimensional synchronous positioning and mapping method based on laser radar
CN106595659A (en) Map merging method of unmanned aerial vehicle visual SLAM under city complex environment
CN105096386A (en) Method for automatically generating geographic maps for large-range complex urban environment
CN109492580B (en) Multi-size aerial image positioning method based on neighborhood significance reference of full convolution network
CN104867126A (en) Method for registering synthetic aperture radar image with change area based on point pair constraint and Delaunay
CN104484668A (en) Unmanned aerial vehicle multi-overlapped-remote-sensing-image method for extracting building contour line
CN111862673B (en) Parking lot vehicle self-positioning and map construction method based on top view
CN102426019A (en) Unmanned aerial vehicle scene matching auxiliary navigation method and system
CN108917753B (en) Aircraft position determination method based on motion recovery structure
Ma Building model reconstruction from LiDAR data and aerial photographs
CN105844587A (en) Low-altitude unmanned aerial vehicle-borne hyperspectral remote-sensing-image automatic splicing method
CN105550994A (en) Satellite image based unmanned aerial vehicle image rapid and approximate splicing method
Qian et al. Robust visual-lidar simultaneous localization and mapping system for UAV
CN111242000A (en) Road edge detection method combining laser point cloud steering
CN106096621A (en) Based on vector constraint fall position detection random character point choosing method
CN116222577B (en) Closed loop detection method, training method, system, electronic equipment and storage medium
Sakai et al. Large-scale 3D outdoor mapping and on-line localization using 3D-2D matching
Baker et al. Using shorelines for autonomous air vehicle guidance

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170728

Termination date: 20200305