CN1312633C - Automatic registration method for large-scale three-dimensional scene multi-view laser scanning data - Google Patents

Automatic registration method for large-scale three-dimensional scene multi-view laser scanning data

Info

Publication number
CN1312633C
CN1312633C CNB2004100311577A CN200410031157A
Authority
CN
China
Prior art keywords
point
registration
viewpoint
data
laser
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2004100311577A
Other languages
Chinese (zh)
Other versions
CN1684105A (en)
Inventor
张爱武 (Zhang Aiwu)
孙卫东 (Sun Weidong)
胡少兴 (Hu Shaoxing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CNB2004100311577A priority Critical patent/CN1312633C/en
Publication of CN1684105A publication Critical patent/CN1684105A/en
Application granted granted Critical
Publication of CN1312633C publication Critical patent/CN1312633C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to an automatic registration method for large-scale three-dimensional scene multi-viewpoint laser scanning data, comprising the following steps: data acquisition — a laser range finder is placed in front of the measured scene and adjusted so that its Z axis is perpendicular to the ground and its X and Y axes are parallel to the ground; the scene is scanned row by row and column by column to obtain the three-dimensional data of each viewpoint, the scan data of adjacent viewpoints keeping a 10-20% overlap; extraction of the main structure of the measured object and feature extraction from its data; computation of virtual features and construction of feature units; coarse registration, fine registration and global registration; and finally establishment of a registration model. The method solves the problem of automatic registration of large-scale three-dimensional scene multi-viewpoint laser scanning data without any external cooperating equipment or manual participation. It has general applicability, suiting both scenes of modern structure and complex large-scale cultural relics and ancient buildings.

Description

Automatic registration method for large-scale three-dimensional scene multi-viewpoint laser scanning data
Technical field
The invention belongs to the field of three-dimensional data registration in computer information processing, in particular to an automatic registration method for large-scale three-dimensional scene multi-viewpoint laser scanning data.
Background technology
Three-dimensional data registration is a key technique of digitization and reverse engineering. In particular, the appearance of medium- and long-range three-dimensional laser scanning has simplified the three-dimensional data acquisition process and promoted the development of related fields such as the digital city, digital navigation, virtual reality, city planning and digital protection of cultural relics, so ground-based 3D laser scanning data enjoy wide attention. To obtain a complete geometric description of a large-scale three-dimensional scene, laser scanning data must be collected at tens or even hundreds of viewpoint positions, so there exists the problem of registering the laser scanning data of the different viewpoints into the same coordinate system.
Research on traditional multi-viewpoint three-dimensional data registration mostly addresses single objects. The most representative method is the iterative closest point (ICP) algorithm, which searches for corresponding points between the laser scanning point sets of two viewpoints by iteration and computes the positional relation of the two viewpoints. Its shortcomings are that an initial transformation estimate must be set manually, it easily falls into local minima, and the iteration takes long, so it is unsuitable for large-scale registration. Two kinds of methods are commonly used for large-scale scenes: the target control-point method and purchased-equipment methods such as GPS. The target control-point method places targets in the overlapping region of adjacent viewpoints and exploits the special reflection characteristic of the targets under laser to find the target control points automatically; as long as three or more control points of the same name appear in the laser scanning data of adjacent viewpoints, the coordinate transformation between the adjacent viewpoints can be obtained. However, if the scene contains material with the same reflection characteristic as the targets, targets may be misrecognized and registration fails; the target control-point method also needs much manual intervention, and some environments do not allow targets to be placed at all. Purchased-equipment methods such as GPS mainly obtain the pose parameters of the laser scanner at each viewpoint through purchased equipment; not only is the equipment expensive, but the pose parameters are not accurate, so the registration result deviates from the actual situation. A method is therefore urgently needed that, without any purchased equipment, directly and seamlessly merges the 3D laser scanning data of each viewpoint into the same coordinate system automatically. Analysis by literature search finds that:
In the 2003 article "Automated Feature-Based Range Registration of Urban Scenes of Large Scale", I. Stamos proposed using feature straight lines to realize the automatic registration of adjacent-viewpoint three-dimensional laser data, claiming that linear features reduce geometric complexity and suit large-scale registration. The shortcoming of this method is that adopting only linear features lacks universality: it usually fails for three-dimensional scenes with few linear features, such as large-scale cultural relics and ancient buildings, and it fails to solve the registration problem when occlusion and other practical factors leave the overlapping region short of effective linear features.
The above methods aim at the registration of the three-dimensional data of two overlapping viewpoints. For tens or even hundreds of viewpoints, an effective registration strategy must be adopted so that all the three-dimensional data can be merged seamlessly into the same coordinate system. Two registration strategies are common: sequential registration and simultaneous registration. Sequential registration registers one pair of viewpoints, then begins registering another pair, which must include one viewpoint of the previous pair, and proceeds link by link until all viewpoints are traversed. The advantage of the sequential strategy is that only one pair of viewpoints participates in registration at a time, so it occupies little memory; the disadvantage is a large accumulated error — in particular, for closed-loop scenes a gap appears between the first and last viewpoints. Simultaneous registration registers all viewpoints at the same time; no accumulated error appears and the registration accuracy is high, but the computation is large and demands much of the computing equipment. Neither sequential nor simultaneous registration suits large-scale scenes.
Summary of the invention
The purpose of the invention is to provide a registration method that, without any external equipment, automatically and seamlessly merges large-scale three-dimensional scene multi-viewpoint laser scanning data into the same coordinate system.
Technical scheme of the present invention is as follows:
The automatic registration method for large-scale three-dimensional scene multi-viewpoint laser scanning data provided by the invention comprises the following steps:
1) Data acquisition
A large-scale scene, especially an outdoor scene, contains not only the main building structure but also occluders such as trees, pedestrians and vehicles between the three-dimensional laser scanner and the main building structure, and laser passing through transparent objects forms random points that lie behind the main building structure. For measurement, the laser range finder is placed in front of the measured scene and adjusted so that its Z axis is perpendicular to the ground and its X and Y axes are parallel to the ground. The measured scene is scanned row by row to obtain the three-dimensional data of each viewpoint; the scan data of adjacent viewpoints keep a 10%-20% overlap.
2) Extraction of the main building structure
Import the laser three-dimensional scanning data corresponding to each viewpoint and compute the horizontal distance from each sampled point on each vertical scan line to the laser range finder; using these horizontal distances, detect vertical line segments with the Hough transform. Measure the length of each vertical segment by the number of points it contains, take the horizontal distance corresponding to the longest vertical segment as the reference, and keep as the main building structure the data points within δ_1 metres in front of the reference and δ_2 metres behind it; here δ_1 is 1 metre and δ_2 is 0.5 metre.
Because the laser range finder samples the scene line by line, the points of one vertical scan line that belong to the same vertical plane share the same horizontal distance to the laser range finder, and the main building structure is generally perpendicular to the ground. The invention therefore analyses the vertical scan lines one by one and, from the statistical distribution of the horizontal distances of the sampled points to the laser range finder, determines the range of the main building structure and extracts it.
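The statistical determination of the main structure described above can be sketched per scan line. This is a minimal illustration, assuming a simple histogram of horizontal distances in place of the patent's Hough-transform detection of vertical segments; the function name, bin width and example values are not from the patent.

```python
from collections import Counter

def extract_main_structure(scanline, delta1=1.0, delta2=0.5, bin_width=0.1):
    """Keep the points of one vertical scan line that belong to the
    dominant vertical plane (the main building structure).

    scanline  -- horizontal distances (metres) from the range finder to
                 each sampled point on the line.
    The most populated histogram bin stands in for the longest vertical
    segment that the patent detects with the Hough transform.
    """
    bins = Counter(round(d / bin_width) for d in scanline)
    ref_bin, _ = bins.most_common(1)[0]
    d_ref = ref_bin * bin_width        # reference horizontal distance
    # keep points delta1 m in front of and delta2 m behind the reference
    return [d for d in scanline if d_ref - delta1 <= d <= d_ref + delta2]
```

With the embodiment's values δ_1 = 1.0 m and δ_2 = 0.5 m, a wall at 10 m keeps everything between 9.0 m and 10.5 m, dropping foreground occluders and the random points behind the wall.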
3) Feature extraction on the retained data points of the main building structure
Three-dimensional features reflect the intrinsic structure of the scene, reduce the geometric complexity of the scene, and are effective primitives for multi-viewpoint three-dimensional data registration. The invention first applies weighted local curve fitting to the horizontal and vertical scan lines and computes the first differential at each point of a scan line; it then detects scene edge points from the differential signs of the two neighbouring points on either side of a point and from the point's distances to those two neighbours; finally it forms different boundary chains with the chain-code tracking technique and fits the points of each boundary chain into a group of end-to-end line segments, called feature line segments. The intersection of adjacent feature line segments is a feature point.
A) First differential of each point on a scan line
For each sampled point on every scan line, perform the local quadratic fit of equation (1) over the neighbourhood of K points on either side of the point along the scan line, i.e. the fitting interval [-K, K]; K is 3-5.
\hat{d}(i) = a_0 + a_1 i + a_2 i^2    (1)
where i is the index of the sampled point on the scan line, i = 0, 1, ..., n; \hat{d}(i) is the fitted horizontal distance from the i-th point to the laser range finder; and a_0, a_1, a_2 are the fitting coefficients of equation (1);
An adaptive weighted fitting method is used to obtain the fitting coefficients a_0, a_1, a_2; the weighted fitting error function is

E'(i) = \sum w(i) \{ d(i) - \hat{d}(i) \}^2    (2)
where d(i) is the measured horizontal distance from the i-th point to the laser range finder, and w(i) is the weight of the i-th point on the scan line, representing the reliability of the point. An iterative weight-correction scheme progressively eliminates the influence of outlying points, so that the fit is carried out only on reliable points. The concrete method is as follows: suppose the coefficients a_0^m, a_1^m, a_2^m of the m-th iteration are known, and the coefficients a_0^{m+1}, a_1^{m+1}, a_2^{m+1} of the (m+1)-th iteration are sought. First compute the fitting residual of the m-th iteration and the weights derived from it:
e_m(i) = \left( d(i) - \hat{d}_m(i) \right)^2    (3)

w_{m+1}(i) = \begin{cases} 1, & e_m(i) \le b_s \\ b_s / e_m(i), & e_m(i) > b_s \end{cases}    (4)
where b_s is the allowed error threshold.
With the initial weights w_0(i) = 1, the fitting coefficients and weights are revised repeatedly with equations (3) and (4) until the fitted curve satisfies the error requirement and covers the largest number of points; the points participating in the fit are marked.
Then the first differential of equation (1) is taken, giving the first differential at every marked point on the scan line:
\partial \hat{d}(i) / \partial i = a_1 + 2 a_2 i    (5)
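The adaptive weighted fitting of equations (1)-(5) might be sketched as follows. The function names, the numpy weighted least squares, the default threshold b_s and the fixed iteration count are assumptions of this sketch, not values given by the patent.

```python
import numpy as np

def weighted_quadratic_fit(d, b_s=0.01, n_iter=10):
    """Adaptive weighted quadratic fit of one scan line, eqs. (1)-(4).

    d   -- measured horizontal distances d(i) along the scan line
    b_s -- error tolerance threshold of eq. (4)
    Returns the coefficients (a0, a1, a2) and the final per-point
    weights; points keeping a weight near 1 are the marked inliers.
    """
    d = np.asarray(d, dtype=float)
    i = np.arange(len(d), dtype=float)
    A = np.stack([np.ones_like(i), i, i ** 2], axis=1)  # design of eq. (1)
    w = np.ones_like(i)                                 # w0(i) = 1
    for _ in range(n_iter):
        sw = np.sqrt(w)
        # weighted least squares: minimise eq. (2)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], d * sw, rcond=None)
        e = (d - A @ coef) ** 2                         # residual, eq. (3)
        w = np.where(e <= b_s, 1.0, b_s / np.maximum(e, b_s))  # eq. (4)
    return coef, w

def first_differential(coef, i):
    """First differential of the fitted curve, eq. (5)."""
    a0, a1, a2 = coef
    return a1 + 2.0 * a2 * i
```

An outlier inflates its residual e_m(i), so its weight shrinks to b_s / e_m(i) and the next fit all but ignores it.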
B) Edge point detection
For each marked point on a scan line, examine the signs of the first differentials of its two marked neighbouring points along the scan line; a point whose neighbours have opposite signs is a normal-direction discontinuity and is labelled 0.
For each marked point on a scan line, also compute its distances to the two marked neighbouring points along the scan line; if the larger of the two distances exceeds the threshold ε, the point is a depth discontinuity and is labelled 1.
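The two edge tests above reduce to a small classifier; the argument layout and names below are illustrative only.

```python
def classify_edge(diff_prev, diff_next, dist_prev, dist_next, eps):
    """Edge test for one marked point p on a scan line.

    diff_prev, diff_next -- first differentials (eq. (5)) at the marked
                            neighbours on either side of p
    dist_prev, dist_next -- distances from p to those neighbours
    eps                  -- depth-discontinuity threshold
    Returns 0 for a normal-direction discontinuity (differential sign
    change), 1 for a depth discontinuity (gap above eps), else None.
    """
    if diff_prev * diff_next < 0:
        return 0
    if max(dist_prev, dist_next) > eps:
        return 1
    return None
```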
C) Feature point and feature line extraction
First the edge points are partitioned into different boundary chains by the chain-code tracking method, and the points on each boundary chain are ordered clockwise; then each boundary chain is fitted into a group of end-to-end feature line segments by the split-and-merge method, as follows:
Initialization
Divide the boundary chain into several sections L_0, L_1, ..., L_n of equal length.
Split
Fit a straight line to each current section L_i by least squares. If the fitting residual E_i exceeds the allowed threshold T, take the point farthest from the fitted line as the breakpoint and break this section of the boundary chain into two, splitting repeatedly until the fitting error is satisfied; process the other sections in turn.
Merge
Merge adjacent sections whose joint fitting residual is below the threshold T into one section; the merged sections are the feature line segments, and the endpoints of the feature line segments are the feature points.
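In outline, the split-and-merge fitting of one boundary chain might look like this; the orthogonal (total least squares) line fit and the recursion details are assumptions of the sketch, and only 2-D chains are handled for brevity.

```python
import math

def fit_line(points):
    """Orthogonal least-squares line through 2-D points -> (centroid, unit dir)."""
    n = float(len(points))
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)  # principal direction
    return (mx, my), (math.cos(theta), math.sin(theta))

def residual(p, centroid, direction):
    """Orthogonal distance of p from the fitted line."""
    dx, dy = p[0] - centroid[0], p[1] - centroid[1]
    return abs(dx * -direction[1] + dy * direction[0])

def split(points, T):
    """Recursively break a boundary chain until every piece fits a line
    with maximum residual below the tolerance T."""
    c, d = fit_line(points)
    worst = max(range(len(points)), key=lambda k: residual(points[k], c, d))
    if residual(points[worst], c, d) <= T or len(points) <= 2:
        return [points]
    k = max(1, min(worst, len(points) - 2))  # keep both halves non-trivial
    return split(points[:k + 1], T) + split(points[k:], T)

def split_and_merge(points, T):
    """Split, then merge adjacent pieces whose joint fit is still within T."""
    segs = split(points, T)
    merged = [segs[0]]
    for seg in segs[1:]:
        cand = merged[-1] + seg[1:]
        c, d = fit_line(cand)
        if max(residual(p, c, d) for p in cand) <= T:
            merged[-1] = cand
        else:
            merged.append(seg)
    return merged
```

For an L-shaped chain the split step breaks at the corner (the farthest point from the single fitted line) and the merge step leaves the two arms separate, so the corner survives as a feature point.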
4) Computing virtual features and constructing feature units
The feature lines correspond to the edge information of the scene and the feature points to its corners or cusps; together they form the basic structural framework of the scene. Real scenes inevitably contain occlusion, and under serious occlusion the overlapping region of two viewpoints may lack effective features, causing registration to fail. The invention therefore proposes the concept of virtual features: feature line segments that do not actually intersect are extended until they intersect, forming virtual feature line segments and virtual feature points, which are added to the feature line segments and feature points extracted above. The virtual features express the geometric relations between the features.
The invention nests each feature point together with its two adjacent feature line segments into a feature unit PU{P | L_1, L_2 : <n_1, n_2, n_3, p>}. A feature unit contains rich geometric information, namely three direction vectors and one position vector: the direction vectors (n_1, n_2) of the two feature line segments L_1 and L_2, pointing from the feature point along the segments; a third vector, the resultant of the first two, which is the normal of the plane formed by the two adjacent feature line segments, n_3 = n_1 × n_2, where × is the vector cross product; and the position vector, which is the feature point p.
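Building one feature unit from a feature point and the far endpoints of its two adjacent segments can be sketched as below; the dictionary layout and helper names are illustrative.

```python
import math

def _sub(u, v):
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def _unit(v):
    n = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return (v[0] / n, v[1] / n, v[2] / n)

def _cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def feature_unit(p, q1, q2):
    """Nest a feature point p and its two adjacent feature line segments
    into the unit PU{P | L1, L2 : <n1, n2, n3, p>}.  q1 and q2 are the
    far endpoints of the two segments; n1, n2 point from p along them,
    and n3 = n1 x n2 is the normal of their common plane."""
    n1 = _unit(_sub(q1, p))
    n2 = _unit(_sub(q2, p))
    return {"p": p, "n1": n1, "n2": n2, "n3": _cross(n1, n2)}
```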
5) Registration
Local registration means the pairwise registration of adjacent viewpoints that share an overlapping region. The invention adopts a coarse-to-fine registration scheme so that the data of the two viewpoints merge completely.
Coarse registration is carried out first: a single pair of matched feature units suffices to construct the spatial transformation that merges the three-dimensional data of the two viewpoints into the same coordinate system automatically. The rotation matrix is computed from the directions of the corresponding feature lines and their resultant vector, and the translation vector from the corresponding feature points. Given a pair of feature units PU{P | L_1, L_2 : <n_1, n_2, n_3, p>} ∈ S_1 and PU'{P' | L_1', L_2' : <n_1', n_2', n_3', p'>} ∈ S_2 belonging to the viewpoints S_1 and S_2 respectively, the rotation matrix R is computed from n_i and n_i' (i = 1, 2, 3) so that the error function below is minimal, using the quaternion method:
Err = \sum_{i=1}^{3} \| n_i' - R n_i \|^2    (6)
If p and p' have the coordinates (x, y, z) and (x', y', z') in their respective local coordinate systems, the translation vector is computed as

t = R (x, y, z)^T - (x', y', z')^T    (7)
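A sketch of the coarse-registration transform from one matched pair of feature units, assuming Horn's formulation of the quaternion method for minimising equation (6) over the three direction-vector pairs, followed by the translation of equation (7); the numpy-based helpers are illustrative, not the patent's code.

```python
import numpy as np

def quaternion_rotation(ns, ns_prime):
    """Rotation R minimising eq. (6), sum ||n_i' - R n_i||^2, via the
    quaternion (Horn) method.  ns, ns_prime: 3x3 arrays, one direction
    vector per row, with R @ n_i ~= n_i'."""
    S = sum(np.outer(a, b) for a, b in zip(ns, ns_prime))  # S = sum n_i n_i'^T
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy, Szx - Sxz, Sxy - Syx],
        [Syz - Szy, Sxx - Syy - Szz, Sxy + Syx, Szx + Sxz],
        [Szx - Sxz, Sxy + Syx, -Sxx + Syy - Szz, Syz + Szy],
        [Sxy - Syx, Szx + Sxz, Syz + Szy, -Sxx - Syy + Szz]])
    vals, vecs = np.linalg.eigh(N)
    w, x, y, z = vecs[:, np.argmax(vals)]  # unit quaternion, largest eigenvalue
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)]])

def translation(R, p, p_prime):
    """Translation vector of eq. (7): t = R p - p'."""
    return R @ np.asarray(p, dtype=float) - np.asarray(p_prime, dtype=float)
```

Quaternion sign is irrelevant here: q and -q give the same rotation matrix.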
All N candidate transformation estimates constructed by traversing the feature units are evaluated, and the matching degree determines the optimal registration transformation estimate; the matching degree is the number of matched feature units obtained under the following constraints on feature-unit matching:
Position constraint
p' is the feature point of viewpoint S_2 nearest to the point p of viewpoint S_1, and the distance between p and p' is below the threshold Δ_1.
Direction constraint
The distance and the angle between the lines carrying the two segments L_i and L_i' are below the thresholds Δ_2 and Δ_3 respectively.
Fine registration is carried out next: after coarse registration, the three-dimensional data of the two overlapping viewpoints are essentially merged in the same coordinate system, and the registration quality is slightly poorer only in some detail regions, so further fine registration is needed. The concrete method: 1) determine the overlapping region on the basis of the coarse registration; 2) divide the overlapping region into N blocks, each containing u × v points; 3) compute the mean difference of the corresponding points of each block, take the M blocks with the largest differences as adaptive regions, and run the ICP algorithm for fine registration only inside the adaptive regions, which markedly reduces the running time; 4) use the laser reflectivity of the laser sampled points together with their position information, searching for corresponding points with the Mahalanobis distance.
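The Mahalanobis corresponding-point search of step 4) might be sketched as follows; for brevity only point positions are used, whereas the method also folds in laser reflectivity, and the regularisation term and names are assumptions of this sketch.

```python
import numpy as np

def mahalanobis_corr(p, candidates, neighborhoods):
    """Pick the candidate p' minimising the Mahalanobis distance
    (p - p')^T C_{p'}^{-1} (p - p'), where C_{p'} is the covariance of
    p' estimated from its local neighbourhood of sampled points."""
    p = np.asarray(p, dtype=float)
    best, best_d2 = None, np.inf
    for q, nb in zip(candidates, neighborhoods):
        # covariance of the candidate's neighbourhood (variables in rows)
        C = np.cov(np.asarray(nb, dtype=float).T)
        C = C + 1e-9 * np.eye(len(p))  # regularise against singularity
        diff = p - np.asarray(q, dtype=float)
        d2 = float(diff @ np.linalg.inv(C) @ diff)
        if d2 < best_d2:
            best, best_d2 = q, d2
    return best, best_d2
```

Each candidate's covariance comes from its own neighbourhood, so the match is tolerant along directions in which the local surface itself spreads.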
For a multi-viewpoint three-dimensional scene of more than 2 viewpoints, the method further comprises global registration:
Global registration is the strategy of merging the three-dimensional data of all viewpoints into the same coordinate system. First all viewpoints are registered pairwise and their matching degrees computed; then an undirected weighted graph is generated with the viewpoints as nodes and the matching degrees as link weights; the global registration model is then established with the minimum-spanning-tree principle, a fixed viewpoint S_a is determined from the global registration model, and the data of the other viewpoints are transformed to it. The concrete method of establishing the global registration model is as follows:
Let the transformation matrix computed for the i-th pair of viewpoints with an overlapping region be T_i = [R_i, t_i] and the corresponding local matching degree g(T_i). With the viewpoints as nodes and g(T_i) as the node link weights, an undirected weighted connected graph G = <V, E> containing all viewpoints is generated, where V is the node set, E the edge set, and e_i = g(T_i) ∈ E; the minimum spanning tree is then obtained by the stepwise shortcircuit method, establishing the global registration model.
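The stepwise shortcircuit construction of the global registration model can be sketched with Prim's method. Because the pair with the largest matching degree is shortcircuited first, the tree built below maximises the link matching degrees, i.e. it is the minimum spanning tree on the reciprocal weights 1/g(T_i); choosing the most-branched node as the fixed viewpoint S_a follows the embodiment of Fig. 5. Names and data layout are illustrative.

```python
def global_registration_tree(matching):
    """Spanning tree over the view graph by Prim's method, always adding
    the boundary edge with the highest matching degree g(T_i).

    matching -- dict mapping frozenset({i, j}) -> g(T) for every pair
                of overlapping viewpoints.
    Returns (tree edges, fixed viewpoint S_a with the most branches).
    """
    views = set()
    for pair in matching:
        views |= pair
    start = next(iter(views))
    in_tree, edges = {start}, []
    while len(in_tree) < len(views):
        # "shortcircuit" the best edge joining the tree to a new view
        best = max((pair for pair in matching
                    if len(pair & in_tree) == 1),
                   key=lambda pair: matching[pair])
        edges.append(best)
        in_tree |= best
    degree = {v: sum(v in e for e in edges) for v in views}
    fixed = max(degree, key=degree.get)
    return edges, fixed
```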
The technical effects of the invention are:
Without any purchased equipment or manual participation, the invention solves the automatic registration of large-scale three-dimensional scene multi-viewpoint laser scanning data. Its advantages are: 1) the constructed feature units describe the relations between different features and reduce geometric complexity, making the method better suited to large-scale registration; 2) the introduced virtual features solve the registration problem of adjacent viewpoints whose overlapping region lacks effective features because of occlusion and similar factors; 3) the global registration model built on the minimum-spanning-tree principle gives reliable registration accuracy; 4) the method provided by the invention has general applicability, suiting both scenes of modern structure and complex large-scale cultural relics and ancient buildings.
Description of drawings
Fig. 1 is a schematic diagram of the laser scanning of the measured object in the scene;
Fig. 2 (a) shows the distribution of the laser sampled points within one vertical scanning plane;
Fig. 2 (b) shows the statistics of the laser sampled points of the vertical scanning plane by horizontal projection value;
Fig. 3 (a) and 3 (b) are an example of main building structure extraction;
Fig. 4 (a), 4 (b), 4 (c), 4 (d) and 4 (e) are an example of two-viewpoint three-dimensional data registration under serious occlusion;
Fig. 5 shows the generation process of the global registration model;
Fig. 6 (a) and 6 (b) are a global registration example;
Embodiments
The technical scheme of the invention is further described below with reference to the accompanying drawings and specific embodiments.
The required input of the invention is laser scanning data obtained by a medium- or long-range three-dimensional laser range finder, whose working distance is generally above 200 m with a measuring accuracy along the beam direction of 5 cm at 200 m. The concrete implementation steps of the invention are as follows:
Embodiment 1
1) Data acquisition
The laser range finder is placed 20-50 m in front of the measured scene and adjusted so that its Z axis is perpendicular to the ground and its X and Y axes parallel to the ground; the scene is scanned row by row (as in Fig. 1) to obtain the three-dimensional data of the corresponding viewpoint. In addition, the scan data of adjacent viewpoints keep a 10%-20% overlap so that the three-dimensional data of all viewpoints can be registered smoothly.
2) Extraction of the main building structure
Import the laser scanning data corresponding to each viewpoint, compute the horizontal distance of every point of each vertical scanning plane to the laser range finder and, from the horizontal distances, detect vertical line segments with the Hough transform (as in Fig. 2 (a) and Fig. 2 (b)). Measure the length of each vertical segment by the number of points it contains and take the horizontal distance corresponding to the longest vertical segment as the reference d_m; the data within δ_1 metres in front of the reference and δ_2 metres behind it are the main building structure. Fig. 2 (a) shows the distribution of the laser sampled points of one vertical scanning plane: the retained interval is the main building structure, and the scenery and random points in front of and behind the retained interval are removed. Fig. 2 (b) is the statistical distribution of the horizontal distances of the laser sampled points of the plane in Fig. 2 (a): the points on the main building wall lie on the same vertical line, i.e. their horizontal distances are equal.
In this embodiment δ_1 = 1.0 m and δ_2 = 0.5 m, which extracts the main building structure. Fig. 3 (a) and 3 (b) are an extraction example: Fig. 3 (a) is the panoramic measured drawing of the row-by-row laser scan of the measured object, where A is the main building structure (wall), B is the ground, and C is an occluder of the main structure; Fig. 3 (b) is the result of extracting the main building structure with the above method, where A is the extracted main building structure (wall).
3) Feature extraction
A) Computing the first differential of each point on a scan line
For each sampled point of every scan line, perform the local quadratic fit of equation (1) over the K neighbouring points on either side of the point along the scan line, i.e. the fitting interval [-K, K]; K is 3-5. This embodiment uses K = 3 (K may of course be 4 or 5, chosen according to the density of the sampled points; the actual choice belongs to the ordinary technical knowledge and skill of those skilled in the art);

\hat{d}(i) = a_0 + a_1 i + a_2 i^2    (1)

where i is the index of the sampled point on the scan line, i = 0, 1, ..., n; \hat{d}(i) is the fitted horizontal distance from the i-th point to the laser range finder; and a_0, a_1, a_2 are the fitting coefficients of equation (1).
The traditional approach computes the fitting coefficients a_0, a_1, a_2 by least squares from the fitting error function of equation (2):

E(i) = \sum \{ d(i) - \hat{d}(i) \}^2    (2)

where d(i) is the measured horizontal distance from the i-th point to the laser range finder.
Least squares, however, is not robust to discontinuity points and outliers, so an adaptive weighted curve-fitting method is used to obtain a_0, a_1, a_2, with the weighted fitting error function of equation (3):

E'(i) = \sum w(i) \{ d(i) - \hat{d}(i) \}^2    (3)

where w(i) is the weight of the i-th point on the scan line, representing the reliability of the point. Iterative weight correction progressively eliminates the influence of outlying points so that the fit is carried out only on reliable points. The concrete method is as follows:
Suppose the fitting coefficients a_0^m, a_1^m, a_2^m of the m-th iteration are known, and the fitting coefficients a_0^{m+1}, a_1^{m+1}, a_2^{m+1} of the (m+1)-th iteration are sought. First compute the fitting residual of the m-th iteration and the weights derived from it:
e_m(i) = \left( d(i) - \hat{d}_m(i) \right)^2    (4)

w_{m+1}(i) = \begin{cases} 1, & e_m(i) \le b_s \\ b_s / e_m(i), & e_m(i) > b_s \end{cases}    (5)
where b_s is the allowed error threshold.
With the initial weights w_0(i) = 1, the fitting coefficients and weights are revised repeatedly with equations (4) and (5) until the fitted curve satisfies the error requirement and covers the largest number of points. The points participating in the fit are marked. Then the first differential of equation (1) is taken, giving the first differential at every marked point on the scan line:
\partial \hat{d}(i) / \partial i = a_1 + 2 a_2 i    (6)
B) Edge point detection
For any marked point p on a scan line, with marked neighbouring points p_1 and p_2 on either side, examine the signs of the first differentials at p_1 and p_2; if the signs are opposite, p is a normal-direction discontinuity and is labelled 0.
Compute the distances from p to p_1 and p_2; if dis = max(dis(p, p_1), dis(p, p_2)) > ε (ε a threshold), p is a depth discontinuity and is labelled 1.
C) Feature point and feature line extraction
First the edge points are partitioned into different boundary chains by the chain-code tracking method, and the points on each boundary chain are ordered clockwise; then each boundary chain is fitted into a group of end-to-end feature line segments by the split-and-merge method, as follows:
Initialization
Divide the boundary chain into several sections L_0, L_1, ..., L_n of equal length.
Split
Fit a straight line to each current section L_i by least squares; if the fitting residual E_i exceeds the allowed threshold T, take the point farthest from the fitted line as the breakpoint and break this section of the boundary chain into two, splitting repeatedly until the fitting error is satisfied; process the other sections in turn.
Merge
Merge adjacent sections whose joint fitting residual is below the threshold T into one section; the merged sections are the feature line segments, and the endpoints of the feature line segments are the feature points.
4) Computing virtual features and constructing feature units
Feature line segments that do not actually intersect are selected and extended until they intersect, forming virtual feature line segments and virtual feature points, which are added to the feature line segments and feature points extracted above. Each feature point is nested together with its two adjacent feature line segments into a feature unit PU{P | L_1, L_2 : <n_1, n_2, n_3, p>}, where n_1 and n_2 are the direction vectors of the feature line segments L_1 and L_2, n_3 = n_1 × n_2 is the third vector synthesised from the first two, × is the vector cross product, and p is the feature point.
5) Registration
A) Coarse registration: given a pair of feature units PU{P | L_1, L_2 : <n_1, n_2, n_3, p>} ∈ S_1 and PU'{P' | L_1', L_2' : <n_1', n_2', n_3', p'>} ∈ S_2 belonging to the two viewpoints S_1 and S_2 respectively, the rotation matrix R is obtained from n_i and n_i' (i = 1, 2, 3) so that the error function below is minimal, using the quaternion method:

Err = \sum_{i=1}^{3} \| n_i' - R n_i \|^2    (7)

If p and p' have the coordinates (x, y, z) and (x', y', z') in their respective local coordinate systems, the translation vector is computed as

t = R (x, y, z)^T - (x', y', z')^T    (8)
Every pair of corresponding feature units yields a candidate registration transformation estimate; the estimate with the largest matching degree is selected as the optimal registration transformation. The matching constraints on feature units are as follows:
Position constraint
p' is the feature point of viewpoint S_2 nearest to the point p of viewpoint S_1, and the distance between p and p' is below the threshold Δ_1.
Direction constraint
The distance and the angle between the lines carrying the two segments L_i and L_i' are below the thresholds Δ_2 and Δ_3 respectively.
B) Fine registration: using the transformation estimate obtained by coarse registration, the three-dimensional data of the two viewpoints are merged into the same coordinate system and the overlapping region of the two viewpoints is determined. The overlapping region is divided into 10 blocks; the mean difference of the corresponding points of each block is computed, and the 3 blocks with the largest differences are selected as adaptive regions in which the ICP algorithm performs fine registration. To increase the robustness of the corresponding-point search, the reflectivity and the three-dimensional information of the laser sampled points are used, and the difference of two points p ∈ S_1 and p' ∈ S_2 is measured with the Mahalanobis distance, namely
d^2(p, p') = \min_{p' \in S_2} (p - p')^T C_{p'}^{-1} (p - p')    (9)
where d^2(p, p') is the Mahalanobis distance of p and p', and C_{p'} is the covariance of p' and its k × k neighbouring points; this embodiment uses k = 3.
Figs. 4(a), 4(b), 4(c), 4(d) and 4(e) show the registration result of the three-dimensional data of two viewpoints of a hostel: Fig. 4(a) is the laser sample point cloud acquired from viewpoint S1; Fig. 4(b) is the laser sample point cloud acquired from viewpoint S2; Figs. 4(c) and 4(d) are coarse registration results obtained with the above method, where Fig. 4(c) does not use virtual features and Fig. 4(d) does; Fig. 4(e) is the fine registration result obtained with the above method.
Embodiment 2
This embodiment applies the method of the invention to the automatic registration of the laser scanning data of 8 viewpoints of a large-scale three-dimensional scene, a student dining hall. Its steps 1)-5) are the same as in Embodiment 1; the difference is that, since the number of viewpoints exceeds 2, global registration is required.
First, all viewpoint pairs with overlapping regions are registered pairwise, and the global registration model is established using the minimum-spanning-tree principle. Then, according to the global registration model, one viewpoint S_a is selected and held fixed, and the laser scanning data of all other viewpoints are transformed into the coordinate system of S_a.
Specifically, for each pair of viewpoints with an overlapping region, a transform matrix T_i = [R_i, t_i] and a local matching degree g(T_i) (the number of registered points) can be computed. Taking each viewpoint as a node and g(T_i) as the connection weight, an undirected weighted connected graph G = <V, E> containing all viewpoints is generated, where V is the node set, E is the edge (weight) set, and e_i = g(T_i) ∈ E. According to the local matching degrees g(T_i), a spanning tree of G = <V, E> is obtained by progressively joining viewpoint pairs, which establishes the global registration model.
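The progressive joining of viewpoint pairs can be sketched as a Kruskal-style spanning-tree construction (a hypothetical helper; the patent does not prescribe this exact algorithm). Since the edge weight is the matching degree and the best-matching pairs are joined first, the sketch sorts pairs in decreasing order of g(T_i) and chooses as S_a the viewpoint with the most tree branches, mirroring the process of Fig. 5.

```python
def global_registration_tree(n_views, pair_scores):
    """Build the global-registration model as a spanning tree over viewpoints.

    pair_scores : dict {(i, j): g} with the local matching degree g(T_i)
                  of every overlapping viewpoint pair.
    Returns the tree edges and the root viewpoint S_a (most branches).
    """
    parent = list(range(n_views))          # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    edges = []
    # join pairs in decreasing order of matching degree
    for (i, j), g in sorted(pair_scores.items(), key=lambda kv: -kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj:                       # joining keeps the graph a tree
            parent[ri] = rj
            edges.append((i, j))

    degree = [0] * n_views
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1
    root = max(range(n_views), key=lambda v: degree[v])  # S_a
    return edges, root
```

With six viewpoints and a hub-like score pattern as in Fig. 5, the hub viewpoint ends up as S_a.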
Fig. 5 is a schematic diagram of the generation of the global registration model; nodes 1, 2, 3, 4, 5 and 6 in the figure represent 6 viewpoints. Step 1: compute the local matching degrees of the pairwise registration of all viewpoints. Step 2: join the two viewpoints with the maximum matching degree. Steps 3-5: join the viewpoint pairs with the next-largest matching degrees in turn, until all viewpoints are connected. Step 6: fix the viewpoint with the most branches, viewpoint 4, as S_a.
Fig. 6(a) shows the global registration result of the 8 viewpoints of the student dining hall of this embodiment; Fig. 6(b) is an enlarged view of a window region of Fig. 6(a).

Claims (2)

1. An automatic registration method for multi-viewpoint laser scanning data of large-scale three-dimensional scenes, comprising the following steps:
1) Acquire data
Place a laser range finder in front of the scene to be measured, adjust the laser range finder so that its Z axis is perpendicular to the ground and its X and Y axes are parallel to the ground, and scan the scene row by row and column by column to obtain the three-dimensional data of each viewpoint; the scan data of adjacent viewpoints keep a 10%-20% overlap;
2) Extract the main building structure
Import the laser three-dimensional scan data corresponding to each viewpoint, compute the horizontal distance between each sample point on every vertical scan line and the laser range finder, and detect vertical line segments with a Hough transform based on these horizontal distances; measure the length of each vertical segment by the number of points it contains, take the horizontal distance corresponding to the longest vertical segment as the reference, and keep as the main building structure the data points within δ1 meters in front of the reference and δ2 meters behind it; said δ1 is 1 meter and said δ2 is 0.5 meter;
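A simplified, illustrative stand-in for the Hough detection of vertical segments on one vertical scan line: since all points of a vertical line share one horizontal distance, a one-dimensional accumulator over distance bins suffices here. The bin width and minimum support are example values, and the function name is hypothetical.

```python
import numpy as np


def detect_vertical_segments(dists, bin_width=0.05, min_pts=5):
    """1D Hough-style voting on one vertical scan line (claim step 2).

    dists : horizontal distances of the samples to the range finder.
    Samples whose distance falls in the same bin vote for the same
    vertical line. Returns (distance, supporting point count) per line;
    the count of the longest line is the segment length used as reference.
    """
    bins = np.round(np.asarray(dists) / bin_width).astype(int)  # vote bins
    vals, counts = np.unique(bins, return_counts=True)
    return [(v * bin_width, int(c)) for v, c in zip(vals, counts) if c >= min_pts]
```

A full 2D Hough transform would additionally locate the segment's vertical extent; this sketch only recovers the supporting distance and point count.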
3) Perform feature extraction on the retained data points of the main building structure
a) First differential of each point on a scan line
Fit the sample points on each scan line locally with the quadratic of equation (1), taking as the fitting interval the k neighboring points on either side of the point along the scan-line direction; the fitting interval [-K, K] is [-3, 3];
d̂(i) = α_0 + α_1 i + α_2 i^2    (1)
where i is the index of the sample point on the scan line, i = 0, 1, …, n; d̂(i) is the fitted horizontal distance from the i-th point to the laser range finder; and α_0, α_1, α_2 are the fitting coefficients of equation (1);
Adopt an adaptively weighted fitting method to solve for the fitting coefficients α_0, α_1, α_2, with the weighted fitting error function
E'(i) = Σ w(i) { d(i) - d̂(i) }^2    (2)
where d(i) is the true horizontal distance from the i-th point to the laser range finder, and w(i) is the weight of the i-th point on the scan line, representing the reliability of the point. An iterative weight-correction rule is adopted to progressively eliminate the influence of outlier points, so that only reliable points take part in the fit. Concretely: suppose the coefficients of the m-th iteration, α_0^m, α_1^m, α_2^m, are known and the coefficients of the (m+1)-th iteration, α_0^{m+1}, α_1^{m+1}, α_2^{m+1}, are sought; first compute the fit residual of the m-th iteration and the weights of the (m+1)-th iteration:
e_m(i) = ( d(i) - d̂_m(i) )^2    (3)
w_{m+1}(i) = 1,             if e_m(i) ≤ b_s
w_{m+1}(i) = b_s / e_m(i),  if e_m(i) > b_s    (4)
where b_s is the error-tolerance threshold. Starting from initial weights w_0(i) = 1, the fitting coefficients and weights are corrected repeatedly with equations (3) and (4) until the fitted curve satisfies the error requirement and contains the maximum number of points; the points that take part in the fit are marked;
Then take the first differential of equation (1) and compute it for every marked point on the scan line:
∂d̂(i)/∂i = α_1 + 2 α_2 i    (5)
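The adaptively weighted fit of equations (1)-(5) can be sketched as below; this is an illustration, with `b_s` and the iteration count chosen as arbitrary example values rather than values from the patent.

```python
import numpy as np


def adaptive_fit(d, b_s=0.01, iters=10):
    """Iteratively re-weighted quadratic fit of horizontal distances d(i)
    along one scan line (eqs. 1-4), then the first differential (eq. 5).
    Returns (fitting coefficients alpha, differentials, final weights)."""
    n = len(d)
    i = np.arange(n, dtype=float)
    A = np.stack([np.ones(n), i, i * i], axis=1)   # basis 1, i, i^2 (eq. 1)
    w = np.ones(n)                                  # initial weights w_0 = 1
    for _ in range(iters):
        sw = np.sqrt(w)
        # weighted least squares: minimize sum w(i) (d(i) - d_hat(i))^2 (eq. 2)
        alpha, *_ = np.linalg.lstsq(A * sw[:, None], d * sw, rcond=None)
        e = (d - A @ alpha) ** 2                    # fit residuals (eq. 3)
        w = np.where(e <= b_s, 1.0, b_s / e)        # weight update (eq. 4)
    deriv = alpha[1] + 2 * alpha[2] * i             # first differential (eq. 5)
    return alpha, deriv, w
```

On clean quadratic data the weights stay at 1 and the coefficients are recovered exactly; an outlier sample is driven to a near-zero weight and no longer influences the fit.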
b) Detect edge points
For each marked point on a scan line, examine the signs of the first differentials at the two marked points on either side of it along the scan-line direction; a point where the sign changes is a normal-direction discontinuity point and is labeled 0;
For each marked point on a scan line, compute its distances to the two marked points on either side of it along the scan-line direction and take the maximum; a point whose maximum distance exceeds the threshold ε is a depth discontinuity point and is labeled 1;
c) Extract feature point-lines
First use chain-code following to separate the edge points into distinct boundary chains, ordering the points on each chain clockwise; then fit each boundary chain to a group of end-to-end feature line segments by split-and-merge, as follows:
Initialization
Divide the boundary chain into several equal-length sections L_0, L_1, …, L_n;
Split
Fit each current section L_i to a straight line by least squares; if the least-squares residual E_i exceeds the allowed threshold T, break the section in two at the point farthest from the fitted line; continue splitting until the fitting error is satisfied, and process the remaining sections in turn;
Merge
Merge adjacent sections whose joint fit residual is less than the threshold T into a single feature line segment; the endpoints of the feature line segments are the feature points;
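A sketch of the split-and-merge fitting, with point-to-chord distance standing in for the least-squares residual E_i for brevity, so this approximates the claimed method rather than implementing it literally; the threshold value is illustrative.

```python
import numpy as np


def split_merge(chain, T=0.05):
    """Split-and-merge fitting of an ordered boundary chain (claim step 3c).

    chain : (n, 2) array of ordered edge points.
    Returns the indices of the breakpoints; consecutive breakpoints bound
    one feature line segment, and the breakpoints are the feature points.
    """
    def chord_dists(pts):
        # distance of each point to the chord joining the span's endpoints
        a, b = pts[0], pts[-1]
        ab = b - a
        L = np.linalg.norm(ab)
        if L == 0.0:
            return np.zeros(len(pts))
        v = pts - a
        return np.abs(v[:, 0] * ab[1] - v[:, 1] * ab[0]) / L

    def split(lo, hi):
        d = chord_dists(chain[lo:hi + 1])
        k = int(np.argmax(d))
        if d[k] > T and 0 < k < hi - lo:     # break at the farthest point
            return split(lo, lo + k) + split(lo + k, hi)[1:]
        return [lo, hi]

    bps = split(0, len(chain) - 1)
    # merge: drop a breakpoint if the joined span still fits within T
    merged = [bps[0]]
    for j in range(1, len(bps) - 1):
        if chord_dists(chain[merged[-1]:bps[j + 1] + 1]).max() > T:
            merged.append(bps[j])
    merged.append(bps[-1])
    return merged
```

For an L-shaped chain the method returns exactly the two endpoints and the corner as breakpoints.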
4) Compute virtual features and construct feature units
Among the extracted feature line segments, select segments that do not actually intersect, extend them until they intersect, and add the resulting virtual feature line segments and virtual feature points to the extracted feature line segments and feature points; each feature point, together with its two adjacent feature line segments, is assembled into a feature unit PU{p | L1, L2: <n1, n2, n3, p>}, where n1 and n2 are the direction vectors of the feature line segments L1 and L2, n3 = n1 × n2 is the third vector synthesized from the first two, × denotes the vector cross product, and p is the feature point;
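Computing a virtual feature point by "intersecting" two extended, non-intersecting segments can be sketched as the midpoint of the closest-approach points of the two carrier lines. This is our interpretation: in 3D the extended lines are generally skew and have no exact intersection, so the closest-approach midpoint serves as the approximate intersection.

```python
import numpy as np


def virtual_feature_point(a0, a1, b0, b1):
    """Virtual feature point from two disjoint segments (claim step 4).

    a0, a1 : endpoints of segment A;  b0, b1 : endpoints of segment B.
    Extends both segments to infinite lines and returns the midpoint of
    their mutual closest points.
    """
    u = a1 - a0                      # direction of line A
    v = b1 - b0                      # direction of line B
    w = a0 - b0
    uu, vv, uv = u @ u, v @ v, u @ v
    denom = uu * vv - uv * uv
    if abs(denom) < 1e-12:
        raise ValueError("lines are parallel; no virtual intersection")
    s = (uv * (v @ w) - vv * (u @ w)) / denom
    t = (uu * (v @ w) - uv * (u @ w)) / denom
    pa = a0 + s * u                  # closest point on line A
    pb = b0 + t * v                  # closest point on line B
    return 0.5 * (pa + pb)
```

The direction vectors u and v (normalized) double as n1 and n2 of the resulting feature unit, with n3 = n1 × n2.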
5) Registration
a) Coarse registration
Given a pair of feature units belonging to viewpoints S1 and S2 respectively, PU{p | L1, L2: <n1, n2, n3, p>} ∈ S1 and PU'{p' | L1', L2': <n1', n2', n3', p'>} ∈ S2, the rotation matrix R is obtained from n_i and n_i' (i = 1, 2, 3) by minimizing the following error function, solved with the quaternion method:
Err = Σ_{i=1}^{3} ||n_i' - R n_i||^2    (6)
If the coordinates of p and p' in their respective local coordinate systems are (x, y, z) and (x', y', z'), compute the translation vector
t = R [x, y, z]^T - [x', y', z']^T    (7)
Each pair of corresponding feature units yields one candidate registration transform estimate; the estimate with the maximum matching degree is selected as the optimal registration transform. The feature-unit matching constraints are as follows:
Position constraint
p' is the feature point in viewpoint S2 nearest to point p in viewpoint S1, and the distance between p and p' is less than the threshold Δ1;
Direction constraint
The distance and the angle between the lines containing segments L_i and L_i' are less than Δ2 and Δ3, respectively;
b) Fine registration
Using the transform estimate obtained by coarse registration, merge the three-dimensional data of the two viewpoints into the same coordinate system and determine the overlapping region; divide the overlap region of each viewpoint into N blocks and compute the mean difference of the corresponding points within each block; select the M blocks with the largest differences as adaptive regions and perform fine registration within them with the ICP algorithm; to increase the robustness of the corresponding-point search, use both the reflectance and the three-dimensional information of the laser sample points, measuring the difference between two points p ∈ S1 and p' ∈ S2 with the Mahalanobis distance:
d^2(p, p') = min_{p' ∈ S2} (p - p')^T C_{p'}^{-1} (p - p')    (8)
where d^2(p, p') is the Mahalanobis distance between p and p', and C_{p'} is the covariance of p' and its k × k neighborhood points, with k = 3.
2. The automatic registration method for multi-viewpoint laser scanning data of large-scale three-dimensional scenes according to claim 1, characterized in that, for multi-viewpoint three-dimensional scenes with more than 2 viewpoints, the method further comprises global registration:
First register pairwise all viewpoints that have overlapping regions, establish the global registration model using the minimum-spanning-tree principle, then select one viewpoint S_a according to the global registration model and hold it fixed, and transform the laser scanning data of the other viewpoints into the coordinate system of S_a; the global registration model is established as follows:
Let the transform matrix computed for the i-th pair of viewpoints with an overlapping region be T_i = [R_i, t_i] and the corresponding local matching degree be g(T_i); taking the viewpoints as nodes and g(T_i) as the node connection weights, generate an undirected weighted connected graph G = <V, E> containing all viewpoints, where V is the node set, E is the edge set, and e_i = g(T_i) ∈ E; obtain a spanning tree by progressively joining viewpoint pairs, thereby establishing the global registration model.
CNB2004100311577A 2004-04-13 2004-04-13 Automatic registration method for large-scale three-dimensional scene multi-view laser scanning data Expired - Fee Related CN1312633C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2004100311577A CN1312633C (en) 2004-04-13 2004-04-13 Automatic registration method for large-scale three-dimensional scene multi-view laser scanning data


Publications (2)

Publication Number Publication Date
CN1684105A CN1684105A (en) 2005-10-19
CN1312633C true CN1312633C (en) 2007-04-25

Family

ID=35263428

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2004100311577A Expired - Fee Related CN1312633C (en) 2004-04-13 2004-04-13 Automatic registration method for large-scale three-dimensional scene multi-view laser scanning data

Country Status (1)

Country Link
CN (1) CN1312633C (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2016559A2 (en) * 2006-05-05 2009-01-21 Thomson Licensing System and method for three-dimensional object reconstruction from two-dimensional images
CN101726255B (en) * 2008-10-24 2011-05-04 中国科学院光电研究院 Method for extracting interesting buildings from three-dimensional laser point cloud data
CN102271255B (en) * 2011-08-09 2013-01-16 清华大学 Motion estimation method and device for dual-spelling stereo video coding
US9183631B2 (en) * 2012-06-29 2015-11-10 Mitsubishi Electric Research Laboratories, Inc. Method for registering points and planes of 3D data in multiple coordinate systems
CN102903117A (en) * 2012-10-24 2013-01-30 深圳大学 3D (three-dimensional) image registration method and device based on conformal geometric algebra
US9189702B2 (en) * 2012-12-31 2015-11-17 Cognex Corporation Imaging system for determining multi-view alignment
CN105787933B (en) * 2016-02-19 2018-11-30 武汉理工大学 Water front three-dimensional reconstruction apparatus and method based on multi-angle of view point cloud registering
CN110136179B (en) * 2018-02-08 2022-02-22 中国人民解放军战略支援部队信息工程大学 Three-dimensional laser point cloud registration method and device based on straight line fitting
CN110399892B (en) * 2018-04-24 2022-12-02 北京京东尚科信息技术有限公司 Environmental feature extraction method and device
CN108921095A (en) * 2018-07-03 2018-11-30 安徽灵图壹智能科技有限公司 A kind of parking occupancy management system neural network based, method and parking stall
CN109978919B (en) * 2019-03-22 2021-06-04 广州小鹏汽车科技有限公司 Monocular camera-based vehicle positioning method and system
CN110209160A (en) * 2019-04-29 2019-09-06 北京云迹科技有限公司 Barrier extracting method and device based on laser
CN110706357B (en) * 2019-10-10 2023-02-24 青岛大学附属医院 Navigation system
CN112381863B (en) * 2020-11-12 2022-04-05 中国电建集团江西省电力设计院有限公司 Ground laser point cloud method for forest scene
CN113155510B (en) * 2021-04-16 2024-05-10 伊达生物有限公司 Tissue cell segmentation sampling system and method
CN117428582B (en) * 2023-12-22 2024-03-15 泉州装备制造研究所 Machining method and medium for special-shaped workpiece

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5633951A (en) * 1992-12-18 1997-05-27 North America Philips Corporation Registration of volumetric images which are relatively elastically deformed by matching surfaces
US5937083A (en) * 1996-04-29 1999-08-10 The United States Of America As Represented By The Department Of Health And Human Services Image registration using closest corresponding voxels with an iterative registration process
CN1408102A (en) * 1999-07-26 2003-04-02 计算机化医疗系统公司 Automated image fusion/alignment system and method
US20030160970A1 (en) * 2002-01-30 2003-08-28 Anup Basu Method and apparatus for high resolution 3D scanning
CN1483999A (en) * 2003-08-15 2004-03-24 清华大学 Method and system for measruing object two-dimensiond surface outline


Also Published As

Publication number Publication date
CN1684105A (en) 2005-10-19

Similar Documents

Publication Publication Date Title
CN1312633C (en) Automatic registration method for large-scale three-dimensional scene multi-view laser scanning data
CN111486855B (en) Indoor two-dimensional semantic grid map construction method with object navigation points
Zebedin et al. Fusion of feature-and area-based information for urban buildings modeling from aerial imagery
Pfeifer et al. LiDAR data filtering and DTM generation
Baillard et al. A plane-sweep strategy for the 3D reconstruction of buildings from multiple images
Csanyi et al. Improvement of lidar data accuracy using lidar-specific ground targets
Morgan et al. Automatic building extraction from airborne laser scanning data
Forlani et al. Complete classification of raw LIDAR data and 3D reconstruction of buildings
Werner et al. New techniques for automated architectural reconstruction from photographs
Ding et al. Automatic registration of aerial imagery with untextured 3d lidar models
CN111444872B (en) Method for measuring geomorphic parameters of Danxia
Hammoudi et al. Extracting wire-frame models of street facades from 3D point clouds and the corresponding cadastral map
CN113916130B (en) Building position measuring method based on least square method
McIntosh et al. Integration of laser-derived DSMs and matched image edges for generating an accurate surface model
Stal et al. Test case on the quality analysis of structure from motion in airborne applications
CN112446844A (en) Point cloud feature extraction and registration fusion method
Milde et al. Building reconstruction using a structural description based on a formal grammar
Zhao et al. A robust method for registering ground-based laser range images of urban outdoor objects
Zhang et al. Automatic terrain extraction using multiple image pair and back matching
Tournaire et al. Towards a sub-decimetric georeferencing of groundbased mobile mapping systems in urban areas: Matching ground-based and aerial-based imagery using roadmarks
Monnier et al. Registration of terrestrial mobile laser data on 2D or 3D geographic database by use of a non-rigid ICP approach.
Riveiro et al. Automatic creation of structural models from point cloud data: the case of masonry structures
CN112907567B (en) SAR image ordered artificial structure extraction method based on spatial reasoning method
Thiele et al. Building reconstruction from InSAR data by detail analysis of phase profiles
Surface modelling of urban 3D objects from LiDAR data

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20070425

Termination date: 20140413