CN111899291A - Automatic registration method for coarse-to-fine urban point cloud based on multi-source dimension decomposition - Google Patents


Info

Publication number
CN111899291A
Authority
CN
China
Prior art keywords
point cloud
scene
point
registration
straight line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010778539.5A
Other languages
Chinese (zh)
Inventor
葛旭明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Research Center Of Digital City Engineering
Southwest Jiaotong University
Original Assignee
Shenzhen Research Center Of Digital City Engineering
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Research Center Of Digital City Engineering, Southwest Jiaotong University filed Critical Shenzhen Research Center Of Digital City Engineering
Priority to CN202010778539.5A priority Critical patent/CN111899291A/en
Publication of CN111899291A publication Critical patent/CN111899291A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2518Projection by scanning of the object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds


Abstract

The invention relates to a coarse-to-fine automatic registration method for urban point clouds based on multi-source dimension decomposition. Combining the characteristics of urban scanning targets with the working characteristics of ground scanning instruments, the method proposes a dimension decomposition approach that simultaneously reduces the data dimension and the parameter dimension, enables efficient identification and extraction of features, and allows the iterative solution to converge efficiently, thereby breaking through the bottleneck of registration accuracy and speed in current large-scale urban point cloud scene registration. The method makes full use of the ability of point clouds to reshape the three-dimensional structure of a scene, takes the specific attributes of the problem into account, and provides an object-oriented registration scheme, offering a new idea for the development of the point cloud registration problem. Moreover, the method requires no manual scene identification, no a-priori initial solution and no subsequent interactive point selection: it is a fully automatic pipeline from the input of raw data to the acquisition of a refined three-dimensional point cloud model, which helps reduce the expenditure of time, manpower and material resources in actual production.

Description

Automatic registration method for coarse-to-fine urban point cloud based on multi-source dimension decomposition
Technical Field
The invention relates to the technical field of computer vision, in particular to an automatic registration method of urban point cloud from coarse to fine based on multi-source dimension decomposition.
Background
The development of digital cities is inseparable from virtual city environment construction technologies such as three-dimensional land-space management, expression and visualization. These technologies underpin urban information perception and interconnection as well as "digital" applications such as VR/AR, so a large-scale, fine-grained, semantically rich three-dimensional urban model has become a precondition for smart city construction. Point cloud data have become a standard data type for spatial data processing because of their unique advantages over traditional measurement data: the data volume is abundant and the three-dimensional spatial coordinates of the scanned object can be obtained directly. The processing of point cloud data has therefore become a research hotspot in photogrammetry, computer vision, computer graphics, robotics and related fields, with rich research results.
With the development of three-dimensional digital cities, multi-source point cloud registration has attracted wide attention in recent years. Traditional methods encounter problems of varying degrees when processing today's large-scale, multi-source, multi-scale point clouds with complex structures, such as low repeatability and poor distinctiveness of extracted feature points, and the limitation of human cognitive intervention in feature description, all of which negatively affect three-dimensional point cloud fusion registration and subsequent three-dimensional model reconstruction.
At present, most point cloud matching methods suffer from the following problems: existing commercial software performs registration of arbitrarily positioned point clouds mainly through manual point selection (including manual target identification and sequence establishment), which requires a large amount of manual participation, has low processing efficiency and is costly; many sensors are now available for point cloud collection and the scenes covered are rich (e.g. indoor, urban, forest), but the methods lack pertinence to the scene, and all face bottlenecks in efficiency and accuracy.
Therefore, an object-oriented point cloud automatic registration method is provided to break through the bottleneck problems of registration accuracy and speed in current large-scale urban point cloud scene registration.
Disclosure of Invention
In view of this, an object of the present application is to provide an automatic coarse-to-fine urban point cloud registration method based on multi-source dimension decomposition, so as to improve the efficiency of current large-scale urban point cloud fusion registration and three-dimensional reconstruction, improve the registration accuracy, and reduce the costs of manpower and material resources.
In order to achieve the above object, the present application provides the following technical solutions.
The method for automatically registering the urban point cloud from coarse to fine based on multi-source dimension decomposition comprises the following steps:
101. performing ground scanning of the urban scene and collecting point cloud data, where the scans of two adjacent stations must share an overlapping artificial (building) area;
102. preprocessing the original point cloud data, eliminating noise points, vertically projecting the spatial three-dimensional point cloud onto a plane whose normal is the zenith direction, setting a point cloud density threshold ρ0, and extracting the building side facades from the projected two-dimensional planar point cloud using this threshold;
103. performing straight-line detection and segmentation of the side-facade projection point cloud on the two-dimensional plane, solving the κ parameter and the horizontal translation parameters in the (x, y) direction using paired straight line segments, and thus achieving two-dimensional registration of the scene;
104. rapidly calculating the height difference using the overlapping ground points: randomly obtaining N ground corresponding areas and taking the average height difference as the vertical translation value Δz;
105. completing step 104 for each set of solutions in the candidate set obtained in step 103, performing a nearest-point test with a random subset of the scene, and selecting the optimal solution meeting the conditions as the final solution; the two adjacent point cloud scenes thus complete coarse registration;
106. combining the large amount of planar information on buildings and performing fine registration with faces as primitives, obtaining the scene fine-registration result.
Preferably, the ground scanning of the urban scene in step 101 is performed with a ground laser scanner and/or a mobile laser measuring vehicle.
Preferably, the ground laser scanner is supported by a tripod and the tripod is levelled, so that the influence of the scanner's tilt angle can be ignored in the coarse registration of the scene.
Preferably, the installation angle of the laser lens of the mobile laser measuring vehicle is fixed throughout the scanning operation, and the overall transformation of the point cloud is realised through hardware information.
Preferably, the step 103 is specifically as follows:
extracting straight-line information from the planar point cloud using the random sample consensus method (RANSAC), and finally reshaping the two-dimensional scene with straight lines;
randomly extracting two non-parallel straight line segments n_a and m_a in scene A, then finding the matching straight line segments n_b and m_b in scene B, forming the paired line pairs {n_a, n_b} and {m_a, m_b};
after obtaining the two pairs of straight line segments, solving the rotation angle κ so that n_a ∥ n_b while m_a ∥ m_b is satisfied; after the rotation, obtaining the Euclidean distance d_a between p_a and p_b and decomposing it into the x and y directions to give the horizontal translations Δx and Δy of the translation parameters; according to RANSAC theory, a maximum number of solving iterations K can be determined;
in the solving process, a set of solutions {κ, Δx, Δy}_i is obtained for each line-pairing set meeting the conditions; each set of solutions is used to rotate and translate the two-dimensional straight line segments of scene A into scene B; for any straight line segment l'_a of the rotated scene A, a parallel straight line segment is sought in scene B whose distance to l'_a is less than a distance threshold; every matching straight line satisfying the condition is quantitatively scored, the scores of all matching lines are summed to give the score of that solution set, the top-α solutions by score are recorded as the candidate set for the final solution, and each solution in the set is refined using all of its corresponding straight line segment pairs.
Preferably, two straight lines are judged as a paired line pair when the following conditions are satisfied: the length difference of the corresponding segments is not greater than a given distance threshold; the angle difference between the two straight line segments is not greater than a given angle threshold; the height difference of the three-dimensional side facades corresponding to the segments is not greater than a given height threshold; and the distance between the intersection point p_a of n_a and m_a and the intersection point p_b of n_b and m_b is not greater than a given distance threshold, which is related to the actual distance between the two stations.
Preferably, step 104 is specifically as follows: the neighbourhood of a point q'_A is searched with a cylinder; under the same cylinder, the neighbourhood N'_qA is obtained in the corresponding point cloud A (the prime denotes the rotated cloud); under the constraint of this neighbourhood, a point cloud region N_qB can be obtained in point cloud B; if no corresponding N_qB exists, the condition is not satisfied at that location.
Preferably, the point q'_A, N'_qA and N_qB satisfy the following conditions: the height of q'_A is less than the prior height of the scanner mount; N'_qA and N_qB have a certain flatness, the difference of their normal vector directions is less than 2°, and the deviation of each normal vector from the zenith direction is less than 10°; N'_qA and N_qB have no contact point in the Z direction; an N'_qA and N_qB satisfying the above conditions are marked as a ground corresponding area.
Preferably, said step 106 implements a fine registration, in particular as follows:
obtaining the corresponding point q_B of q'_A in point cloud B through a point-to-point relation based on the normal vector direction; taking q'_A and q_B as centres, obtaining a sufficiently small neighbourhood in each point cloud and regarding it as a minimal face;
randomly acquiring N minimal faces in the scene, calculating their normal directions, pointing all normals towards the scene centre, and projecting them by direction onto a three-dimensional Gaussian sphere to obtain the normal-direction distribution of the minimal faces; on this basis, a uniform sampling technique is used to homogenise the normals while preserving their directional diversity;
first obtaining the attitude rotation parameters by solving an objective function; with the rotation parameters fixed, then obtaining the translation parameters by solving an objective function, where solving for the translation parameters means minimising the distance between corresponding faces;
once the iterations converge, the fine registration result of the two scenes is obtained.
Preferably, the attitude rotation parameter R̂ is obtained from the following objective function:

R̂ = argmin_R Σ_i ‖ R·n_{A,i} − n_{B,i} ‖²

where n_{A,i} and n_{B,i} respectively denote the normal vectors of the i-th pair of paired minimal faces;

the translation parameter t̂ is obtained from the following objective function:

t̂ = argmin_t Σ_i ( n_{B,i} · (q'_{A,i} + t − q_{B,i}) )²

where q'_{A,i} and q_{B,i} are the centres of the i-th pair of minimal faces, N'_qA is the neighbourhood of point q'_A obtained in the corresponding point cloud A, and N_qB is the point cloud region obtained in point cloud B.
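As an illustration, the translation step described above (minimising point-to-plane distances between corresponding faces once the rotation is fixed) reduces to a 3-parameter linear least-squares problem. The sketch below assumes numpy; the function name and data layout are illustrative, not taken from the patent.

```python
import numpy as np

def solve_translation(points_a, points_b, normals_b):
    """Least-squares translation minimising point-to-plane distances.

    For each correspondence the residual is n_B . (q_A + t - q_B);
    stacking the normals gives a linear least-squares problem in the
    three components of t. Illustrative sketch of the translation
    objective after the rotation has been fixed.
    """
    A = np.asarray(normals_b)                       # (N, 3) plane normals
    # Right-hand side: n_B . (q_B - q_A) for each pair.
    b = np.einsum('ij,ij->i', A, np.asarray(points_b) - np.asarray(points_a))
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t
```

With at least three correspondences whose normals span 3D space, the system is well-posed and recovers the exact offset for noise-free data.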
The beneficial technical effects obtained by the invention are as follows:
1) The invention makes full use of the ability of point clouds to reshape the three-dimensional structure of a scene, takes the specific attributes of the problem into account, provides an object-oriented registration scheme, and offers a new idea for the development of the point cloud registration problem. Specifically, by combining the characteristics of urban scanning targets with the working characteristics of ground laser scanning instruments (including mobile measurement systems), a multi-source dimension decomposition method is proposed; supported by this method, simultaneous reduction of the data dimension and the parameter dimension, efficient identification and extraction of features, and efficient convergence of the iterative solution are realised, breaking through the bottleneck of registration accuracy and speed in current large-scale urban point cloud scene registration;
2) the method does not need to manually identify the scene, provide a priori initial solution value and select points through subsequent human-computer interaction, is a set of full-automatic method from the input of original data to the acquisition of a three-dimensional fine point cloud data model, and is beneficial to reducing the expenditure of time, manpower and material resources in actual production;
3) The method reduces both the data dimension and the parameter search space: the coarse registration problem is reduced from the traditional 6-parameter rigid-body transformation in three-dimensional space to 1+3 parameters solved as independent two-dimensional and one-dimensional matching problems, which improves the solving efficiency remarkably;
4) Based on the facts that urban buildings are the main targets of urban scenes and that ground point cloud data describe building side facades richly, the invention proposes extracting two-dimensional lines of the building side facades, converting the data space from three dimensions to two and markedly improving search efficiency. In the fine registration process, a minimal-face registration mode is proposed according to the inherent characteristics of buildings, and the normal vectors are projected onto a three-dimensional Gaussian sphere, homogenising the transformation in each direction while preserving directional diversity; the building-based feature expression and description improve the stability of establishing pairing information during solving.
The foregoing description is only an overview of the technical solutions of the present application, so that the technical means of the present application can be more clearly understood and the present application can be implemented according to the content of the description, and in order to make the above and other objects, features and advantages of the present application more clearly understood, the following detailed description is made with reference to the preferred embodiments of the present application and the accompanying drawings.
The above and other objects, advantages and features of the present application will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
FIG. 1 is a flow chart of an automatic coarse-to-fine registration method for an urban point cloud based on multi-source dimension decomposition according to an embodiment of the present disclosure;
FIG. 2 is a schematic illustration of the proper installation of a ground laser scanner in an embodiment of the present disclosure;
FIG. 3 is a schematic view of the positions of the neighbourhood N'_qA and the point cloud region N_qB in an embodiment of the present disclosure;
FIG. 4 is a top view of raw point cloud data when not registered in one embodiment of the present disclosure;
FIG. 5 is a three-dimensional effect graph of point cloud scene registration obtained in an embodiment of the present disclosure;
fig. 6 is a detail view of a point cloud scene registration three-dimensional effect obtained in an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments.
Further, the present application may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion.
Example 1
As shown in the attached figure 1, the method for automatically registering the urban point cloud from coarse to fine based on the multi-source dimension decomposition comprises the steps of adjacent point cloud scene input, point cloud denoising, coarse registration, fine registration and three-dimensional point cloud scene registration result obtaining.
The coarse registration comprises projecting to a two-dimensional plane, extracting straight lines based on point cloud density, pairing straight line segments, obtaining a locally optimal solution set and solving the height-difference offset; the fine registration comprises generating corresponding minimal faces from corresponding points, screening the minimal faces, accurately determining the scene attitude and accurately solving the scene translation parameters.
The specific operation comprises the following steps:
101. Ground scanning of the urban scene is performed and point cloud data are collected; the scans of two adjacent stations must share an overlapping artificial (building) area.
The ground scanning of the urban scene is performed with a ground laser scanner and a mobile laser measuring vehicle.
When correctly erected, the ground laser scanner is supported by a tripod and the tripod is levelled, so that the influence of the scanner's tilt angles can be ignored in the coarse registration of a scene, as shown in FIG. 2; that is, the attitude angles ω and φ are taken as approximately zero. This is governed by the levelling precision, and the levelling precision of current mainstream tripods is better than 1°.
The installation angle of the laser lens of the mobile laser measuring vehicle is fixed throughout the scanning operation, and the overall transformation of the point cloud is realised through hardware information, e.g. R·P, where R is a 4×4 rotation matrix and P is the original point cloud collected by the mobile laser measuring vehicle.
In the registration problem, the following rigid-body transformation is solved:

P_B = r · P_A + t

where r is a 4×4 rotation matrix defined by the three Euler angles (ω, φ, κ), and t is the translation parameter. Combined with the scanner set-up characteristics described above, the search space of unknown parameters in the coarse registration problem is reduced from 6 parameters (exterior orientation elements: 3 attitude parameters, 3 position parameters) to 4 parameters (1 attitude parameter, 3 position parameters), i.e. r(0, 0, κ).
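For illustration, the reduced parameterisation r(0, 0, κ) plus three translations can be written as a single 4×4 homogeneous transform. The sketch below assumes numpy; the function names are illustrative, not from the patent.

```python
import numpy as np

def reduced_rigid_transform(kappa, dx, dy, dz):
    """4x4 rigid transform used in coarse registration.

    Because the scanner is levelled, roll and pitch are taken as zero,
    leaving only the heading angle kappa and three translations
    (r(0, 0, kappa) in the text). Illustrative sketch.
    """
    c, s = np.cos(kappa), np.sin(kappa)
    return np.array([
        [c,  -s,  0.0, dx],
        [s,   c,  0.0, dy],
        [0.0, 0.0, 1.0, dz],
        [0.0, 0.0, 0.0, 1.0],
    ])

def apply_transform(T, points):
    """Apply a 4x4 transform to an (N, 3) point array."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]
```

Only four of the six rigid-body parameters remain unknown, which is exactly the reduction described above.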
102. The original point cloud data are preprocessed and noise points are eliminated; the spatial three-dimensional point cloud is vertically projected onto a plane whose normal is the zenith direction, a point cloud density threshold ρ0 is set, and the building side facades are extracted from the projected two-dimensional planar point cloud using this threshold; the point-cloud projection density of building side facades is very distinctive.
Noise point and threshold setting:
A kd-Tree is built for the three-dimensional point cloud and the K nearest neighbours of each point are searched with a K-neighbourhood technique; the K (=20) neighbourhood of a point p is denoted N_p, and the corresponding distance set D_p = {d_1, d_2, …, d_k} is obtained at the same time; if the mean distance is greater than 1 metre (an optional parameter, adjusted to actual conditions), the point is judged to be noise.
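The denoising step above can be sketched with a kd-tree as follows; this assumes numpy and scipy are available, and the function name and defaults (k = 20, 1 m threshold) merely mirror the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_noise(points, k=20, dist_thresh=1.0):
    """Drop points whose mean distance to their k nearest neighbours
    exceeds dist_thresh (metres).

    Illustrative sketch of the kd-tree denoising step; both parameters
    can be tuned to the scene.
    """
    tree = cKDTree(points)
    # query returns the point itself as the first neighbour, so ask for k+1
    dists, _ = tree.query(points, k=k + 1)
    mean_dist = dists[:, 1:].mean(axis=1)
    return points[mean_dist <= dist_thresh]
```

An isolated outlier far from the scanned surfaces has a large mean neighbour distance and is filtered out, while dense surface points survive.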
After denoising, the point cloud is projected onto the two-dimensional plane, a two-dimensional kd-Tree is built, the K nearest neighbours of each point are searched, and the mean distance is recorded as the point cloud density value. The density values are clustered, high-density areas are judged to be side-facade projection points, and the point cloud density threshold ρ0 is set accordingly.
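A hedged sketch of the density-based facade extraction: here density is taken as the inverse of the mean k-NN distance (the text records the distance itself), and ρ0 is crudely estimated from the median instead of being obtained by clustering; numpy and scipy are assumed, and all names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def facade_points_2d(points3d, k=20, rho0=None):
    """Project the cloud onto the horizontal plane and keep high-density
    points, which correspond to building side facades.

    Points stacked vertically on a wall collapse onto nearly identical
    2D positions, so their projected density is far higher than that of
    ground points. Sketch only: the patent sets rho0 by clustering.
    """
    xy = points3d[:, :2]                      # vertical projection
    tree = cKDTree(xy)
    dists, _ = tree.query(xy, k=k + 1)
    density = 1.0 / (dists[:, 1:].mean(axis=1) + 1e-12)
    if rho0 is None:
        rho0 = np.median(density) * 2.0       # crude stand-in for clustering
    return xy[density >= rho0]
```

On a scene mixing a vertical wall with sparse ground points, only the wall's projected points pass the threshold.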
103. In the scanning of urban scenes, artificial buildings are the main objects, and ground scanning data are characterised by detailed descriptions of building side facades. Straight-line detection and segmentation of the side-facade projection point cloud are performed on the two-dimensional plane; the κ parameter and the horizontal translation parameters in the (x, y) direction are solved using paired straight line segments, achieving two-dimensional registration of the scene.
The method comprises the following specific steps:
Straight-line information is extracted from the planar point cloud using the random sample consensus method (RANSAC), and the two-dimensional scene is finally reshaped with straight lines.
Two non-parallel straight line segments n_a and m_a are randomly extracted in scene A; the matching straight line segments n_b and m_b are then found in scene B, forming the paired line pairs {n_a, n_b} and {m_a, m_b}.
When searching for and judging a match, the following conditions must be met: the length difference of the corresponding segments is not greater than a given distance threshold; the angle difference between the two straight line segments is not greater than a given angle threshold; the height difference of the three-dimensional side facades corresponding to the segments is not greater than a given height threshold; and the distance between the intersection point p_a of n_a and m_a and the intersection point p_b of n_b and m_b is not greater than a given distance threshold, which is related to the actual distance between the two stations. When all of the above conditions are satisfied, the two lines are judged to be a paired line pair.
After obtaining the two pairs of straight line segments, the rotation angle κ is solved so that n_a ∥ n_b while m_a ∥ m_b is satisfied; after the rotation, the Euclidean distance d_a between p_a and p_b is obtained and decomposed into the x and y directions, giving the horizontal translations Δx and Δy of the translation parameters. According to RANSAC theory, a maximum number of solving iterations K can be determined. However, the straight line segments used in this embodiment are obtained by projection of the building side facades, so the number of straight line segments in a single-station cloud is very limited (<100); by judging the significance of the segments (i.e. keeping only segments longer than a certain length), the search dimensionality can be reduced further, so K can simply be set to the full combination of the straight line segments, i.e. K = C(h, 2), where h is the number of straight line segments.
In the solving process, a set of solutions {κ, Δx, Δy}_i is obtained for each line-pairing set meeting the conditions. Each set of solutions is used to rotate and translate the two-dimensional straight line segments of scene A into scene B. For any straight line segment l'_a of the rotated scene A (the prime denotes the rotated segment), a parallel straight line segment is sought in scene B whose distance to l'_a is less than a distance threshold (the distance threshold controls the precision of the two-dimensional matching and can generally be set to 2 cm; meanwhile, considering measurement errors, the parallel relation can be quantified as the included angle of the two segments being less than 2°). Every matching straight line satisfying the condition is quantitatively scored, and the scores of all matching lines are summed to give the score s_i (> 0) of that solution set; the top-α solutions by score (α = 3–5 is suggested) are recorded as the candidate set for the final solution, and each solution in the set is refined using all of its corresponding straight line segment pairs.
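The solution of {κ, Δx, Δy} from one qualifying line pairing can be sketched as follows, assuming numpy; lines are represented as (point, direction) tuples, the directions are assumed consistently oriented, and all names are illustrative rather than the patent's own.

```python
import numpy as np

def line_intersection(p1, d1, p2, d2):
    """Intersect two 2D lines given as point + direction (non-parallel)."""
    # Solve p1 + t*d1 = p2 + s*d2 for (t, s).
    A = np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]])
    b = np.array([p2[0] - p1[0], p2[1] - p1[1]])
    t, _ = np.linalg.solve(A, b)
    return np.asarray(p1) + t * np.asarray(d1)

def solve_2d_registration(na, ma, nb, mb):
    """Solve {kappa, dx, dy} from the paired line pairs {na, nb}, {ma, mb}.

    kappa aligns the direction of na with nb (for a valid pairing ma
    then also aligns with mb); the translation moves the rotated
    intersection point p_a onto p_b. Directions are assumed consistently
    oriented; in practice kappa and kappa + pi may both need testing.
    """
    (pa_pt, da), (pm_pt, dm) = na, ma
    (pb_pt, db), (qm_pt, em) = nb, mb
    kappa = np.arctan2(db[1], db[0]) - np.arctan2(da[1], da[0])
    c, s = np.cos(kappa), np.sin(kappa)
    R = np.array([[c, -s], [s, c]])
    pa = line_intersection(pa_pt, da, pm_pt, dm)
    pb = line_intersection(pb_pt, db, qm_pt, em)
    dx, dy = pb - R @ pa
    return kappa, dx, dy
```

Applying a known rotation and translation to a pair of scene-A lines and solving recovers the same parameters, which is the per-hypothesis step that RANSAC then scores over all pairings.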
104. After step 103 is completed, the scene has been registered in two dimensions; returning to three-dimensional space, only the height-difference registration in the vertical direction remains. Since ground laser data contain a large amount of ground point cloud data, the height difference is quickly calculated from the overlapping ground points; after N ground corresponding areas are randomly obtained, the average height difference is taken as the vertical translation value Δz.
The specific steps are as follows: the neighborhood of a point q′_A is searched with a cylinder, yielding the neighborhood N′_qA in the corresponding point cloud A (the prime marks the rotated data); under the same cylinder, a point cloud region N_qB may be obtained in point cloud B under the constraint of the neighborhood; if no corresponding N_qB exists, the condition is not satisfied at this location.
For q′_A, N′_qA and N_qB, the following conditions apply: the height of q′_A should be less than the prior height of the scanner platform; N′_qA and N_qB must each have a certain flatness, the difference between their normal vector directions must be less than 2°, and the deviation of each normal vector from the zenith direction must be less than 10°; N′_qA and N_qB must not touch in the Z direction. As shown in figure 3, an N′_qA and N_qB satisfying the above conditions are marked as a ground correspondence region. After N (> 10) ground correspondence regions are obtained at random, the mean of their height differences is taken as the vertical translation Δz.
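A minimal numerical sketch of the Δz computation above, under the assumption that each candidate ground region pair has already been extracted as two (M, 3) point arrays. The helper names, the PCA normal estimate, and the use of per-patch mean heights are illustrative choices; the 2° / 10° normal tests and the N > 10 requirement come from the text.

```python
import numpy as np

def patch_normal(patch):
    """PCA plane normal of an (M, 3) patch (smallest-variance direction),
    oriented toward the zenith (+Z)."""
    q = patch - patch.mean(axis=0)
    _, _, vt = np.linalg.svd(q, full_matrices=False)
    n = vt[-1]
    return n if n[2] >= 0 else -n

def is_ground_pair(pa, pb, max_pair_deg=2.0, max_zenith_deg=10.0):
    """Normal-direction conditions for a candidate ground region pair:
    normals differ by < 2 deg and each deviates < 10 deg from zenith."""
    na, nb = patch_normal(pa), patch_normal(pb)
    z = np.array([0.0, 0.0, 1.0])
    ang = lambda u, v: np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))
    return (ang(na, nb) < max_pair_deg
            and ang(na, z) < max_zenith_deg
            and ang(nb, z) < max_zenith_deg)

def vertical_offset(ground_pairs, n_min=10):
    """Mean height difference over N (> n_min) ground region pairs,
    used as the vertical translation dz."""
    if len(ground_pairs) <= n_min:
        raise ValueError("need more than %d ground region pairs" % n_min)
    diffs = [pb[:, 2].mean() - pa[:, 2].mean() for pa, pb in ground_pairs]
    return float(np.mean(diffs))
```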
105. For each solution in the candidate set obtained in step 103, the operation of step 104 is completed; a nearest-point test is then performed on a random subset of the scene, the best solution satisfying the conditions is selected as the final solution, and the coarse registration of the two adjacent point cloud scenes is complete.
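The nearest-point test on a random scene subset can be sketched as below; this is a brute-force check meant only to illustrate the idea, with hypothetical names and an assumed inlier distance.

```python
import numpy as np

def nearest_point_score(subset_a, cloud_b, R, t, inlier_dist=0.05):
    """Apply a candidate rigid transform (R, t) to a random subset of
    scene A and return the fraction of points whose nearest neighbor in
    scene B lies within inlier_dist (brute force; fine for small subsets)."""
    moved = subset_a @ R.T + t
    # all pairwise squared distances: subset points vs. cloud_b points
    d2 = ((moved[:, None, :] - cloud_b[None, :, :]) ** 2).sum(axis=2)
    nearest = np.sqrt(d2.min(axis=1))
    return float((nearest < inlier_dist).mean())
```

The candidate solution with the highest score would then be kept as the final coarse-registration solution.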
106. The computation is divided into two steps, attitude alignment and position alignment. Exploiting the large amount of planar surface information on buildings, fine registration is carried out with surfaces as primitives, yielding the scene fine-registration result.
The method comprises the following specific steps:
Based on the normal vector direction, the point-to-point correspondence is used to obtain the point q_B in point cloud B corresponding to q′_A; taking q′_A and q_B as centers, a sufficiently small neighborhood is extracted in each point cloud and regarded as an extremely small surface, i.e. a polar face.
N (> 50 suggested) polar faces are acquired at random in the scene, their normal directions are computed, all normals are oriented toward the scene center point, and the normal directions are projected onto a three-dimensional Gaussian sphere to obtain the normal-direction distribution of the polar faces. Sampling uniformly over the Gaussian sphere keeps the acting force in every direction consistent when the attitude is determined.
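The polar-face normal computation and Gaussian-sphere projection can be sketched as follows; the PCA normal estimate and the function name are illustrative choices, and the orientation step implements the "point all normals toward the scene center" rule from the text.

```python
import numpy as np

def gaussian_sphere_normals(faces, scene_center):
    """For each polar face (a tiny (M, 3) neighborhood), compute its PCA
    plane normal, orient it toward the scene center, and return the unit
    vectors -- i.e. the points of the Gaussian-sphere projection."""
    normals = []
    for face in faces:
        centroid = face.mean(axis=0)
        q = face - centroid
        _, _, vt = np.linalg.svd(q, full_matrices=False)
        n = vt[-1]                       # smallest-variance direction
        if np.dot(scene_center - centroid, n) < 0.0:
            n = -n                       # orient toward the scene center
        normals.append(n / np.linalg.norm(n))
    return np.array(normals)
```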
First, the attitude-determining rotation parameters are obtained by solving an objective function; once solved, the rotation parameters are held fixed, and the translation parameters are obtained by solving a second objective function, whose solution minimizes the distance between corresponding surfaces.
The objective function of the attitude-determining rotation parameter $\hat{R}$ is:

$$\hat{R} = \underset{R}{\arg\min} \sum_{i=1}^{N} \left\| R\,\vec{n}^{\,\prime}_{a,i} - \vec{n}_{b,i} \right\|^{2}$$

where $\vec{n}^{\,\prime}_{a,i}$ and $\vec{n}_{b,i}$ denote the normal vectors of the $i$-th pair of matched polar faces.

The objective function of the translation parameter $\hat{t}$ is:

$$\hat{t} = \underset{t}{\arg\min} \sum_{i=1}^{N} \left( \vec{n}_{b,i} \cdot \left( \bar{c}(N^{\prime}_{qA,i}) + t - \bar{c}(N_{qB,i}) \right) \right)^{2}$$

where $N^{\prime}_{qA}$ is the neighborhood obtained around point $q^{\prime}_{A}$ in the corresponding point cloud A, $N_{qB}$ is the point cloud region obtained in point cloud B, and $\bar{c}(\cdot)$ denotes the centroid of a region.
To prevent the translation parameters from falling into a local minimum, the selected polar faces should cover as diverse a set of normal directions as possible.
When the iteration converges, the fine-registration reconstruction results of the two scenes are obtained.
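The patent states the two objective functions only in image form, so the following is a hedged sketch under one common reading: the rotation is the least-squares alignment of the paired polar-face normals (Wahba's problem, solvable with the Kabsch/SVD construction), and the translation minimizes the squared point-to-plane distances between corresponding faces. All names are illustrative; the diversity requirement on normal directions corresponds to the matrix of the normal equations being well conditioned.

```python
import numpy as np

def rotation_from_normals(na, nb):
    """Least-squares rotation R with R @ na[i] ~ nb[i] for paired unit
    normals (Kabsch/SVD solution of Wahba's problem)."""
    H = na.T @ nb                        # 3x3 correlation of the pairs
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T                # D guards against reflections

def translation_from_faces(ca, cb, nb):
    """Least-squares translation t minimizing
    sum_i (nb_i . (ca_i + t - cb_i))^2, given already-rotated face
    centroids ca, target centroids cb and target normals nb; the normal
    equations require the normals to span all three directions."""
    A = sum(np.outer(n, n) for n in nb)
    b = sum(n * np.dot(n, q - p) for p, q, n in zip(ca, cb, nb))
    return np.linalg.solve(A, b)
```

Iterating correspondence search, rotation and translation until convergence mirrors the two-step attitude/position alignment described above.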
Fig. 4 shows a top view of the collected original point cloud data before registration, which reflects the characteristics of the urban scanned objects and the working characteristics of the terrestrial scanner. Applying the coarse-to-fine automatic registration method for urban point clouds based on multi-source dimension decomposition of this embodiment, the resulting three-dimensional view of the registered point cloud scene is shown in fig. 5, and a detail view of the registered scene is shown in fig. 6.
As can be seen from figs. 5 and 6, the point cloud registration is highly accurate, the final registration effect is good, and the registered three-dimensional view is close to the actual scene. The method makes full use of the ability of point clouds to reconstruct the three-dimensional structure of a scene, takes the specific attributes of the problem into account, and provides an object-oriented registration scheme, offering a new line of thought for the point cloud registration problem. Specifically, by combining the characteristics of urban scanned objects with the working characteristics of terrestrial laser scanners (including mobile measurement systems), a multi-source dimension decomposition method is proposed; with its support, the simultaneous reduction of data dimension and parameter dimension, the efficient identification and extraction of features, and the efficient convergence of the iterative solution are achieved, breaking through the accuracy and speed bottlenecks of current large-scale urban point cloud scene registration.
The coarse-to-fine automatic registration method for urban point clouds based on multi-source dimension decomposition is fully automatic from raw data input to acquisition of the fine three-dimensional point cloud data model, requiring no manual scene identification, no a priori initial solution, and no subsequent interactive point selection, which helps reduce the expenditure of time, labor and material in actual production.
The above description is only a preferred embodiment of the present invention and is not intended to limit its scope; those skilled in the art may make various modifications, changes, substitutions and combinations without departing from the principle and spirit of the invention, and such variations shall fall within its scope of protection.

Claims (10)

1. A coarse-to-fine automatic registration method for urban point cloud based on multi-source dimension decomposition is characterized by comprising the following steps:
101. performing ground scanning of an urban scene and collecting point cloud data, wherein the scans of two adjacent stations must share an overlapping portion containing man-made buildings;
102. preprocessing the original point cloud data to eliminate noise points, vertically projecting the spatial three-dimensional point cloud data onto a plane whose normal is the zenith direction, setting a point cloud density threshold ρ_0, and extracting the building side facades from the projected two-dimensional planar point cloud using this threshold;
103. performing straight-line detection and segmentation on the side-facade projection point cloud in the two-dimensional plane, solving the rotation parameter κ and the horizontal translation parameters in the (x, y) directions from paired straight line segments, thereby achieving two-dimensional registration of the scenes;
104. rapidly calculating the height difference using the overlapping ground points; after N ground correspondence regions are obtained at random, taking the mean of the height differences as the vertical translation Δz;
105. completing step 104 for each solution of the candidate set obtained in step 103, performing a nearest-point test on a random subset of the scene, and selecting the best solution satisfying the conditions as the final solution, whereby the two adjacent point cloud scenes complete coarse registration;
106. combining the large amount of surface information on buildings, performing fine registration with surfaces as primitives to obtain the scene fine-registration result.
2. The coarse-to-fine automatic registration method for urban point clouds based on multi-source dimension decomposition according to claim 1, wherein the ground scanning of the urban scene in step 101 is performed with a terrestrial laser scanner or a mobile laser measuring vehicle.
3. The coarse-to-fine automatic registration method for urban point clouds based on multi-source dimension decomposition according to claim 2, wherein the terrestrial laser scanner is supported by a tripod on which a leveling operation is performed, so that the influence of the scanner's tilt angle is negligible in the scene coarse registration.
4. The coarse-to-fine automatic registration method for urban point clouds based on multi-source dimension decomposition according to claim 2, wherein the installation angle of the laser head of the mobile laser measuring vehicle is fixed throughout the scanning operation, and the overall transformation of the point cloud is obtained from hardware information.
5. The method for automatically registering the urban point cloud from coarse to fine according to claim 1, wherein the step 103 is as follows:
extracting straight line information from the planar point cloud with a random sample consensus (RANSAC) method, and finally remodeling the two-dimensional scene with straight lines;
randomly extracting two non-parallel straight line segments n_a and m_a in scene A, then finding the matching straight line segments n_b and m_b in scene B, forming the paired line pairs {n_a, n_b} and {m_a, m_b};
after the two sets of paired straight line segments are obtained, solving the rotation angle κ such that n_a ∥ n_b while m_a ∥ m_b is satisfied; after the rotation is completed, obtaining the Euclidean distance d_a between the intersection points p_a and p_b and decomposing it into the x and y directions to give the horizontal translations Δx and Δy; according to the theory underlying RANSAC, a maximum number of solving iterations K can be obtained;
in the solving process, each set of line pairs that meets the conditions yields one solution {κ, Δx, Δy}_i; each solution is used to rotate and translate the two-dimensional straight line segments of scene A into scene B; for any straight line segment l′_a of the rotated scene A, a parallel straight line segment is sought in scene B whose distance to l′_a is below a distance threshold; each matching straight line found under these conditions is scored quantitatively, the scores of all matching lines are summed to give the score of the solution, the top α solutions by score are recorded as the candidate set of the final solution, and each of these solutions is refined using all of its corresponding straight line segment pairs.
6. The coarse-to-fine automatic registration method for urban point clouds based on multi-source dimension decomposition according to claim 5, wherein two line segments are determined to form a paired line pair when the following conditions are satisfied: the length difference of the corresponding line segments is not greater than a given distance threshold; the angle difference between the two straight line segments is not greater than a given angle threshold; the height difference of the three-dimensional side facades corresponding to the straight line segments is not greater than a given height threshold; and the distance between the intersection point p_a of n_a and m_a and the intersection point p_b of n_b and m_b is not greater than a given distance threshold, which is related to the actual distance between the two stations.
7. The coarse-to-fine automatic registration method for urban point clouds based on multi-source dimension decomposition according to claim 1, wherein step 104 is specifically as follows: the neighborhood of a point q′_A is searched with a cylinder, yielding the neighborhood N′_qA in the corresponding point cloud A (the prime marks the rotated data); under the same cylinder, a point cloud region N_qB may be obtained in point cloud B under the constraint of the neighborhood; if no corresponding N_qB exists, the condition is not satisfied at this location.
8. The method of claim 7, wherein for q′_A, N′_qA and N_qB the following conditions hold: the height of q′_A should be less than the prior height of the scanner platform; N′_qA and N_qB each have a certain flatness, the difference between their normal vector directions is less than 2°, and the deviation of each normal vector from the zenith direction is less than 10°; N′_qA and N_qB do not touch in the Z direction; an N′_qA and N_qB satisfying the above conditions are marked as a ground correspondence region.
9. The coarse-to-fine automatic registration method for urban point clouds based on multi-source dimension decomposition according to claim 1, wherein the fine registration of step 106 is specifically as follows:
based on the normal vector direction, the point-to-point correspondence is used to obtain the point q_B in point cloud B corresponding to q′_A; taking q′_A and q_B as centers, a sufficiently small neighborhood is extracted in each point cloud and regarded as an extremely small surface, i.e. a polar face;
N polar faces are acquired at random in the scene, their normal directions are computed, all normals are oriented toward the scene center point, and the normal directions are projected onto a three-dimensional Gaussian sphere to obtain the normal-direction distribution of the polar faces;
first, the attitude-determining rotation parameters are obtained by solving an objective function; with the rotation parameters then held fixed, the translation parameters are obtained by solving a second objective function, whose solution minimizes the distance between corresponding surfaces;
when the iteration converges, the fine-registration reconstruction results of the two scenes are obtained.
10. The method of claim 9, wherein the objective function of the attitude-determining rotation parameter $\hat{R}$ is:

$$\hat{R} = \underset{R}{\arg\min} \sum_{i=1}^{N} \left\| R\,\vec{n}^{\,\prime}_{a,i} - \vec{n}_{b,i} \right\|^{2}$$

where $\vec{n}^{\,\prime}_{a,i}$ and $\vec{n}_{b,i}$ respectively denote the normal vectors of the $i$-th pair of matched polar faces;

the objective function of the translation parameter $\hat{t}$ is:

$$\hat{t} = \underset{t}{\arg\min} \sum_{i=1}^{N} \left( \vec{n}_{b,i} \cdot \left( \bar{c}(N^{\prime}_{qA,i}) + t - \bar{c}(N_{qB,i}) \right) \right)^{2}$$

where $N^{\prime}_{qA}$ is the neighborhood obtained around point $q^{\prime}_{A}$ in the corresponding point cloud A, $N_{qB}$ is the point cloud region obtained in point cloud B, and $\bar{c}(\cdot)$ denotes the centroid of a region.
CN202010778539.5A 2020-08-05 2020-08-05 Automatic registration method for coarse-to-fine urban point cloud based on multi-source dimension decomposition Pending CN111899291A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010778539.5A CN111899291A (en) 2020-08-05 2020-08-05 Automatic registration method for coarse-to-fine urban point cloud based on multi-source dimension decomposition

Publications (1)

Publication Number Publication Date
CN111899291A true CN111899291A (en) 2020-11-06

Family

ID=73247311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010778539.5A Pending CN111899291A (en) 2020-08-05 2020-08-05 Automatic registration method for coarse-to-fine urban point cloud based on multi-source dimension decomposition

Country Status (1)

Country Link
CN (1) CN111899291A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090232388A1 (en) * 2008-03-12 2009-09-17 Harris Corporation Registration of 3d point cloud data by creation of filtered density images
CN103679741A (en) * 2013-12-30 2014-03-26 北京建筑大学 Method for automatically registering cloud data of laser dots based on three-dimensional line characters
CN104395932A (en) * 2012-06-29 2015-03-04 三菱电机株式会社 Method for registering data
DE102016116572A1 (en) * 2016-09-05 2018-03-08 Navvis Gmbh Alignment of point clouds to the modeling of interiors
CN110009745A (en) * 2019-03-08 2019-07-12 浙江中海达空间信息技术有限公司 According to plane primitive and model-driven to the method for data reduction plane
CN110443836A (en) * 2019-06-24 2019-11-12 中国人民解放军战略支援部队信息工程大学 A kind of point cloud data autoegistration method and device based on plane characteristic
US20200020072A1 (en) * 2018-07-10 2020-01-16 Raytheon Company Multi-source image fusion
US20200020116A1 (en) * 2018-07-10 2020-01-16 Raytheon Company Synthetic image generation from 3d-point cloud

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114399550A (en) * 2022-01-18 2022-04-26 中冶赛迪重庆信息技术有限公司 Automobile saddle extraction method and system based on three-dimensional laser scanning
CN116740156A (en) * 2023-08-10 2023-09-12 西南交通大学 Registration method of arbitrary pose construction element based on Gaussian sphere and principal plane distribution
CN116740156B (en) * 2023-08-10 2023-11-03 西南交通大学 Registration method of arbitrary pose construction element based on Gaussian sphere and principal plane distribution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination