CN117541634A - Point cloud registration method based on error variable model description under linear characteristic constraint - Google Patents

Point cloud registration method based on error variable model description under linear characteristic constraint

Info

Publication number
CN117541634A
CN117541634A (application CN202410027444.8A)
Authority
CN
China
Prior art keywords
quaternion
formula
straight line
parameters
representing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410027444.8A
Other languages
Chinese (zh)
Other versions
CN117541634B (en)
Inventor
王永波
郑南山
卞正富
张秋昭
杨敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology CUMT
Original Assignee
China University of Mining and Technology CUMT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology CUMT filed Critical China University of Mining and Technology CUMT
Priority to CN202410027444.8A priority Critical patent/CN117541634B/en
Publication of CN117541634A publication Critical patent/CN117541634A/en
Application granted granted Critical
Publication of CN117541634B publication Critical patent/CN117541634B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806: Fusion at the level of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds


Abstract

The invention provides a point cloud registration method based on errors-in-variables (EIV) model description under straight-line feature constraints, which comprises the following steps: step 1, extracting the straight-line features on the surfaces of buildings or structures from the collected LiDAR point cloud data, and describing the extracted straight-line features in three-dimensional space with Plücker coordinates; step 2, realizing the mathematical description and expression of the spatial similarity transformation of the straight-line features with dual quaternions; and step 3, selecting the straight-line features as the primitives of point cloud registration, describing the errors of the straight-line features extracted at each adjacent LiDAR measuring station with the EIV model, establishing a model for solving the spatial similarity transformation parameters, and realizing the registration and fusion of the point cloud data between two adjacent LiDAR measuring stations based on the calculated spatial similarity transformation parameters. The invention describes the extraction errors of the registration primitives with the EIV model, and effectively guarantees the asymptotic unbiasedness of the solved registration parameters.

Description

Point cloud registration method based on error variable model description under linear characteristic constraint
Technical Field
The invention belongs to the field of computer three-dimensional data processing, and particularly relates to a point cloud registration method based on error variable model description under linear feature constraint.
Background
With the advent of LiDAR technology and its successful application in production, the registration of LiDAR point clouds, as a necessary means of fusing the point clouds of adjacent measuring stations, has attracted wide attention from researchers. The essence of point cloud registration is to seek and establish the correspondence of same-name features between the point clouds of adjacent stations and, based on a spatial similarity transformation model (which simplifies to a rigid transformation model when scaling is not considered), to solve the 7 parameters describing the relative relationship between the coordinate references of the adjacent stations, namely: the rotation angles (Δα, Δβ, Δγ) around the three coordinate axes x, y, and z, the three coordinate translations (ΔX, ΔY, ΔZ), and the scale factor μ, thereby realizing a unified description and expression of the coordinate reference.
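As an illustration of the seven-parameter similarity model described above, the following minimal sketch applies x′ = μ·R·x + t to a set of points. It assumes the common Rz·Ry·Rx composition for the rotation angles; the function and argument names are ours, not the patent's.

```python
import numpy as np

def similarity_transform(points, angles, t, mu):
    """Apply the 7-parameter spatial similarity transform x' = mu * R * x + t.

    points: (N, 3) array of coordinates; angles: rotations (da, db, dg)
    about the x, y, z axes in radians (Rz @ Ry @ Rx convention assumed);
    t: translation vector; mu: scale factor. Illustrative sketch only.
    """
    da, db, dg = angles
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(da), -np.sin(da)],
                   [0, np.sin(da), np.cos(da)]])
    Ry = np.array([[np.cos(db), 0, np.sin(db)],
                   [0, 1, 0],
                   [-np.sin(db), 0, np.cos(db)]])
    Rz = np.array([[np.cos(dg), -np.sin(dg), 0],
                   [np.sin(dg), np.cos(dg), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx
    pts = np.asarray(points, dtype=float)
    return mu * pts @ R.T + np.asarray(t, dtype=float)
```

Registration is the inverse problem: estimating these seven parameters from same-name features of two stations.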
According to the choice of registration primitives, existing LiDAR point cloud registration algorithms can be divided into 4 classes: LiDAR point cloud registration based on same-name point matching, on same-name straight-line feature matching, on same-name plane feature matching, and on the joint constraint of point, line, and plane features. Currently, most existing research focuses on LiDAR point cloud registration based on same-name point matching; however, when point features alone are selected as registration primitives, occlusion may in some cases prevent them from providing sufficient condition constraints for solving the registration parameters. It should be noted that, compared with point features, the extraction of straight-line and plane features under the same sampling conditions is less affected by the sampling density, and their precision is significantly better than that of point features. However, up to now, straight-line and plane features have not been widely used in LiDAR point cloud registration. The reason is that the mathematical expression of straight lines and planes in three-dimensional space is usually realized by combining a direction vector (or normal vector) with a point on the feature; different choices of that point lead to different expression forms, which increases the difficulty and complexity of implementing the algorithm.
According to the way feature sampling errors are described in the registration process, existing LiDAR point cloud registration algorithms can be divided into algorithms based on the least squares constraint and algorithms based on the total least squares constraint. The former describe the sampling errors of the registration primitives with a Gauss-Markov model, while the latter describe them with an EIV (Errors-In-Variables) model. Owing to the sampling and feature extraction processes, errors may exist in all registration primitives, so the influence of the errors of all primitives on the result needs to be considered comprehensively when solving the registration parameters; however, the Gauss-Markov model describes the errors of only one of the two sets of registration primitives, which obviously does not accord with the actual situation. In comparison, the EIV model takes the errors of both sets of registration primitives into consideration and guarantees the asymptotic unbiasedness of the registration parameters.
In summary, it is necessary to introduce more types of features, such as straight lines or planes, to establish the correspondence of same-name features between adjacent measuring stations, provide more condition constraints for LiDAR point cloud registration, solve the registration problem of adjacent-station LiDAR point clouds under complex conditions, achieve high-precision fusion of multi-station LiDAR point clouds, and provide reliable data assurance for the rapid and high-fidelity reconstruction of geospatial entities and their environmental information.
Disclosure of Invention
The invention aims to: aiming at the defects of the prior art, the invention provides a point cloud registration method based on error variable (EIV) model description under linear characteristic constraint, which comprises the following steps:
step 1, based on collected LiDAR point cloud data, linear features on the surface of a building or a structure are extracted through man-machine interaction, plane fitting and intersection, and the extracted linear features in a three-dimensional space are described by utilizing Plucker coordinates;
step 2, realizing mathematical description and expression of a linear characteristic space similarity transformation process by using dual quaternions;
and 3, describing errors of the linear features extracted by each adjacent LiDAR measuring station by using an EIV model by taking registration and fusion of point cloud data between the adjacent two LiDAR measuring stations as a starting point of research, establishing a space similarity transformation parameter solving model based on constraint of a total least squares criterion, and realizing registration and fusion of the point cloud data between the adjacent two LiDAR measuring stations based on the calculated space similarity transformation parameters.
The step 1 comprises the following steps:
step 1-1, establishing a corresponding relation between a unit quaternion and a rotation matrix;
And step 1-2, describing the linear feature in the three-dimensional space by utilizing the Plucker coordinate, and carrying out regularization treatment on the Plucker coordinate of the linear feature.
Step 1-1 includes: quaternionIs composed of a real partq 0 And three imaginary partsq 1q 2q 3 A four-element group is formed:
(1)
wherein,,/>、/>、/>respectively represent the three-dimensional space rectangular coordinate system and the coordinate systemxA shaft(s),yA shaft(s),zA unit vector having the same axial direction;
setting a rotation passing through an origin in three-dimensional spaceThe unit direction vector of the shaft isRotation angle around rotation axisθIs expressed as a quaternion:
(2)
wherein T represents a matrix transpose;、/>、/>respectively shown inxA shaft(s),yA shaft(s),zProjection length in the axial direction;
following the expression of equation (1), the quaternion representing the rotation transformation in three-dimensional space is expressed as:
(3)
wherein,corresponds to +.>,/>Corresponds to +.about.in formula (1)>
Known three-dimensional space feature sampling pointCorresponding quaternion->Represented as,/>Is transformed into->The process of (1) is expressed as follows:
(4)
wherein,、/>、/>respectively represent sampling points +.>A kind of electronic devicexCoordinates of,yCoordinates and method for producing the samezCoordinates of->Representing quaternion +.>Corresponding to the expression:
(5)
according to the operation rule of the quaternion, the formula (4) is expressed as a matrix:
(6)
Wherein,,/>
according to a rotation matrixRAnd unit quaternionIs used for the corresponding relation of the (a),Rthe expression is as follows:
(7)
wherein,Iis a unit matrix of 3*3 and is formed by a matrix,representing an antisymmetric matrix expressed in the form of
The step 1-2 comprises the following steps: given any two points p1 = (x1, y1, z1)^T and p2 = (x2, y2, z2)^T in space, the direction vector and moment of the straight line passing through them are calculated according to the following formula:

l = p2 − p1, m = p1 × p2 (8)

wherein l represents the direction vector of the straight line and m represents the moment of the straight line; m is perpendicular to the plane containing the straight line and passing through the origin;

if two other different points p3 and p4 are taken on the straight line, according to linear coordinate substitution they are respectively written as:

p3 = p1 + s1·(p2 − p1), p4 = p1 + s2·(p2 − p1) (9)

wherein s1 and s2 are multiplication factors, the two being usually unequal; based on formula (9), the direction vector l′ and moment m′ of the straight line are obtained as:

l′ = p4 − p3 = (s2 − s1)·l, m′ = p3 × p4 = (s2 − s1)·m (10)

analysis shows that the direction vector and moment expressed by formula (10) differ from those of formula (8) only by the common scaling factor (s2 − s1); in order to avoid inconsistent Plücker coordinates of the same straight line caused by non-uniform endpoint sampling, the Plücker coordinates of the straight line are regularized:

all elements of the Plücker line coordinates are divided by the modulus of the direction vector of the line, and the result is called the regularized Plücker line coordinates;

according to the relationship between vectors and unit quaternions, the regularized Plücker coordinates are extended to 8 dimensions according to the following formula:

L = q_l + ε·q_m (11)

wherein L is the regularized Plücker coordinate extended to 8 dimensions, ε is the dual unit (ε² = 0, ε ≠ 0), q_l is the quaternion corresponding to the direction vector of the straight line, with the specific expression q_l = 0 + lx·i + ly·j + lz·k, and q_m is the quaternion corresponding to the moment m of the regularized Plücker coordinates, with the expression q_m = 0 + mx·i + my·j + mz·k.
The step 2 comprises the following steps:
step 2-1, describing a space similarity transformation process of the linear characteristics by using dual quaternions;
and 2-2, describing a space similarity transformation process of the linear characteristics by using vector algebra.
Step 2-1 includes: dual quaternionThe method is obtained by combining quaternion and dual, and has the following forms:
(12)
wherein,and->Are quaternions, respectively called the real part and the dual part of the dual quaternion;
the dual quaternion and quaternion also have a similar representation:
(13)
wherein the dual vectorAnd dual angle->The expression is as follows:
(14)
wherein,a unit direction vector representing the rotation axis and translation; />Indicating the rotation axis passing byIs a position of (2);dindicating>A translation distance;
the spatial rigid transformation process using dual quaternion representation is: first, the original coordinate system is followed Is shifted by a distancedThe coordinate system is then followed by +.>And the direction vector is +.>Is rotated by an angle +.>Thereby completing the spatial rigid transformation of the feature;
when representing rigid body transformation operations in space using dual quaternions, the following two conditions need to be satisfied:
(15)
the similar transformation process of the straight line L in the three-dimensional space is expressed as follows by using dual quaternions:
(16)
wherein,、/>mathematical expressions corresponding to straight lines L before and after spatial similarity transformation are respectively represented, and the mathematical expressions are +.>Representing the corresponding dual quaternion of the spatial rigid transformation, +.>Is->Conjugation of->The specific expression form of (2) is as follows:
(17)
considering the scale difference between the two coordinate systems before and after transformation, substituting the linear characteristic expression corresponding to the formula (11) and the dual quaternion expression corresponding to the formula (12) into the formula (16), and expressing the real part and the dual part of the Plucker coordinate to obtain:
(18)
wherein,and->Unit direction vectors respectively representing the straight line characteristics before and after the spatial similarity transformation, < >>And->Moment representing the straight line characteristics before and after the spatial similarity transformation, respectively,/->Scaling coefficients representing the linear features before and after the spatial similarity transformation,corresponding dual quaternion->Real part of->Corresponding dual quaternion- >Is (are) coupled with (are) are (are) added>And->Representing quaternion +.>And->A corresponding conjugated quaternion;
and (3) making:then->Conjugated expression of->The method comprises the following steps:
(19)
wherein,representing translation vector in spatial similarity transformation>Corresponding quaternion and satisfy the relation;/>Representing quaternion +.>Conjugation of (2);
further expressed as:
(20)
will beSubstituting formula (20) to obtain:
(21)。
step 2-2 includes: using the correspondence between the dual quaternion and the rotation matrix and translation vector in the spatial similarity transformation, equation (21) is expressed as:
(22)
wherein,、/>unit direction vectors respectively representing the same-name straight line characteristics before and after the space similarity transformation; />、/>Respectively representing moments of the same-name straight line characteristics before and after the space similarity transformation; />、/>、/>Respectively representing the scaling coefficient, the rotation matrix and the translation vector corresponding to the space similarity transformation.
The step 3 comprises the following steps:

Step 3-1, describing the errors of the straight-line features with the EIV model:

considering the extraction errors of the direction vectors of the straight-line features, the spatial similarity transformation of the unit direction vectors in formula (22) is described on the basis of the EIV model, obtaining:

l2 + e_l2 = R·(l1 + e_l1) (23)

wherein e_l1 and e_l2 respectively represent the extraction errors corresponding to the direction vectors l1 and l2 of the straight line;

introducing intermediate parameters and expanding formula (23) yields:

(24)

wherein R⁰ represents the initial value of the matrix R;

introducing further intermediate parameters, formula (24) is further expressed as:

(25)

introducing further intermediate parameters, formula (25) is rewritten as:

(26)

wherein the superscript 0 denotes the initial value of the corresponding quantity;

step 3-2, solving the spatial similarity transformation parameters based on the constraint of the total least squares criterion. The method selects straight-line features as the primitives of LiDAR point cloud registration, describes the straight lines in three-dimensional space with Plücker coordinates, introduces the unit quaternion as the basic operator describing the spatial rotation transformation, and constructs the mathematical expression of the spatial similarity transformation of the straight-line features. On this basis, the EIV model is adopted to describe the errors of the registration primitives; taking the agreement of the parameters of the same-name straight-line features after LiDAR point cloud registration as the constraint condition, an objective function for solving the spatial similarity transformation parameters under straight-line feature constraints is constructed based on the total least squares criterion, and the iterative solution of the registration parameters of adjacent LiDAR measuring stations is achieved through extremum analysis of this function. Finally, the correctness and effectiveness of the algorithm are verified with two groups of terrestrial LiDAR point cloud data acquired in the field.
The beneficial effects are that: compared with point features, the straight-line features extracted from LiDAR point clouds offer higher precision and are more convenient to extract; in particular, when artificial marker points are difficult to set up effectively during data acquisition, the method provided by the invention greatly facilitates the fusion of multi-station LiDAR point clouds. Because the straight-line features in space are expressed with Plücker coordinates, the judgment of whether two lines in space coincide completely can be carried out directly by comparing parameters, which effectively simplifies the programming of the algorithm. More importantly, the extraction errors of the registration primitives are described with the EIV model, which theoretically guarantees the asymptotic unbiasedness of the solved registration parameters.
Drawings
The foregoing and/or other advantages of the invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings and detailed description.
FIG. 1 is a schematic diagram of the representation of a straight line in three-dimensional space based on Plücker coordinates.
Fig. 2 is a schematic diagram of rotation and translation of a dual quaternion representation.
Fig. 3 is a schematic diagram of the root-mean-square error of the algorithm results computed from the direction-vector deviations of the straight-line features (red represents the least squares algorithm, blue the total least squares algorithm).
Fig. 4 is a schematic diagram of the root-mean-square error of the algorithm results computed from the moment deviations of the straight-line features (red represents the least squares algorithm, blue the total least squares algorithm).
Fig. 5 is a schematic diagram of the root-mean-square error of the algorithm results computed jointly from the direction-vector and moment deviations of the straight-line features (red represents the least squares algorithm, blue the total least squares algorithm).
Fig. 6 is a visualization of the facade LiDAR point clouds acquired for the same building from different angles, before registration.
Fig. 7 is a visualization of the facade LiDAR point clouds acquired for the same building from different angles, after registration.
Detailed Description
The invention provides a point cloud registration method based on error variable model description under linear characteristic constraint, which comprises the following steps:
step 1, based on collected LiDAR point cloud data, linear features of the surfaces of a building and a structure are extracted through man-machine interaction, plane fitting and intersection, and the extracted linear features in a three-dimensional space are described by utilizing Plucker coordinates;
in general, a straight line direction vector can be utilizedA point which is passed by the straight line>To express a straight line in three-dimensional space. However, considering that there may be many points passing through a straight line, such expression methods are not unique in many cases, and especially when applied to registration of LiDAR point clouds, the complexity of registration models and convenience of model application are affected.
Step 1-1, establishing a corresponding relation between a unit quaternion and a rotation matrix;
quaternion is a mathematical concept found by Hamilton in 1843, similar to complex numbers. The quaternion represents a 4-dimensional space, which is represented by a real partq 0 And three imaginary partsq 1q 2q 3 The composition is as follows:
(1)
let the unit direction vector of a rotation axis passing through the origin in three-dimensional space beRotation angle around rotation axisθIs expressed as a quaternion:
(2)
when a rotational transformation in three-dimensional space is represented by a quaternion, the quaternion is represented as:
(3)
known three-dimensional space feature sampling pointCorresponding quaternion->Represented as,/>Is transformed into->The process of (1) is expressed as follows:
(4)
wherein,、/>、/>respectively represent sampling points +.>A kind of electronic devicexCoordinates of,yCoordinates and method for producing the samezCoordinates of->Representing quaternion +.>Corresponding to the expression:
(5)
according to the operation rule of the quaternion, the formula (4) is expressed as a matrix:
(6)
wherein,,/>
according to a rotation matrixRAnd unit quaternionIs used for the corresponding relation of the (a),Rthe expression is as follows:
(7)
wherein,Iis a unit matrix of 3*3 and is formed by a matrix,representing an antisymmetric matrix expressed in the form of
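The correspondence of formula (7) between a unit quaternion and a rotation matrix can be sketched as follows; this is a minimal illustration, with the function name being ours.

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a quaternion q = (q0, q1, q2, q3) via formula (7):
    R = (q0^2 - qv.qv) I + 2 qv qv^T + 2 q0 C(qv), qv = (q1, q2, q3)^T."""
    q = np.asarray(q, dtype=float)
    q = q / np.linalg.norm(q)            # enforce the unit-norm condition
    q0, qv = q[0], q[1:]
    C = np.array([[0.0, -qv[2], qv[1]],   # antisymmetric matrix C(qv)
                  [qv[2], 0.0, -qv[0]],
                  [-qv[1], qv[0], 0.0]])
    return (q0**2 - qv @ qv) * np.eye(3) + 2 * np.outer(qv, qv) + 2 * q0 * C
```

For example, the quaternion of a 90-degree rotation about the z axis, (cos 45°, 0, 0, sin 45°), maps the x axis to the y axis.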
Step 1-2, expressing straight-line features in three-dimensional space with Plücker coordinates and regularizing the Plücker coordinates of the straight lines:
To address the diversity of expression forms of straight-line features in three-dimensional space, the German mathematician Julius Plücker proposed the concept of Plücker line coordinates (also known as Plücker coordinates). With Plücker coordinates, a straight line L in space can be expressed as L = (l; m), where l represents the direction vector of the line and m represents the Line Moment (LM). Specifically, the moment of the line can be obtained as the cross product of any point on the line with its direction vector, namely: m = p × l. It is worth mentioning that the Plücker line coordinates can be expressed with dual numbers, namely: L̂ = l + ε·m, where ε is the dual unit satisfying ε² = 0 and ε ≠ 0. Further, by definition, the direction vector of the line and the moment of the line satisfy the orthogonality relation l^T·m = 0.
As shown in fig. 1: given any two points p1 = (x1, y1, z1)^T and p2 = (x2, y2, z2)^T in space, the direction vector and moment of the straight line are calculated by the following formula:

l = p2 − p1, m = p1 × p2 (8)

wherein l represents the direction vector of the straight line and m represents the moment of the straight line; m is perpendicular to the plane containing the straight line and passing through the origin;

if two other different points p3 and p4 are taken on the straight line, according to linear coordinate substitution they are respectively written as:

p3 = p1 + s1·(p2 − p1), p4 = p1 + s2·(p2 − p1) (9)

wherein s1 and s2 are multiplication factors, the two being usually unequal; based on formula (9), the direction vector l′ and moment m′ of the straight line are obtained as:

l′ = p4 − p3 = (s2 − s1)·l, m′ = p3 × p4 = (s2 − s1)·m (10)

analysis shows that the direction vector and moment expressed by formula (10) differ from those of formula (8) only by the common scaling factor (s2 − s1). In order to avoid inconsistent Plücker coordinates of the same straight line caused by non-uniform endpoint sampling, the Plücker coordinates of the straight line are regularized, namely: all elements of the Plücker line coordinates are divided by the modulus of the direction vector of the line, and the result is called the regularized Plücker line coordinates. From the relationship between formula (8) and formula (10) it can be seen that, after regularization, the regularized Plücker line coordinates are consistent no matter which two feature points on the line are taken. It is worth mentioning that when a straight line in three-dimensional space is expressed with regularized Plücker line coordinates, the length ‖m‖ of the vector corresponding to the moment of the line is exactly the perpendicular distance from the origin to the line; therefore, compared with the classical method, the geometric meaning is clearer and the form is simpler, and, more importantly, the uniqueness of the expression form provides convenience for constructing a LiDAR point cloud registration model under straight-line feature constraints.

According to the relationship between vectors and unit quaternions, the regularized Plücker coordinates are extended to 8 dimensions according to the following formula:

L = q_l + ε·q_m (11)

wherein q_l = 0 + lx·i + ly·j + lz·k and q_m = 0 + mx·i + my·j + mz·k.
Step 2, realizing the mathematical description and expression of the spatial similarity transformation of the straight-line features with dual quaternions, comprises the following steps:

Step 2-1 includes: a dual quaternion q̂ is formed by combining quaternions with dual numbers and has the following form:

q̂ = q_r + ε·q_d (12)

wherein ε is the dual unit (ε² = 0, ε ≠ 0), and q_r and q_d are both quaternions, respectively called the real part and the dual part of the dual quaternion;

the dual quaternion has a representation similar to that of the quaternion:

q̂ = cos(θ̂/2) + n̂·sin(θ̂/2) (13)

wherein the dual vector n̂ and the dual angle θ̂ are expressed as follows:

n̂ = n + ε·(p × n), θ̂ = θ + ε·d (14)

wherein n represents the unit direction vector of the rotation axis and of the translation; p represents the position through which the rotation axis passes; d represents the translation distance along n.

As shown in fig. 2, the spatial rigid body transformation represented by a dual quaternion can be expressed as follows: first, the original coordinate system is translated by the distance d along n; then the coordinate system is rotated by the angle θ around the axis that passes through p with direction vector n; in this way, the spatial rigid transformation of the feature is completed.

It should be noted that when the dual quaternion is used to represent a rigid transformation operation in space, the following two conditions need to be satisfied:

‖q_r‖ = 1, q_r^T·q_d = 0 (15)

the similarity transformation of the straight line L1 in three-dimensional space is expressed with dual quaternions as:

L2 = q̂ ∘ L1 ∘ q̂* (16)

wherein L1 and L2 respectively represent the same-name straight line before and after the spatial similarity transformation, q̂ represents the dual quaternion corresponding to the spatial rigid body transformation (comprising the rotation and translation transformations, without considering the scaling coefficient), and q̂* is the conjugate of q̂; the specific expression forms of the dual quaternion q̂ and its conjugate q̂* are as follows:

q̂ = q_r + ε·q_d, q̂* = q_r* + ε·q_d* (17)

considering the scale difference between the two coordinate systems before and after the transformation, decomposing formula (16) and separating the real part and the dual part of the Plücker line coordinates yields:

q_l2 = q_r ∘ q_l1 ∘ q_r*, q_m2 = μ·q_r ∘ q_m1 ∘ q_r* + q_d ∘ q_l1 ∘ q_r* + q_r ∘ q_l1 ∘ q_d* (18)

letting q_d = (1/2)·q_t ∘ q_r, the conjugate expression of q_d is as follows:

q_d* = (1/2)·q_r* ∘ q_t* (19)

the dual part of formula (18) is then further expressed as:

q_m2 = μ·q_r ∘ q_m1 ∘ q_r* + (1/2)·(q_t ∘ q_r ∘ q_l1 ∘ q_r* + q_r ∘ q_l1 ∘ q_r* ∘ q_t*) (20)

substituting q_t* = −q_t into the above formula yields:

q_m2 = μ·q_r ∘ q_m1 ∘ q_r* + (1/2)·(q_t ∘ q_r ∘ q_l1 ∘ q_r* − q_r ∘ q_l1 ∘ q_r* ∘ q_t) (21)

Step 2-2, describing the spatial similarity transformation of the straight-line features with vector algebra, comprises:

using the correspondence between the dual quaternion and the rotation matrix and translation vector of the spatial similarity transformation, formula (21) is expressed as:

l2 = R·l1, m2 = μ·R·m1 + t × (R·l1) (22)

wherein l1 and l2 respectively represent the unit direction vectors of the same-name straight-line feature before and after the spatial similarity transformation; m1 and m2 respectively represent the moments of the same-name straight-line feature before and after the spatial similarity transformation; μ, R, and t respectively represent the scaling coefficient, rotation matrix, and translation vector corresponding to the spatial similarity transformation.
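The dual-quaternion pipeline of formulas (12) to (21) can be sketched as follows. All helper names are ours; the block builds q_r and q_d from an axis, angle, and translation, checks the two conditions of formula (15) (which hold by construction), and transforms a regularized Plücker line. Its result agrees with the vector-algebra form l2 = R·l1, m2 = μ·R·m1 + t × (R·l1).

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions a = (a0, av) and b = (b0, bv)."""
    a0, av = a[0], np.asarray(a[1:])
    b0, bv = b[0], np.asarray(b[1:])
    return np.concatenate(([a0 * b0 - av @ bv],
                           a0 * bv + b0 * av + np.cross(av, bv)))

def qconj(a):
    """Quaternion conjugate q* = q0 - q1 i - q2 j - q3 k."""
    return np.concatenate(([a[0]], -np.asarray(a[1:])))

def transform_line(l1, m1, axis, theta, t, mu):
    """Similarity-transform a regularized Pluecker line (l1, m1).

    Uses the quaternion form:
      q_l2 = q l1 q*,
      q_m2 = mu q m1 q* + 1/2 (q_t (q l1 q*) - (q l1 q*) q_t),
    with q_r built from (axis, theta), q_t = (0, t), q_d = 1/2 q_t q_r.
    Illustrative sketch with our own naming.
    """
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)
    qr = np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * n))
    qt = np.concatenate(([0.0], np.asarray(t, dtype=float)))
    qd = 0.5 * qmul(qt, qr)
    # the two conditions of formula (15) hold by construction:
    assert abs(np.linalg.norm(qr) - 1.0) < 1e-12 and abs(qr @ qd) < 1e-12
    ql1 = np.concatenate(([0.0], np.asarray(l1, dtype=float)))
    qm1 = np.concatenate(([0.0], np.asarray(m1, dtype=float)))
    rot = lambda v: qmul(qmul(qr, v), qconj(qr))          # q v q*
    ql2 = rot(ql1)
    qm2 = mu * rot(qm1) + 0.5 * (qmul(qt, rot(ql1)) - qmul(rot(ql1), qt))
    return ql2[1:], qm2[1:]   # vector parts: l2 and m2
```

For a 90-degree rotation about z, translation t = (1, 2, 3), and scale μ = 2, the line l1 = (0, 1, 0), m1 = (0, 0, 1) maps to l2 = (−1, 0, 0) and m2 = μ·R·m1 + t × (R·l1) = (0, −3, 4).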
Step 3, taking the registration of point clouds between two adjacent LiDAR stations as the starting point, describing the errors of the straight line features extracted at each adjacent LiDAR station with an errors-in-variables (EIV) model, establishing a constrained model for solving the spatial similarity transformation parameters based on the total least squares criterion, and solving the point cloud registration parameters between the two adjacent LiDAR stations, comprising:
Step 3-1, describing errors of straight line characteristics by using an EIV model:
Considering the extraction error of the direction vector of the straight line feature, the spatial similarity transformation of the unit direction vector in equation (22) is described on the basis of the EIV model, which gives:
l2 + e_l2 = R (l1 + e_l1)  (23)
where e_l1 and e_l2 denote the extraction errors of the direction vectors l1 and l2, respectively.
An intermediate substitution is introduced, and equation (23) is expanded to obtain:
(24)
With further intermediate parameters, equation (24) is then expressed as:
(25)
and, with additional intermediate parameters, rewritten as:
(26)
in which the coefficient matrices and parameter vectors are given by the substitutions above.
Step 3-2, solving the spatial similarity transformation parameters under the constraint of the total least squares criterion:
The spatial similarity transformation expression of the straight line feature moment shown in equation (22) is rewritten to obtain:
(27)
where t_x, t_y and t_z denote the projection lengths of the translation vector t on the coordinate axes x, y and z.
Considering the extraction error of the straight line feature moment, equation (27) is described on the basis of the EIV model, which gives:
m2 + e_m2 = μ R (m1 + e_m1) + t × [R (l1 + e_l1)]  (28)
where e_m1 and e_m2 denote the errors of the moments m1 and m2, and e_l1 denotes the error of the direction vector l1;
An intermediate parameter is introduced, and equation (28) is linearized to give:
(29)
where the zero-superscripted term denotes the value of this intermediate parameter computed at the given initial values of the three parameters (scaling, rotation and translation);
Further intermediate parameters are introduced, and equation (29) is then expressed as:
(30)
where dμ, dt_x, dt_y and dt_z denote the corrections to the scaling coefficient μ and to the projection lengths of the translation vector t on the coordinate axes x, y and z, respectively; the zero-superscripted μ and t denote the initial estimates of μ and t;
With additional intermediate parameters, equation (30) is rewritten as:
(31)
in which the coefficient matrices are given by the substitutions above.
Considering that the four elements of the unit quaternion q need to satisfy the condition:
q0² + q1² + q2² + q3² = 1  (32)
a Taylor expansion of equation (32) is performed and truncated at the first-order term, which gives:
(33)
Introducing further intermediate parameters, equation (33) is expressed as:
(34)
and, with the intermediate parameters collected, equations (26) and (31) are collectively expressed as:
(35)
An objective function of the total least squares is constructed by the Lagrange multiplier method:
(36)
Taking the partial derivatives of (36) with respect to each group of unknowns and setting them equal to zero yields:
(37)
(38)
(39)
(40)
From equation (37), an expression for the residual vector is obtained:
(41)
Substituting equation (41) into equation (39) yields:
(42)
where an intermediate parameter is introduced for brevity.
Combining equation (38), equation (40) and equation (42) yields:
(43)
Further rearranging equation (43) gives:
(44)
On the basis of equation (44), an iterative scheme for solving the spatial similarity transformation parameters is established: given initial values of the parameters, the corrections dq0, dq1, dq2, dq3, dμ, dt_x, dt_y and dt_z are computed and applied to the initial values, and the process is repeated until the convergence requirement is met. The spatial similarity transformation parameters comprise the quaternion q corresponding to the rotation matrix R, the scaling coefficient μ, and the translation vector t.
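As a point of comparison only (this is not the total least squares iteration of equation (44), and all function names here are our own), the spatial similarity parameters can also be estimated from three or more same-name line pairs by an ordinary least-squares baseline: the rotation from the direction vectors by the SVD (Kabsch) method, then the scaling coefficient and translation vector from the moment equation, which is linear in μ and t:

```python
import numpy as np

def skew(v):
    """Antisymmetric (cross-product) matrix [v]x such that [v]x @ u = v x u."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def solve_similarity(lines1, lines2):
    """lines1/lines2: same-name regularized Plucker lines (l, m), one list per station.
    Ordinary least-squares baseline: R by the Kabsch/SVD method on the direction
    vectors, then (mu, t) from the linear moment equation m2 = mu*R m1 - [R l1]x t."""
    H = sum(np.outer(l1, l2) for (l1, _), (l2, _) in zip(lines1, lines2))
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # keep det(R) = +1
    R = Vt.T @ D @ U.T
    A = np.vstack([np.hstack([(R @ m1).reshape(3, 1), -skew(R @ l1)])
                   for (l1, m1) in lines1])
    b = np.concatenate([m2 for (_, m2) in lines2])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[0], R, x[1:]

def pluecker(p1, p2):
    l = (p2 - p1) / np.linalg.norm(p2 - p1)
    return l, np.cross(p1, l)

# Synthetic check with known parameters (illustrative values).
rng = np.random.default_rng(0)
c, s = np.cos(0.4), np.sin(0.4)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
mu_true, t_true = 0.62, np.array([5.3, -3.7, 1.2])
src, dst = [], []
for _ in range(4):
    p1, p2 = rng.normal(size=3), rng.normal(size=3)
    src.append(pluecker(p1, p2))
    dst.append(pluecker(mu_true * R_true @ p1 + t_true,
                        mu_true * R_true @ p2 + t_true))
mu, R, t = solve_similarity(src, dst)
print(np.allclose(mu, mu_true), np.allclose(R, R_true), np.allclose(t, t_true))
# -> True True True
```

In contrast, the method of the invention corrects the observed direction vectors and moments themselves during the iteration, which is what the total least squares criterion adds over this baseline.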
In one embodiment of the present invention, the method further comprises: step 4, on the basis of step 1, extracting three or more pairs of same-name straight line features from the point clouds of two adjacent LiDAR stations S1 and S2, and describing the extracted straight line features with Plucker coordinates; selecting station S1 as the reference station and station S2 as the station to be registered, and computing the point cloud registration parameters between the two adjacent LiDAR stations on the basis of steps 2 and 3; and applying the spatial similarity transformation with the computed registration parameters to all points of station S2, thereby realizing the fusion of the point cloud data of the two adjacent LiDAR stations.
Step 5, evaluating the accuracy of the fusion result of the point cloud data between two adjacent LiDAR stations: with the straight line features taken as the constraints for solving the point cloud registration parameters between adjacent stations, the observations comprise the direction vectors and the moments of the same-name straight line features of the reference station and the station to be registered; according to equation (22), 6 error equations can be listed for each pair of same-name features, while the number of parameters to be solved is 7, namely the 7 parameters describing the spatial similarity transformation; the accuracy of registration is evaluated with the result of the following expression:
σ = sqrt( VᵀV / (6n - 7) )  (45)
where σ denotes the accuracy of registration, V denotes the vector of residuals of the error equations, and n denotes the number of same-name straight line features extracted between the adjacent LiDAR stations.
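A minimal sketch of equation (45), assuming the residual vector V is formed by substituting the solved parameters into the two expressions of equation (22) (function and variable names are illustrative):

```python
import numpy as np

def registration_sigma(pairs, mu, R, t):
    """pairs: list of ((l1, m1), (l2, m2)) same-name regularized Plucker lines.
    Stacks the 6 residual equations per pair from equation (22), applies (45)."""
    V = []
    for (l1, m1), (l2, m2) in pairs:
        V.append(l2 - R @ l1)                                 # 3 direction residuals
        V.append(m2 - (mu * (R @ m1) + np.cross(t, R @ l1)))  # 3 moment residuals
    V = np.concatenate(V)
    return np.sqrt(V @ V / (6 * len(pairs) - 7))

def pluecker(p1, p2):
    l = (p2 - p1) / np.linalg.norm(p2 - p1)
    return l, np.cross(p1, l)

# Error-free lines must give an accuracy value at machine-rounding level.
R, mu, t = np.eye(3), 1.0, np.array([1.0, 2.0, 3.0])
pts = [(np.array([0.0, 0.0, 1.0]), np.array([3.0, 0.0, 1.0])),
       (np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.5, 0.0])),
       (np.array([3.0, 0.0, 1.0]), np.array([3.0, 1.5, 0.0]))]
pairs = [(pluecker(p1, p2), pluecker(mu * R @ p1 + t, mu * R @ p2 + t))
         for p1, p2 in pts]
print(registration_sigma(pairs, mu, R, t) < 1e-12)   # -> True
```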
Experiment and analysis:
In one embodiment of the present invention, simulated experimental data designed to verify the correctness of the algorithm are given below: the station to be registered contains 5 manually extracted straight line features, and the reference station contains the same-name features obtained by transforming these 5 straight line features with preset spatial similarity transformation parameters.
Reference station: line 1, start point (x, y, z) (m) = (5.185950, -3.755233, 1.806912), end point (5.132785, -2.520610, 1.909281);
line 2, start point (5.185950, -3.755233, 1.806912), end point (3.987135, -3.709747, 0.635716);
line 3, start point (5.185950, -3.755233, 1.806912), end point (6.007798, -3.691120, 1.460490);
line 4, start point (5.026282, -1.900266, 1.882385), end point (4.306993, -1.872974, 1.179668);
line 5, start point (5.208934, -1.895284, 1.917162), end point (6.236243, -1.815142, 1.484136).
Station to be registered: line 1, start point (0.000000, 0.000000, 1.000000), end point (3.000000, 0.000000, 1.000000);
line 2, start point (0.000000, 0.000000, 1.000000), end point (0.000000, 1.500000, 0.000000);
line 3, start point (0.000000, 0.000000, 1.000000), end point (0.000000, -1.500000, 0.000000);
line 4, start point (3.000000, 0.000000, 1.000000), end point (3.000000, 1.500000, 0.000000);
line 5, start point (3.000000, 0.000000, 1.000000), end point (3.000000, -1.500000, 0.000000).
Preset spatial similarity transformation parameters: rotation angles (°): α = 10.8135, β = 4.7355, γ = -92.4657; translation vector (m): t_x = 5.3000, t_y = -3.7000, t_z = 1.2000; scaling factor μ = 0.6200.
Using the simulated data above, the spatial similarity transformation parameters between the two groups of data were solved under the constraint of the total least squares criterion and under the constraint of the least squares criterion, respectively, with the following results:
Spatial similarity transformation parameters:
Least squares: rotation angles (°): α = 10.8135, β = 4.7355, γ = -92.4657; translation vector (m): t_x = 5.3000, t_y = -3.7000, t_z = 1.2000; scaling factor μ = 0.6200.
Total least squares: rotation angles (°): α = 10.8135, β = 4.7355, γ = -92.4657; translation vector (m): t_x = 5.3000, t_y = -3.7000, t_z = 1.2000; scaling factor μ = 0.6200.
According to the parameter-solving results, the spatial similarity transformation parameters computed by the two algorithms are highly consistent with the preset parameters, and the parameters computed under the constraint of the total least squares criterion agree with the preset parameters even more closely; the proposed method is therefore considered correct.
To further verify the noise resistance of the present invention, noise was added to the same-name straight line features of the simulated experimental data: random noise following a Gaussian normal distribution with mean 0.0 m and standard deviation 0.05 m was added to both endpoints of all straight line features. To reflect the noise resistance of the algorithms more objectively, this noise scheme was applied 1000 times, and the mean values of the spatial similarity transformation parameters between the two groups of data were solved under the constraint of the total least squares criterion and of the least squares criterion, respectively.
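The perturbation scheme described above can be sketched as follows (an illustration under stated assumptions; the patent does not give its noise-generation code):

```python
import numpy as np

rng = np.random.default_rng(2024)
sigma = 0.05                     # standard deviation of the endpoint noise (m)

# Endpoints of the 5 simulated lines of the station to be registered.
endpoints = np.array([
    [[0.0, 0.0, 1.0], [3.0, 0.0, 1.0]],
    [[0.0, 0.0, 1.0], [0.0, 1.5, 0.0]],
    [[0.0, 0.0, 1.0], [0.0, -1.5, 0.0]],
    [[3.0, 0.0, 1.0], [3.0, 1.5, 0.0]],
    [[3.0, 0.0, 1.0], [3.0, -1.5, 0.0]],
])

# One of the 1000 trials: Gaussian noise N(0, 0.05 m) on every endpoint coordinate.
noisy = endpoints + rng.normal(0.0, sigma, size=endpoints.shape)
print(noisy.shape)   # -> (5, 2, 3)
```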
Spatial similarity transformation parameters:
Least squares: rotation angles (°): α = 10.7915, β = 4.7046, γ = -92.4784; translation vector (m): t_x = 5.2800, t_y = -3.6972, t_z = 1.1829; scaling factor μ = 0.6286.
Total least squares: rotation angles (°): α = 10.7557, β = 4.7381, γ = -92.4733; translation vector (m): t_x = 5.2861, t_y = -3.6914, t_z = 1.1940; scaling factor μ = 0.6198.
On the basis of the above experiment, the spatial similarity transformation parameters computed after each addition of the error disturbance were used to evaluate the deviation between the disturbance-free same-name features, and the registration errors of the two algorithms were computed in three ways: from the direction-vector corrections, from the moment corrections, and from the two combined. The results are shown in fig. 3, fig. 4 and fig. 5.
The maximum, minimum and mean values of the mean square errors of the algorithm results shown in fig. 3, fig. 4 and fig. 5 are as follows:
Least squares: direction-vector error: maximum 0.133424, minimum 0.001918, mean 0.050955; moment error (m): maximum 0.838296, minimum 0.065142, mean 0.340444; direction vector + moment error: maximum 0.848848, minimum 0.065892, mean 0.344427.
Total least squares: direction-vector error: maximum 0.113033, minimum 0.003575, mean 0.047945; moment error (m): maximum 0.667283, minimum 0.040601, mean 0.284824; direction vector + moment error: maximum 0.676691, minimum 0.040758, mean 0.288907.
The analysis shows that: 1) the registration errors computed from the direction-vector deviations of the registered same-name features differ little between the two algorithms, with the total least squares method slightly better than the least squares method; 2) the registration errors computed from the moment deviations differ markedly, with the total least squares method clearly superior; 3) the registration errors computed jointly from the direction-vector and moment deviations also differ markedly, and the results again show that the total least squares method is clearly superior to the least squares method.
Point clouds of the same building facade (shown in fig. 6) were acquired from two different viewpoints with a VZ-1000 series terrestrial LiDAR scanner produced by the Austrian company Riegl; before registration, the point cloud of each station lies in its own independent coordinate system, so deviations exist between the same-name features.
From the two adjacent stations shown in fig. 6, same-name straight line feature pairs (segments 01 to 08 below) were respectively extracted by means of human-computer interaction together with plane fitting and intersection; each extracted straight line feature is expressed by two vertices located on the straight line, as follows.
Reference station (m): segment 01, start point (-9.154, 9.583, 7.885), end point (-4.932, 9.416, 7.887);
segment 02, start point (-14.826, -1.021, 7.880), end point (-14.443, 9.207, 7.880);
segment 03, start point (4.653, 8.474, 7.894), end point (4.244, -1.769, 7.891);
segment 04, start point (-14.832, -1.159, 7.880), end point (4.253, -1.907, 7.891);
segment 05, start point (-5.757, -1.516, 7.90), end point (-5.753, -1.544, 3.225);
segment 06, start point (-5.975, -7.086, 7.902), end point (-5.969, -7.077, 2.994);
segment 07, start point (4.668, 9.016, 2.816), end point (4.675, 9.005, 7.847);
segment 08, start point (-14.830, -1.159, 7.841), end point (-14.824, -1.192, 2.765).
Station to be registered (m): segment 01, start point (-11.352, 5.786, 8.199), end point (5.493, 6.842, 4.968);
segment 02, start point (-11.521, -5.174, 9.616), end point (-12.423, 5.135, 8.479);
segment 03, start point (6.299, 6.354, 4.880), end point (7.610, -8.815, 6.563);
segment 04, start point (-7.944, -4.932, 8.930), end point (-2.611, -4.580, 7.902);
segment 05, start point (-2.618, -4.578, 7.924), end point (-3.810, -5.405, 1.415);
segment 06, start point (-2.160, -10.098, 8.472), end point (-3.012, -10.674, 3.830);
segment 07, start point (5.333, 6.270, -0.147), end point (6.257, 6.890, 4.842);
segment 08, start point (-11.530, -5.155, 9.574), end point (-12.529, -5.872, 4.096).
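Before the parameters are solved, each listed segment is converted to regularized Plucker coordinates. The sketch below (illustrative code, using segments 01 and 02 of the reference station) also checks the Plucker constraints that the direction vector has unit norm and is perpendicular to the moment:

```python
import numpy as np

def to_pluecker(p_start, p_end):
    """Regularized Plucker coordinates (l, m) of the line through two points."""
    d = p_end - p_start
    l = d / np.linalg.norm(d)       # unit direction vector
    m = np.cross(p_start, l)        # moment, perpendicular to l
    return l, m

# Segments 01 and 02 of the reference station (values from the list above).
seg01 = (np.array([-9.154, 9.583, 7.885]), np.array([-4.932, 9.416, 7.887]))
seg02 = (np.array([-14.826, -1.021, 7.880]), np.array([-14.443, 9.207, 7.880]))

for p1, p2 in (seg01, seg02):
    l, m = to_pluecker(p1, p2)
    print(round(np.linalg.norm(l), 6), round(abs(np.dot(l, m)), 6))  # -> 1.0 0.0
```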
On the basis of the extracted same-name straight line features, the following three schemes were designed to solve the similarity transformation parameters between the station to be registered and the reference station: in scheme one, the parameters are computed under the constraint of the least squares criterion; in scheme two, under the premise that the direction vectors of the registered same-name straight line features coincide, the rotation parameters between the station to be registered and the reference station are first computed under the constraint of total least squares, and the translation vector and scaling coefficient are then computed on that basis; in scheme three, the similarity transformation parameters between the station to be registered and the reference station are computed by the method of the present invention. The point clouds of the adjacent LiDAR stations were registered with the results of scheme three, as shown in fig. 7.
Based on the registration parameters obtained by solving the three schemes, the registration errors of the algorithms were computed with equation (45); the results are as follows.
Scheme one: rotation angles (°): α = -7.1322, β = 10.3581, γ = 7.1932; translation vector (m): t_x = -1.2072, t_y = 3.4714, t_z = 1.2064; scaling factor μ = 1.0001; σ = 0.00745853.
Scheme two: rotation angles (°): α = -7.1394, β = 10.3690, γ = 7.2059; translation vector (m): t_x = -1.2060, t_y = 3.4711, t_z = 1.2067; scaling factor μ = 1.0001; σ = 0.00534790.
Scheme three: rotation angles (°): α = -7.1653, β = 10.3597, γ = 7.1960; translation vector (m): t_x = -1.2050, t_y = 3.4732, t_z = 1.2097; scaling factor μ = 0.9999; σ = 0.00258144.
From the above results it can be seen that: 1) for both scheme two and scheme three, the registration result of the total least squares method is generally better than that of the least squares method; 2) the rotation angles α solved by scheme one and scheme three differ; to find the reason, scheme two was designed, in which, under the premise that the direction vectors of the registered same-name straight line features coincide, the rotation parameters between the station to be registered and the reference station are first computed under the constraint of total least squares, after which the translation vector and scaling coefficient are computed. By comparison, the difference between the rotation angles α computed by scheme two and by scheme one is small, which indicates that whether or not the error of the rotation angle in the spatial similarity transformation expression of the moment (the second expression in equation (22)) is taken into account when solving the registration parameters under the constraint of the total least squares criterion has a considerable influence on the final result; 3) although the influence of this rotation-angle error is also considered when solving the registration parameters under the least squares criterion, the error-expression capability of the Gauss-Markov model is limited, so the computed registration parameters are inferior to those of the total least squares method.
Analysis of results:
based on the simulation data, the operation test result of the method is that the difference between the calculated space similarity transformation parameter and the set parameter is very small, and the error is negligible in practical application, so that the operation result of the method provided by the invention is considered to be correct. Considering that errors inevitably exist in the real data sampling process, after certain random noise conforming to Gauss normal distribution characteristics is added to analog data, solving spatial similarity transformation parameters between two adjacent measuring stations based on constraint of an overall least square criterion and constraint of the least square criterion, and respectively calculating middle errors of registration results based on normal vector deviation and moment deviation of homonymous features and comprehensive calculation of the normal vector deviation and the moment deviation, wherein the results show that the overall least square method is superior to the least square method.
From the tests of the total least squares method and the least squares method on the LiDAR point cloud data acquired in the field, certain differences exist between the registration parameters computed by the two methods; the reason is that the expression capability of the Gauss-Markov model for the observation errors is insufficient, which affects the final result. Moreover, the total least squares method is superior to the least squares method in registration accuracy.
In summary, the proposed method is theoretically correct and its operation results meet expectations; compared with point cloud registration algorithms constrained by the least squares criterion, the proposed method has excellent noise resistance and can be popularized in practical applications.

Claims (10)

1. The point cloud registration method based on the error variable model description under the linear characteristic constraint is characterized by comprising the following steps of:
step 1, extracting linear features of a building or a structure surface based on collected LiDAR point cloud data, and describing the linear features in an extracted three-dimensional space by utilizing Plucker coordinates;
step 2, realizing mathematical description and expression of a linear characteristic space similarity transformation process by using dual quaternions;
and 3, describing errors of the linear features extracted by each adjacent LiDAR measuring station by using an EIV model by taking registration and fusion of point cloud data between the adjacent two LiDAR measuring stations as a starting point of research, establishing a space similarity transformation parameter solving model based on constraint of a total least squares criterion, and realizing registration and fusion of the point cloud data between the adjacent two LiDAR measuring stations based on the calculated space similarity transformation parameters.
2. The method of claim 1, wherein step 1 comprises:
step 1-1, establishing a corresponding relation between a unit quaternion and a rotation matrix;
and step 1-2, describing the linear feature in the three-dimensional space by utilizing the Plucker coordinate, and carrying out regularization treatment on the Plucker coordinate of the linear feature.
3. The method of claim 2, wherein step 1-1 comprises: the quaternion q is a four-element group composed of a real part q0 and three imaginary parts q1, q2, q3:
q = q0 + q1·i + q2·j + q3·k  (1)
where i, j and k denote unit vectors having the same directions as the x axis, y axis and z axis of the three-dimensional rectangular coordinate system, respectively;
let the unit direction vector of a rotation axis passing through the origin in three-dimensional space be n = [nx, ny, nz]ᵀ and the rotation angle about the rotation axis be θ; expressed as a quaternion:
q = cos(θ/2) + sin(θ/2)·(nx·i + ny·j + nz·k)  (2)
where T denotes the matrix transpose, and nx, ny, nz denote the projection lengths of n in the x-axis, y-axis and z-axis directions, respectively;
following the expression of equation (1), the quaternion representing the rotation transformation in three-dimensional space is expressed as:
q = q0 + q1·i + q2·j + q3·k  (3)
where q0 corresponds to cos(θ/2) in equation (2), and q1, q2, q3 correspond to sin(θ/2)·nx, sin(θ/2)·ny, sin(θ/2)·nz in equation (2);
given a three-dimensional feature sampling point p, the corresponding quaternion p̂ is represented as p̂ = 0 + x·i + y·j + z·k; the process by which p is transformed into p' is expressed as follows:
p̂' = q p̂ q*  (4)
where x, y and z denote the x, y and z coordinates of the sampling point p, and q* denotes the conjugate of the quaternion q, corresponding to the expression:
q* = q0 - q1·i - q2·j - q3·k  (5)
according to the operation rules of quaternions, equation (4) is expressed in matrix form:
(6)
in which the matrices are composed of the elements of the quaternion q;
according to the correspondence between the rotation matrix R and the unit quaternion q, R is expressed as follows:
R = (q0² - qvᵀqv) I + 2 qv qvᵀ + 2 q0 [qv]×  (7)
where I is the 3×3 identity matrix, qv = [q1, q2, q3]ᵀ, and [qv]× denotes the antisymmetric matrix expressed in the form
[qv]× = [[0, -q3, q2], [q3, 0, -q1], [-q2, q1, 0]].
4. The method according to claim 3, wherein step 1-2 comprises: given any two points p1 and p2 in space, with (x1, y1, z1) the coordinates of p1 and (x2, y2, z2) the coordinates of p2, the direction vector and the moment of the straight line are calculated according to the following formula:
l = p2 - p1, m = p1 × p2  (8)
where l denotes the direction vector of the straight line, and m denotes the moment of the straight line, m being perpendicular to the plane containing the straight line and passing through the origin;
regularizing the Plucker coordinates of the straight line:
all elements of the Plucker line coordinates are divided by the modulus of the direction vector of the straight line, and the result is called the regularized Plucker line coordinates;
according to the relation between vectors and unit quaternions, the regularized Plucker coordinates are expanded to 8 dimensions according to the following formula:
L̂ = ql + ε qm  (9)
where L̂ is the expression of the regularized Plucker coordinates after expansion to 8 dimensions; ql is the quaternion corresponding to the direction vector l of the straight line, with the specific expression ql = 0 + lx·i + ly·j + lz·k; and qm is the quaternion corresponding to the moment m in the regularized Plucker coordinates, with the expression qm = 0 + mx·i + my·j + mz·k.
5. The method of claim 4, wherein step 2 comprises:
step 2-1, describing a space similarity transformation process of the linear characteristics by using dual quaternions;
and 2-2, describing a space similarity transformation process of the linear characteristics by using vector algebra.
6. The method of claim 5, wherein step 2-1 comprises: the dual quaternion q̂ is obtained by combining quaternions with dual numbers, and has the following form:
q̂ = qr + ε qd  (10)
where qr and qd are both quaternions, called the real part and the dual part of the dual quaternion, respectively, and ε is the dual unit satisfying ε² = 0;
the dual quaternion also has a representation similar to that of the quaternion:
q̂ = cos(θ̂/2) + n̂ sin(θ̂/2)  (11)
where the dual vector n̂ and the dual angle θ̂ are expressed as follows:
n̂ = n + ε (p × n), θ̂ = θ + ε·d  (12)
where n denotes the unit direction vector of the rotation axis and of the translation; p denotes the position through which the rotation axis passes; and d denotes the translation distance along n;
the spatial rigid body transformation process represented by a dual quaternion is as follows: first, the original coordinate system is translated by a distance d along the unit direction vector n of the rotation axis; the coordinate system is then rotated by an angle θ about the axis that passes through the point p and has direction vector n, thereby completing the spatial rigid transformation of the features;
when a dual quaternion is used to represent a rigid body transformation operation in space, the following two conditions need to be satisfied:
‖qr‖ = 1, qrᵀ qd = 0  (13)
using dual quaternions, the similarity transformation process of the straight line L in three-dimensional space is expressed as follows:
L̂2 = q̂ L̂1 q̂*  (14)
where L̂1 and L̂2 denote the mathematical expressions corresponding to the straight line L before and after the spatial similarity transformation, respectively; q̂ denotes the dual quaternion corresponding to the spatial rigid body transformation, and q̂* is the conjugate of q̂, with the specific form:
(15)
considering the scale difference between the two coordinate systems before and after the transformation, the straight line feature expression corresponding to equation (9) and the dual quaternion expression corresponding to equation (10) are substituted into equation (14), and the real part and the dual part of the Plucker coordinates are expressed separately, which gives:
(16)
where l1 and l2 denote the unit direction vectors of the straight line feature before and after the spatial similarity transformation; m1 and m2 denote the moments of the straight line feature before and after the spatial similarity transformation; μ denotes the scaling coefficient between the straight line features before and after the spatial similarity transformation; ql1 denotes the real part of the dual quaternion L̂1 (the quaternion corresponding to l1); qm1 denotes the dual part of the dual quaternion L̂1 (the quaternion corresponding to m1); and qr* and qt* denote the conjugate quaternions corresponding to the quaternions qr and qt;
let qd = ½ qt qr; the conjugate expression of q̂, namely q̂*, is then:
(17)
where qt denotes the quaternion corresponding to the translation vector t in the spatial similarity transformation and satisfies the relation qd = ½ qt qr, and qt* denotes the conjugate of qt;
which is further expressed as:
(18)
Substituting qd = ½ qt qr into equation (18) gives:
(19).
7. The method of claim 6, wherein step 2-2 comprises: using the correspondence between the dual quaternion and the rotation matrix and translation vector of the spatial similarity transformation, equation (19) is expressed as:
l2 = R l1, m2 = μ R m1 + t × (R l1)  (20)
where l1 and l2 denote the unit direction vectors of the same-name straight line feature before and after the spatial similarity transformation; m1 and m2 denote the moments of the same-name straight line feature before and after the spatial similarity transformation; and μ, R and t denote the scaling coefficient, rotation matrix and translation vector corresponding to the spatial similarity transformation, respectively.
8. The method of claim 7, wherein step 3 comprises:
step 3-1, describing the errors of the straight line features with the EIV model:
considering the extraction error of the direction vector of the straight line feature, the spatial similarity transformation of the unit direction vector in equation (20) is described on the basis of the EIV model, which gives:
l2 + e_l2 = R (l1 + e_l1)  (21)
where e_l1 and e_l2 denote the extraction errors corresponding to the direction vectors l1 and l2, respectively;
an intermediate parameter is introduced, and equation (21) is expanded to obtain:
(22)
where the zero-superscripted quantity denotes the initial value of the corresponding matrix;
with further intermediate parameters, equation (22) is then expressed as:
(23)
and, with additional intermediate parameters, rewritten as:
(24)
where the remaining intermediate parameter and the initial value of the quaternion are as given by the substitutions above;
step 3-2, solving the spatial similarity transformation parameters under the constraint of the total least squares criterion:
the spatial similarity transformation expression of the straight line feature moment shown in equation (20) is rewritten to obtain:
(25)
where t_x, t_y and t_z denote the projection lengths of the vector t on the coordinate axes x, y and z, respectively;
considering the extraction error of the straight line feature moment, equation (25) is described on the basis of the EIV model, which gives:
m2 + e_m2 = μ R (m1 + e_m1) + t × [R (l1 + e_l1)]  (26)
where e_m1 and e_m2 denote the errors of the moments m1 and m2, and e_l1 denotes the error of the direction vector l1 of the straight line feature;
intermediate parameters of the orderLinearizing equation (26) to obtain:
(27)
wherein,representing intermediate parameters +.>At a given +.>、/>、/>Calculating results under the condition of initial values of three parameters;
intermediate parameters of the orderIntermediate parameter->Intermediate parametersIntermediate parameter->Intermediate parameter->Then formula (27) is further expressed as:
(28)
wherein,、/>、/>、/>respectively represent the scaling coefficient +.>Translation vector->In the coordinate axisxyAndzcorrection of projection length on the projection; / >Representation->Initial estimate of +.>Representation->Is determined by the method;
intermediate parameters of the orderIntermediate parameter->Intermediate parameter->Formula (28) is rewritten as:
(29)
wherein,
considering that the four elements of the unit quaternion q need to satisfy the condition:
q₀² + q₁² + q₂² + q₃² = 1 (30)
performing a Taylor expansion of formula (30) and retaining terms up to first order gives:
(q₀⁰)² + (q₁⁰)² + (q₂⁰)² + (q₃⁰)² + 2q₀⁰δq₀ + 2q₁⁰δq₁ + 2q₂⁰δq₂ + 2q₃⁰δq₃ = 1 (31)
letting G = 2(q₀⁰, q₁⁰, q₂⁰, q₃⁰) and w_g = 1 − [(q₀⁰)² + (q₁⁰)² + (q₂⁰)² + (q₃⁰)²], formula (31) is further expressed as:
Gδq = w_g (32)
letting E = (E₁ᵀ, E₂ᵀ)ᵀ, W = (W₁ᵀ, W₂ᵀ)ᵀ, ξ = δξ, and A denote the stacked coefficient matrix in which A₁ is padded with zero columns at the positions of δλ and δt, formulas (24) and (29) are unified as:
E = Aξ + W (33)
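The unit-norm condition on the quaternion (formulas (30)–(32)) is what keeps R a proper rotation. Below is a small numpy illustration; the helper names are this sketch's own, and the constraint step is one minimum-norm Gauss-Newton update of the linearized condition:

```python
import numpy as np

def quat_to_rot(q):
    # Rotation matrix of a unit quaternion q = (q0, q1, q2, q3), scalar part first.
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3],
    ])

def norm_constraint_step(q_init):
    # One update of the linearized unit-norm condition (cf. formulas (30)-(32)):
    #   2 * q_init . dq = 1 - ||q_init||^2
    # taking the minimum-norm correction dq along the constraint gradient.
    q_init = np.asarray(q_init, dtype=float)
    w = 1.0 - q_init @ q_init
    g = 2.0 * q_init
    dq = g * (w / (g @ g))
    return q_init + dq
```

Repeating `norm_constraint_step` drives a near-unit quaternion back onto the unit sphere, mirroring how the constraint enters each iteration of the adjustment.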
constructing the objective function of the total least squares problem by the Lagrange multiplier method:
Φ = EᵀE + 2κ₁ᵀ(Aξ + W − E) + 2κ₂(Gξ − w_g) (34)
wherein κ₁ and κ₂ are the Lagrange multipliers, and G is zero-padded so that it acts on the quaternion corrections in ξ;
taking the partial derivatives of formula (34) with respect to E, ξ, κ₁ and κ₂, respectively, and setting them equal to 0 yields:
∂Φ/∂E = 2E − 2κ₁ = 0 (35)
∂Φ/∂ξ = 2Aᵀκ₁ + 2Gᵀκ₂ = 0 (36)
∂Φ/∂κ₁ = 2(Aξ + W − E) = 0 (37)
∂Φ/∂κ₂ = 2(Gξ − w_g) = 0 (38)
according to formula (35), the expression of E is obtained as:
E = κ₁ (39)
substituting formula (39) into formula (37) yields:
κ₁ = Aξ + W (40)
combining formula (36), formula (38) and formula (40) yields:
AᵀAξ + AᵀW + Gᵀκ₂ = 0 and Gξ = w_g (41)
further rearranging formula (41) gives the normal equations:
[AᵀA, Gᵀ; G, 0] (ξ; κ₂) = (−AᵀW; w_g) (42)
based on formula (42), an iterative format for solving the spatial similarity transformation parameters is established: given initial values of the spatial similarity transformation parameters, the corrections δλ, δq₀, δq₁, δq₂, δq₃, δt_x, δt_y and δt_z of the respective parameters are calculated and used to correct the initial values, and the process is repeated until the convergence requirement is met; the spatial similarity transformation parameters comprise the quaternion q corresponding to the rotation matrix R, the scaling coefficient λ, and the translation vector t.
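The claim solves all eight corrections jointly and iteratively from formula (42). As a simplified stand-in (not the patented estimator), the rotation can be recovered in closed form from the direction-vector pairs by SVD (Kabsch), after which λ and t follow linearly from the moment relation m′ = λRm + t × (Rl); this sketch, with illustrative function names, is the kind of computation that could supply the initial values the iteration needs:

```python
import numpy as np

def skew(v):
    # Cross-product matrix: skew(v) @ u == np.cross(v, u).
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rotation_from_directions(L, Lp):
    # Closed-form least-squares rotation with R @ l_i ~ l'_i (Kabsch/SVD);
    # a stand-in for the iterative quaternion solution of formula (42).
    H = np.zeros((3, 3))
    for l, lp in zip(L, Lp):
        H += np.outer(l, lp)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    return Vt.T @ D @ U.T

def scale_translation_from_moments(L, M, Mp, R):
    # Moment relation of formula (20): m' = lam * R m + t x (R l)
    #                                     = lam * R m - skew(R l) @ t,
    # which is linear in (lam, t); solve by stacked least squares.
    A_rows, b_rows = [], []
    for l, m, mp in zip(L, M, Mp):
        A_rows.append(np.column_stack([(R @ m).reshape(3, 1), -skew(R @ l)]))
        b_rows.append(mp)
    x, *_ = np.linalg.lstsq(np.vstack(A_rows), np.concatenate(b_rows), rcond=None)
    return x[0], x[1:]
```

With three or more non-parallel line pairs (as claim 9 requires), the stacked system is full rank and the parameters are recovered uniquely in the noise-free case.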
9. The method as recited in claim 8, further comprising:
step 4, based on step 1, extracting three or more pairs of same-name straight line features from the point clouds of two adjacent LiDAR stations S1 and S2, respectively, and describing the extracted straight line features with Plücker coordinates; selecting the S1 station as the reference station and the S2 station as the station to be registered, and calculating the point cloud registration parameters between the two adjacent LiDAR stations based on steps 2 and 3; performing the spatial similarity transformation on all point clouds of the S2 station based on the calculated point cloud registration parameters, thereby realizing the fusion of the point cloud data of the two adjacent LiDAR stations;
step 5, performing accuracy assessment on the fusion result of the point cloud data between the two adjacent LiDAR stations.
10. The method of claim 9, wherein step 5 comprises:
taking the straight line features as the constraint condition for solving the point cloud registration parameters between adjacent stations, the observed values used comprise the direction vectors and the moments of the same-name straight line features of the reference station and of the station to be registered; according to formula (20), 6 error equations can be listed for each pair of same-name features, and the number of parameters to be solved is 7, namely the 7 parameters describing the spatial similarity transformation; the accuracy of the registration is evaluated by the calculation result of the following expression:
σ = sqrt(VᵀV / (6n − 7)) (43)
wherein σ represents the accuracy of the registration, V represents the vector of residuals of the error equations, and n represents the number of same-name straight line features extracted between the adjacent LiDAR stations.
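A minimal sketch of the accuracy evaluation, assuming the measure is the conventional a-posteriori standard deviation of unit weight over the redundancy 6n − 7 (6 error equations per pair of same-name lines, 7 parameters); the function name and error handling are this sketch's choices:

```python
import numpy as np

def registration_accuracy(residuals, n_pairs):
    # A-posteriori standard deviation of unit weight for the line-feature
    # adjustment: sqrt(V^T V / (6n - 7)), with 6 error equations per pair
    # of same-name lines and 7 transformation parameters.
    V = np.asarray(residuals, dtype=float).ravel()
    if V.size != 6 * n_pairs:
        raise ValueError("expected 6 residuals per pair of same-name lines")
    if 6 * n_pairs <= 7:
        raise ValueError("at least 2 line pairs are needed for redundancy")
    return float(np.sqrt(V @ V / (6 * n_pairs - 7)))
```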
CN202410027444.8A 2024-01-09 2024-01-09 Point cloud registration method based on error variable model description under linear characteristic constraint Active CN117541634B (en)

Publications (2)

Publication Number Publication Date
CN117541634A true CN117541634A (en) 2024-02-09
CN117541634B CN117541634B (en) 2024-03-22


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6591131B1 (en) * 2019-02-07 2019-10-16 三菱電機株式会社 Structure measuring apparatus and structure measuring method
CN116385507A (en) * 2023-03-29 2023-07-04 华昕设计集团有限公司 Multi-source point cloud data registration method and system based on different scales


Non-Patent Citations (1)

Title
WANG Yongbo et al.: "A registration method for LiDAR point clouds without initial values, described by Plücker coordinates under straight line feature constraints", Journal of Wuhan University (武汉大学学报), 30 September 2018, pages 1376-1384 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant