CN111709981A - Registration method of laser point cloud and analog image with characteristic line fusion - Google Patents

Registration method of laser point cloud and analog image with characteristic line fusion

Info

Publication number
CN111709981A
CN111709981A (application CN202010576614.XA)
Authority
CN
China
Prior art keywords
point
point cloud
straight line
plane
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010576614.XA
Other languages
Chinese (zh)
Inventor
高小翎
高宏松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202010576614.XA priority Critical patent/CN111709981A/en
Publication of CN111709981A publication Critical patent/CN111709981A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Abstract

The invention provides a registration method of laser point cloud and analog image with fused characteristic lines. It addresses the problems of prior-art image/point-cloud registration methods based on point cloud distance intensity images or point features, which suffer from manual point-selection errors, point cloud data interpolation errors, the need for numerous observed values, and very limited precision. A registration method of laser point cloud and simulated image based on two-dimensional and three-dimensional fused characteristic lines is provided, and a direct conversion model between image characteristic lines and point cloud characteristic lines is established. The method places no harsh requirement on the initial values of the exterior orientation elements of the image, is robust to noise, realizes automatic matching of two-dimensional and three-dimensional fused features, improves the degree of automation, and achieves high real-time performance. It has the advantages of no model error, no need for initial values of the exterior orientation elements, and only a small number of registration primitives; the calculation is fast, and the registration requirements of simulated images and ground laser point cloud data are met.

Description

Registration method of laser point cloud and analog image with characteristic line fusion
Technical Field
The invention relates to a method for registering laser point clouds with simulated images, in particular to a registration method with characteristic-line fusion, and belongs to the technical field of image registration methods.
Background
Photogrammetry obtains the three-dimensional coordinates of a target from multiple images with a certain degree of overlap and a small number of ground control points. It is a mature technology widely applied in urban surveying and mapping, topographic survey, geographic national-condition censuses and other fields, and its images carry rich texture and semantic information. However, the three-dimensional information of the target must be obtained through homonymous-point matching and forward intersection, which is easily affected by the complexity of ground objects and by occlusion between them, so it is difficult to extract high-precision three-dimensional target information from images alone. GPS/IMU-assisted photogrammetry still has obvious problems: low positioning precision, insufficiently dense homonymous matching points, easily occluded facades of urban buildings, and image distortion easily affected by topographic relief. Moreover, because urban buildings are densely distributed and urban surveying tasks are heavy and complex, photogrammetry alone can hardly meet actual requirements and cannot completely replace ground survey. Geodetic three-dimensional laser scanning is an active sensor technology for rapidly acquiring three-dimensional spatial data: it actively emits laser pulse signals of a certain frequency and wavelength toward the target surface over a range of direction and elevation angles, records the time difference between the emitted and received signals and the intensity of the echo signal, and so acquires the three-dimensional coordinates and reflection characteristics of each measured point. The technology integrates three-dimensional coordinate data with spectral information such as reflection intensity and object color, can truly reflect the form, structure and spectral characteristics of the target, and provides rich data for target identification and analysis.
Laser scanning measurement has many characteristics and advantages. First, it is active and real-time: it detects the spatial position and spectral information of the target by receiving the echoes of the laser pulses it emits, and is not limited by conditions such as time and weather. Second, it is high-density and high-precision: target information is acquired with quasi-parallel laser beams in a dot matrix or grid with a small, uniform sampling interval, which guarantees the precision of the coordinate data. Third, it is digital and automated: the processing software directly converts the laser pulse signal into a digital signal, so the pipeline is fully digital. Fourth, it is non-contact and penetrating: the target is scanned and measured in a completely non-contact mode, giving strong applicability, and the laser can penetrate sparse vegetation to reach the target surface and obtain three-dimensional data of the ground, avoiding the occlusion problems to which simulated images are prone. As laser scanning measurement meets the usage requirements and development trends of the surveying and mapping industry, it shows obvious technical advantages and broad application prospects in numerous engineering fields.
However, owing to limitations of each measurement technology or to the characteristics of the measured target, obtaining three-dimensional geodetic information by ground photogrammetry or geodetic three-dimensional laser scanning alone cannot meet actual requirements. The main reasons are: first, the precision of image-based measurement is difficult to guarantee, and its low degree of automation means it only satisfies simple and rapid visualization; second, pure point cloud data is merely high-precision surface geometry, lacks texture information of the target surface, and cannot meet users' demands for environment perception and immersive experience in scene-type applications such as virtual reality. Laser scanning data and simulated images thus have complementary advantages in describing a target: if urban modeling relies entirely on camera data acquisition, only conceptual and abstract models can be reconstructed, whereas adding ground laser scanning improves the efficiency and precision of both measurement and three-dimensional modeling. Registration of the ground laser point cloud with the simulated image is therefore a precondition for fusing the two kinds of data, which raises the registration problem. Because laser point clouds and image data are strongly complementary, their integration is widely applied in building extraction and modeling, urban surveying and mapping, and so on. However, due to installation errors of the optical camera and other causes, the exterior orientation elements of the simulated image carry large errors, so the simulated image and the laser point cloud cannot be registered well; before the two kinds of data are applied together, they must be brought into a unified coordinate system, i.e. the simulated image and the laser point cloud must be registered.
For the registration of laser point clouds with simulated images, the prior art has many limitations, summarized as follows: first, it depends strongly on point features, needs a large number of registration primitives, and has high computational complexity; second, registration methods that generate a point cloud intensity image use a two-dimensional registration model, while registration algorithms that generate a point cloud by image matching use a three-dimensional registration model, so the registration model is not rigorous; third, errors are introduced during point cloud interpolation, projection and image matching; fourth, manual point selection is needed, yet point features in a point cloud are discrete and hard to select accurately; fifth, point or line features must be paired manually, so the degree of automation is low and manual errors affect the analysis precision; sixth, initial values of the exterior orientation elements of the image are required and many iterations are needed; seventh, the methods are sensitive to noise and prone to local convergence.
Considering that prior-art registration of laser point clouds and simulated images mainly depends on two-dimensional/three-dimensional fused point features, suffers from manual point-selection errors, and needs initial values of position and attitude, the invention realizes a registration method based on two-dimensional/three-dimensional fused characteristic lines that works without initial values of the position and attitude parameters and evaluates the registration accuracy. It comprises three aspects: first, an image characteristic-line extraction method, with judgment conditions and a merging method for short straight-line segments; second, a high-precision plane-cutting and characteristic-line extraction method for geodetic three-dimensional laser point cloud data; third, a function model for the position and attitude estimation problem based on two-dimensional/three-dimensional fused characteristic lines, a position and attitude parameter resolution method without initial values, and an accuracy evaluation method.
Disclosure of Invention
Aiming at the defects of the prior art, the registration method of laser point cloud and analog image with fused characteristic lines provided by the invention has a strong constraint effect of the registration primitives and needs only a small minimum number of them; it establishes a rigorous registration model between two-dimensional and three-dimensional fused registration primitives, places no strict requirement on the initial values of the exterior orientation elements of the image, has a computation cost proportional to the number of observed values, is robust to noise, realizes automatic matching of two-dimensional/three-dimensional fused features, improves the degree of automation, and achieves high real-time performance. The method is universal and portable, with great popularization potential and broad market application prospects.
In order to achieve the technical effects, the technical scheme adopted by the invention is as follows:
the registration method of laser point cloud and analog image with fused characteristic line establishes a direct conversion model between the image characteristic line and the point cloud characteristic line, the two-dimensional and three-dimensional fusion registration method based on the characteristic line comprises a camera imaging model and a camera calibration and a position and posture estimation method of a plurality of groups of fused characteristic lines, the position and posture estimation method of the plurality of groups of fused characteristic lines comprises a characteristic line fusion registration function model, a quaternion array expression method of a rotation matrix, an analytic registration parameter and a registration precision judgment method;
extracting laser point cloud characteristic lines by adopting a characteristic line extraction method based on point cloud cutting plane intersection, managing point cloud by adopting KD-Tree, estimating normal vector and curvature by using a covariance analysis method, realizing point cloud plane cutting by adopting a region growing rough cutting method and a random sampling consistency plane fitting fine cutting method, and finally solving intersection of adjacent point cloud planes to extract high-precision straight line characteristics; extracting characteristic lines of the laser point cloud comprises point cloud preprocessing, cutting of a point cloud plane and intersection of the point cloud plane, wherein the point cloud preprocessing comprises point cloud thinning and density homogenization and point cloud denoising, the cutting of the point cloud plane comprises point cloud rough cutting and plane fitting based on region growing, and the intersection of the point cloud plane and the characteristic lines comprises estimation of adjacent straight line parameters, judgment of plane adjacent relation and solving of adjacent straight line segments;
extracting a characteristic line of the simulated image by adopting a characteristic line extraction algorithm based on the gradient direction of the image, searching a similar pixel region based on a gradient tension value and the gradient direction, determining a support region of a straight line segment by adopting a rectangular structure approximation method, extracting parameters of the straight line segment based on the center, the length and the width of a rectangular structure, and providing a short straight line segment merging judgment condition based on a straight line included angle and a distance and a merging processing method comprehensively considering the direction and the relative length of the straight line segment; the method for extracting the characteristic line of the simulated image comprises a straight line segment detection algorithm, straight line parameter estimation, short straight line segment merging processing and characteristic line screening.
In the registration method of laser point cloud and simulated image with fused characteristic lines, further, the position and attitude estimation method for multiple groups of fused characteristic lines extracts characteristic lines from the simulated image and from the geodetic three-dimensional laser point cloud respectively, and resolves the position and attitude parameters of the simulated image in the scanner coordinate system to register the two. In the characteristic-line fusion registration function model,
the imaging model of an ideal optical camera is:
qpi=C3×3qci
qci=B3×3qwi+G3×1
G3×1=[gxgygz]Tformula 2
The above formula is a functional model of a two-dimensional and three-dimensional fusion registration method based on point features, wherein q ispiCoordinates of the image point in the image coordinate system, C3x3Is an internal parameter matrix of the camera, corresponding to internal orientation elements in photogrammetry, which is obtained by calibrating the camera with optical distortion parameters, B3x3、G3x1The method comprises the following steps that a rotation matrix and a translation matrix from an object space coordinate system to a camera coordinate system are respectively, correspond to external orientation elements in photogrammetry and are parameters to be solved for two-dimensional and three-dimensional fusion registration;
the invention analyzes the registration parameter B by utilizing the geodetic three-dimensional laser point cloud and the characteristic line on the image3x3、G3x1Establishing a two-dimensional and three-dimensional fusion function model based on the characteristic line; the method for describing the straight line comprises the following steps:
firstly, the parameter of any straight line under the three-dimensional rectangular coordinate system is K ═ qi,uiWherein q isiIs the three-dimensional coordinate of any point on the straight line, viIs the unit direction vector of the line, KiThe coordinate of any point is
Figure BDA0002551488360000031
Wherein a is any real number;
secondly, the parameter of a certain straight line under the object space coordinate system is Kwi={qwi,uwi},qwi=[XiYiZi]T,uwi=[DiEiFi]TI.e. the equation of the line is DiX+EiY+FiZ+Hi=0;
Thirdly, the parameter of a certain straight line under the camera coordinate system is Kci={qci,uci},qci=[xiyifi]TI.e. the equation of the line is dix+eiy+jiz+hi=0;
for any group of homonymous two-dimensional/three-dimensional fused straight lines K_ci, K_wi: the object space line K_wi, transformed by B_{3×3}, G_{3×1} into the camera coordinate system, gives the straight line K_ci' = {q_ci', u_ci'}. Since K_wi and K_ci are lines of the same name, by the camera imaging principle K_ci' is coplanar with the line K_ci in the camera coordinate system and with the projection center S of the camera; this plane is denoted N_i and its normal vector M_i. This coplanarity condition is the basic condition of two-dimensional/three-dimensional fusion registration based on characteristic lines, and comprises two constraints: first, the direction vector u_wi of K_wi, transformed by the rotation matrix B_{3×3} into the direction vector u_ci' in the camera coordinate system, is perpendicular to M_i; second, any point p_wi on K_wi, transformed by B_{3×3}, G_{3×1} into the point p_ci' in the camera coordinate system, has distance 0 to the plane N_i, i.e. satisfies the equation of N_i. The two constraints are expressed as:

M_i^T u_ci' = 0
M_i^T p_ci' = 0    (formula 3)

and because:

u_ci' = B_{3×3} u_wi
p_ci' = B_{3×3} p_wi + G_{3×1}    (formula 4)

the following is obtained:

M_i^T B_{3×3} u_wi = 0
M_i^T (B_{3×3} p_wi + G_{3×1}) = 0    (formula 5)

The above formula is the function model of the two-dimensional/three-dimensional fusion registration algorithm based on characteristic lines;
note plane NiNormal vector of (1)
Figure BDA0002551488360000044
Plane NiThe equation in the camera coordinate system is:
Mi1x+Mi2y+Mi3z+Mi40-0 formula 6
Plane NiNormal vector M ofiBased on camera principal distance f and Kci={qci,uciFind out, because,
Figure BDA0002551488360000045
Figure BDA0002551488360000051
and the projection center S of the camera is equal to [0,0 ]]TIn plane NiSubstituting into formula 6 to obtain Ni40, then plane NiThe equation of (a) is:
(yiji-eif)x+(dif-xiji)y+(xiei-diyi) z is 0 or 9
Combining formula 4 with formula 8 yields:

(q_ci × u_ci)^T B_{3×3} u_wi = 0
(q_ci × u_ci)^T (B_{3×3} p_wi + G_{3×1}) = 0    (formula 10)

namely, written out as equations linear in the elements of B and G:

M_i1(B11·D_i + B12·E_i + B13·F_i) + M_i2(B21·D_i + B22·E_i + B23·F_i) + M_i3(B31·D_i + B32·E_i + B33·F_i) = 0
M_i1(B11·X_i + B12·Y_i + B13·Z_i + g_x) + M_i2(B21·X_i + B22·Y_i + B23·Z_i + g_y) + M_i3(B31·X_i + B32·Y_i + B33·Z_i + g_z) = 0    (formula 11)

where K is the vector of registration parameters to be found and B11…B33 are the 9 elements of the rotation matrix B; since B is a unit orthogonal matrix, the elements B11…B33 are not independent of each other.
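As an illustration of formulas 5-9, the following sketch (not part of the patent text; names follow the symbols above) evaluates the two coplanarity residuals for one pair of corresponding lines, given candidate registration parameters B and G; both residuals vanish when the pair is perfectly registered:

import numpy as np

def plane_normal(q_ci, u_ci):
    # Normal M_i of the interpretation plane N_i spanned by the image line
    # {q_ci, u_ci} and the projection center S = [0, 0, 0]^T (formulas 7-8).
    return np.cross(q_ci, u_ci)

def coplanarity_residuals(B, G, p_wi, u_wi, q_ci, u_ci):
    # Residuals of the two constraints of formula 5.
    M = plane_normal(q_ci, u_ci)
    r_direction = M @ (B @ u_wi)     # u_ci' must be perpendicular to M_i
    r_point = M @ (B @ p_wi + G)     # p_ci' must lie in the plane N_i
    return r_direction, r_point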
In the registration method of laser point cloud and simulated image with fused characteristic lines, further, while photogrammetry represents the rotation transformation with three independent angle parameters, the invention represents the rotation matrix with a quaternion p = p0 + p1 i + p2 j + p3 c, where p0, p1, p2, p3 are real numbers and i, j, c are generalized imaginary units satisfying:

i² = j² = c² = ijc = -1
ij = -ji = c, jc = -cj = i, ci = -ic = j    (formula 12)

Then p is called a quaternion; if |p| = 1 it is a unit quaternion, which represents a large-tilt-angle rotation transformation in three-dimensional space;
first, conversion of a unit quaternion to a rotation matrix:

let the unit quaternion be p = p0 + p1 i + p2 j + p3 c; the rotation matrix is obtained as:

B = | p0²+p1²-p2²-p3²   2(p1p2-p0p3)      2(p1p3+p0p2)    |
    | 2(p1p2+p0p3)      p0²-p1²+p2²-p3²   2(p2p3-p0p1)    |    (formula 13)
    | 2(p1p3-p0p2)      2(p2p3+p0p1)      p0²-p1²-p2²+p3² |

as long as the quaternion is a unit quaternion, the unit orthogonality of the rotation matrix is guaranteed without supplementing an additional constraint equation;
second, conversion of a rotation matrix to a unit quaternion:

p0 = (1/2)·√(1 + B11 + B22 + B33)
p1 = (B32 - B23)/(4p0), p2 = (B13 - B31)/(4p0), p3 = (B21 - B12)/(4p0)    (formula 14)

the quaternions p and -p represent the same rotation matrix;
thirdly, conversion of Euler angles to a unit quaternion:

(formula 15: the components p0…p3 of the unit quaternion are formed from products of the half-angle sines and cosines of the three Euler angles l, m, n)
fourth, conversion of a unit quaternion to Euler angles:

to obtain the Euler angles from a quaternion, the quaternion is first expressed as a rotation matrix, and the rotation matrix is then converted into Euler angles, with l taking values from -90° to +90° and m, n taking values from -180° to +180°; the conversion from the rotation matrix to the Euler angles is:

(formula 16: l is obtained from the arcsine of one element of the rotation matrix, and m, n from arctangents of ratios of the remaining elements)
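The two conversions of formulas 13 and 14 can be written compactly; the sketch below is an illustrative numpy version (the normalisation of p is added defensively and is not part of the formulas, and formula 14 assumes the trace of B exceeds -1 so that p0 does not vanish):

import numpy as np

def quat_to_rot(p):
    # Formula 13: rotation matrix from the unit quaternion p = (p0, p1, p2, p3).
    p0, p1, p2, p3 = p / np.linalg.norm(p)   # enforce |p| = 1
    return np.array([
        [p0*p0 + p1*p1 - p2*p2 - p3*p3, 2*(p1*p2 - p0*p3),             2*(p1*p3 + p0*p2)],
        [2*(p1*p2 + p0*p3),             p0*p0 - p1*p1 + p2*p2 - p3*p3, 2*(p2*p3 - p0*p1)],
        [2*(p1*p3 - p0*p2),             2*(p2*p3 + p0*p1),             p0*p0 - p1*p1 - p2*p2 + p3*p3]])

def rot_to_quat(B):
    # Formula 14: unit quaternion from a rotation matrix.
    p0 = 0.5 * np.sqrt(1.0 + B[0, 0] + B[1, 1] + B[2, 2])
    return np.array([p0,
                     (B[2, 1] - B[1, 2]) / (4 * p0),
                     (B[0, 2] - B[2, 0]) / (4 * p0),
                     (B[1, 0] - B[0, 1]) / (4 * p0)])

Round-tripping rot_to_quat(quat_to_rot(p)) returns p or -p, consistent with the remark that both represent the same rotation matrix.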
the registration method of the laser point cloud and the analog image with the characteristic line fused further comprises the steps of analyzing registration parameters and setting a unit quaternion array p as p0+p1i+p2j+p3And c, expressing the rotation matrix by using a unit quaternion array as follows:
Figure BDA0002551488360000064
recording:
Figure BDA0002551488360000071
for s groups of line pairs, 2s constraint equations can be composed:

N_{2s×13} K_{13×1} = 0    (formula 18)

In the above formula the coefficients d_ij, e_ij of N are composed of q_wi and u_wi; N is a coefficient matrix of 2s rows and 13 columns; K is a column vector of 13 unknowns; |p| = 1, and the first 10 elements of K are not independent of each other;
firstly, the elements of the unknown vector K are treated as independent parameters, so the equation has 13 unknowns; the invention solves for a basis of the solution space by singular value decomposition:

N = V H U^T    (formula 19)

In formula 19, H contains the singular values of N, i.e. the square roots of the non-zero eigenvalues of N^T N and N N^T, arranged as a diagonal matrix of order 13; V and U are orthogonal matrices of order 2s and 13 respectively; each column of U is an orthonormal eigenvector of N^T N, and each column of V is an orthonormal eigenvector of N N^T. The eigenvector corresponding to the smallest eigenvalue of N^T N is the least-squares solution for the 13 parameters of K: with the elements on the diagonal of H arranged from large to small, the last column of U, corresponding to the smallest singular value, gives the 13 parameters of K;
for formula 18, the general solution is expressed through its basis of solutions as:

K = Σ_{i=1..m} w_i u_i,  u_i = [λ_i1, λ_i2, …, λ_i13]^T    (formula 20)

where m, u_i, w_i are respectively the number of base solutions, the i-th base solution and its coefficient; if formula 18 has S independent equations, then m = 13 - S. Since the first 10 elements of K_{13×1} are the rotation parameters, K_{13×1}(1:10) is computed first:

K_{13×1}(1:10) = Σ_{i=1..m} w_i u_i(1:10)    (formula 21)

where K_{13×1}(1:10) denotes the first 10 elements of K_{13×1} and u_i(1:10) the first 10 elements of each base solution; the u_i are known once the basis is computed, so the coefficients w_i remain to be solved. The first 10 elements of K are not independent but correlated, which establishes constraint equations among the coefficients; after the coefficients are solved by singular value decomposition, the first 10 elements of K are obtained, i.e. the quaternion representation of the rotation matrix B. Substituting back into the equations yields the translation vector G, which is converted into the exterior orientation elements commonly used in photogrammetry.
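A minimal numpy sketch of this singular-value-decomposition step (illustrative, not the patent's code): the least-squares solution of N·K = 0 under ‖K‖ = 1 is the right singular vector of the smallest singular value, and the m base solutions of formula 20 are the singular vectors of the m smallest singular values.

import numpy as np

def solve_homogeneous(N):
    # Formula 19: np.linalg.svd sorts singular values from large to small,
    # so the last row of Vt is the singular vector of the smallest one.
    _, _, Vt = np.linalg.svd(N)
    return Vt[-1]                 # least-squares solution of N K = 0

def null_space_basis(N, m):
    # The m base solutions u_i of formula 20, one per row.
    _, _, Vt = np.linalg.svd(N)
    return Vt[-m:]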
The registration method of laser point cloud and simulated image with fused characteristic lines further comprises a qualitative and quantitative evaluation method for the registration accuracy, comprising the following steps:
firstly, the point cloud projection effect: the laser point cloud is projected onto the image plane based on the interior orientation elements of the camera and the resolved exterior orientation elements of the image, and the registration of the laser point cloud and the image is judged qualitatively by observing how the point cloud contours and obvious point, line and surface features align with the corresponding features on the image (a projection sketch follows this list);
secondly, single-image feature-line evaluation: based on the interior orientation elements and the resolved exterior orientation elements of the image, the point cloud characteristic lines are projected onto the image plane; the mean perpendicular distance from the two endpoints of each projected point-cloud characteristic line to the corresponding image characteristic line is computed, together with the angle between the projected segment and the corresponding image characteristic line;
and thirdly, stereo measurement based on two registered images: after the two images are registered with the ground laser point cloud, the three-dimensional coordinates of target points are measured by forward intersection and compared with the homonymous points in the original laser point cloud; the root-mean-square error of the point positions gives the absolute measurement accuracy of the registration method of the invention.
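A minimal sketch of the projection used in the first check, with the pinhole model of formula 2 and no distortion term (function and variable names are illustrative assumptions):

import numpy as np

def project_points(points_w, B, G, C):
    # Object space -> camera space (formula 2): q_ci = B q_wi + G.
    q_c = points_w @ B.T + G
    q_c = q_c[q_c[:, 2] > 0]           # keep points in front of the camera
    # Camera space -> pixel coordinates: q_pi = C q_ci, then dehomogenise.
    q_p = q_c @ C.T
    return q_p[:, :2] / q_p[:, 2:3]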
The registration method of laser point cloud and simulated image with fused characteristic lines further comprises region-growing coarse cutting of the point cloud: one or several simple patches are selected as seed regions; according to the local differential geometric attributes of the point cloud, the similarity between neighborhood points and the seed region is judged, and neighborhood points with sufficient similarity are continuously added to the region, so that the seed region keeps growing and expanding. During this process the parameters of the fitted plane or surface are continuously updated, until no point in the neighborhood of the plane or surface satisfies the normal-angle or distance threshold. The cutting effect and precision depend mainly on the position of the seed point or surface, the similarity measure, and the growth order, specified as follows (a simplified sketch follows this list):
firstly, a local plane fitted from a seed point and the point set within a certain neighborhood is taken as the seed surface; several neighboring points of the candidate seed surface are tested to determine whether they form a plane; if all parameters of a candidate fitted plane satisfy the preset thresholds it is accepted as the seed surface, otherwise other points are used to continue searching for a seed surface;
secondly, the similarity measure requires that the angle between the normal vector of a candidate neighborhood point and the normal vector of the seed surface is smaller than a set threshold, and that the distance from the candidate point to an already-cut point or to the cutting plane is smaller than a set threshold; for airborne laser point clouds, height difference and local gradient information can assist the cutting. Specifically: points within the neighborhood radius H of the boundary points of the growth surface are taken as candidate growth points; if the perpendicular distance from a candidate growth point to the initial seed surface S_0 is smaller than H, the point is taken into the seed point set of the growth surface; if the angle between the normal vector of the candidate growth point and the normal vector of the growth surface satisfies the angle threshold, the point is absorbed as a growth point;
thirdly, for point cloud data with complex distribution or irregular shape, the growth order adopts a gradual, local-to-global cutting mode: after the seed points or surfaces are determined, cutting expands from local to global by searching the neighborhood for points satisfying the geometric threshold conditions.
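The following simplified sketch illustrates the growth loop under the two similarity criteria (fixed seed-surface normal, KD-Tree neighborhoods; a full implementation would refit the seed surface as it grows, as the text describes):

import numpy as np
from scipy.spatial import cKDTree

def grow_region(points, normals, seed_idx, radius_h, angle_deg):
    tree = cKDTree(points)
    n0, q0 = normals[seed_idx], points[seed_idx]    # seed surface S_0
    cos_t = np.cos(np.radians(angle_deg))
    region, frontier = {seed_idx}, [seed_idx]
    while frontier:
        i = frontier.pop()
        for j in tree.query_ball_point(points[i], radius_h):
            if j in region:
                continue
            dist_ok = abs((points[j] - q0) @ n0) < radius_h   # distance test
            ang_ok = abs(normals[j] @ n0) >= cos_t            # normal-angle test
            if dist_ok and ang_ok:
                region.add(j)
                frontier.append(j)
    return sorted(region)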
The registration method of laser point cloud and simulated image with fused characteristic lines further comprises fine cutting of the point cloud by plane fitting, which estimates the parameters of the plane model with the random sample consensus (RANSAC) algorithm; fine cutting based on RANSAC plane fitting and extraction of high-precision plane parameters comprise two aspects: plane model fitting, and point cloud plane parameter estimation based on the RANSAC algorithm. The RANSAC algorithm samples the data randomly many times: each time a minimal sample is drawn to compute initial model parameters, and according to these parameters all data are divided into inliers, within a certain error range of the model, and outliers beyond it, with most data being inliers; after repeated random sampling, the largest consensus set within the allowed error range determines the optimal model parameters.
The RANSAC estimation of the plane model parameters comprises the following three steps (a compact sketch follows this list):
firstly, s (s > 3) points are randomly selected from sample data containing M points, and the model parameters of an initial fitting plane are estimated;
secondly, according to the geometric distance threshold parameters, the set S_i of inlier data points consistent with the plane model is counted, together with the ratio m of inliers consistent with the model in S_i; the geometric distance thresholds comprise a normal-angle threshold and a distance threshold, both of which express the coplanarity condition;
thirdly, the probability that the model parameters are correct after c samplings is computed as q = 1 - (1 - m^s)^c, and q is compared with a preset probability threshold Q for model correctness; if q > Q, the final model parameters are computed from the current S_i; if not, the unmarked points are returned and the cycle repeats until the optimal fitting parameters are obtained.
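A compact sketch of these three steps for the minimal sample of 3 points (threshold values are placeholders, and the stopping rule based on q is simplified to a fixed trial count):

import numpy as np

def ransac_plane(points, dist_thresh, trials=1000, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    best = np.zeros(len(points), dtype=bool)
    for _ in range(trials):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-12:      # degenerate (collinear) sample
            continue
        n /= np.linalg.norm(n)
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < dist_thresh   # distance criterion
        if inliers.sum() > best.sum():
            best = inliers
    centroid = points[best].mean(axis=0)                 # final least-squares
    n = np.linalg.svd(points[best] - centroid)[2][-1]    # plane over inliers
    return n, -n @ centroid, best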
The registration method of laser point cloud and simulated image with fused characteristic lines further comprises judging the adjacency relation of two point cloud planes, for which three threshold parameters are mainly adopted:
first, the minimum distance between the point clouds of the two planes;
second, the minimum distance from a point of the plane to be analyzed to the reference plane, and the minimum distance from a point of the reference plane to the plane to be analyzed;
third, the angle between the normal vectors of the two planes;
the adjacency line obtained by directly intersecting the equations of two non-parallel adjacent planes is expressed by the coordinates of a point on the line and the normalized direction vector of the line. To accurately extract the straight-line-segment features where roof surfaces intersect, the invention also needs the endpoint coordinates of the line feature: all points of the two adjacent surfaces are projected onto the adjacency line to obtain two collinear point sets, and the intersection of the two point sets, i.e. the set of points lying on the adjacency line of both planes, gives the coordinates of the two endpoints of the adjacent line segment, as illustrated in the sketch below.
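A sketch of this adjacency-line computation, with each plane given as n·x + d = 0 with unit normal (names are illustrative assumptions):

import numpy as np

def plane_intersection(n1, d1, n2, d2):
    u = np.cross(n1, n2)                       # direction of the common line
    u /= np.linalg.norm(u)
    A = np.stack([n1, n2, u])                  # a point satisfying both plane
    q = np.linalg.solve(A, [-d1, -d2, 0.0])    # equations, with zero offset along u
    return q, u

def segment_endpoints(q, u, pts1, pts2):
    t1 = (pts1 - q) @ u                        # projections of each plane's
    t2 = (pts2 - q) @ u                        # points onto the line
    lo = max(t1.min(), t2.min())               # intersection of the two
    hi = min(t1.max(), t2.max())               # collinear point sets
    return q + lo * u, q + hi * u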
The registration method of laser point cloud and simulated image with fused characteristic lines further comprises, in the estimation of straight-line parameters, setting the functional relation of x and y as y = d_0 + d_1 x, where d_0 is the intercept and d_1 the slope; the relation has two undetermined parameters, so at least 2 points are needed to solve it, and the invention estimates the line parameters with the RANSAC method. The RANSAC method achieves this by selecting random subsets of the input data; taking a selected subset as the sample points, the parameters are estimated as follows (a sketch follows these steps):
step 1, the unknown parameters d_0, d_1 are computed from the selected sample points, giving initial line parameters;
step 2, all data are tested against the initial model obtained in step 1, and a point is added to the inlier set if it fits the estimated model;
step 3, if the inlier set contains enough points, i.e. more than a threshold, the estimated initial model is considered reasonable, and the line parameters are updated from the updated inlier set;
step 4, the optimal parameters are selected with the fitting residuals as the evaluation criterion; the procedure repeats until a fixed number of iterations is reached, and each candidate model is either discarded, because its inlier set is too small, or accepted as better than the previous model.
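Steps 1-4 map onto the sketch below for the line y = d_0 + d_1·x (the inlier threshold and trial count are placeholders, not values from the patent):

import numpy as np

def ransac_line(x, y, dist_thresh, trials=500, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    best = (0.0, 0.0, np.zeros(len(x), dtype=bool))
    for _ in range(trials):
        i, j = rng.choice(len(x), 2, replace=False)
        if x[i] == x[j]:
            continue                            # vertical sample, skip
        d1 = (y[j] - y[i]) / (x[j] - x[i])      # step 1: slope and intercept
        d0 = y[i] - d1 * x[i]                   # from the 2 sample points
        inl = np.abs(y - (d0 + d1 * x)) / np.sqrt(1 + d1 * d1) < dist_thresh
        if inl.sum() > best[2].sum():           # steps 2-4: keep the best model
            best = (d0, d1, inl)
    d0, d1, inl = best
    if inl.sum() >= 2:
        d1, d0 = np.polyfit(x[inl], y[inl], 1)  # step 3: refit on the inlier set
    return d0, d1, inl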
The registration method of laser point cloud and simulated image with fused characteristic lines further comprises a merging method for short straight-line segments based on segment attributes such as the direction and relative length of the segments to be merged; taking the direction and relative length of the segments as weighting parameters, the direction and position of the line containing the merged segment are computed, giving a good merging result. The method has the following two steps (see the sketch after these steps):
step 1, compute the position and direction of the line containing the merged segment. Let point Q be a point on the merged segment; considering first the influence of the lengths of the two segments to be merged on the position of Q, the point coordinates are defined as:

Q = (k_i(a + b) + k_j(c + d)) / (2(k_i + k_j))    (formula 22)

where a, b and c, d are the two endpoints of segments i and j respectively, k_i, k_j are the lengths of the two segments, and (a_x, a_y) are the coordinates of point a, the other 3 points being denoted likewise; next, considering the influence of the directions of the two segments to be merged on the direction of the line containing the merged segment, the direction of the merged segment is defined as:

t = (k_i t_i + k_j t_j) / (k_i + k_j)    (formula 23)

where t denotes the direction of the merged line, and k_i, t_i denote the length and direction of segment i;
step 2, determine the two endpoints of the merged segment. After the position and direction of the line containing the merged segment are determined, the two endpoints are determined: a, b, c and d are each projected onto the merged line, and the segment spanning the maximum extent of the projected points is taken as the merging result.
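A sketch of the two steps; here the direction average is computed on direction vectors, which is equivalent to the length-weighted direction of formula 23 once the two segments are oriented consistently:

import numpy as np

def merge_segments(a, b, c, d):
    a, b, c, d = (np.asarray(p, float) for p in (a, b, c, d))
    ki, kj = np.linalg.norm(b - a), np.linalg.norm(d - c)
    Q = (ki * (a + b) + kj * (c + d)) / (2 * (ki + kj))   # formula 22
    ui, uj = (b - a) / ki, (d - c) / kj
    if ui @ uj < 0:            # orient both segments the same way first
        uj = -uj
    u = ki * ui + kj * uj      # length-weighted direction (formula 23)
    u /= np.linalg.norm(u)
    s = np.array([(p - Q) @ u for p in (a, b, c, d)])     # step 2: project
    return Q + s.min() * u, Q + s.max() * u               # widest span wins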
Compared with the prior art, the invention has the following contributions and innovation points:
the method for registering the laser point cloud and the analog image by fusing the characteristic lines, provided by the invention, aims at the problems that the registration method of the image and the point cloud based on the point cloud distance intensity image or the point characteristic in the prior art has many defects of manual point selection error, point cloud data interpolation error, need of numerous observed values and the like, and the precision is very limited. The method for registering the laser point cloud and the simulated image based on the two-dimensional and three-dimensional fusion characteristic lines is provided, a direct conversion model between the image characteristic lines and the point cloud characteristic lines is established, the method has the advantages of no model error, no need of an external orientation element initial value, only need of a small number of registration elements and the like, the calculation speed is high, and the registration requirement of the simulated image and the ground laser point cloud data is met.
The point cloud characteristic line extraction adopts a characteristic line extraction method based on point cloud cutting plane intersection, KD-Tree management point cloud is adopted, normal vector and curvature are estimated by using a covariance analysis method, a region growing rough cutting method and a random sampling consistency plane fitting fine cutting method are adopted to realize point cloud plane cutting, and finally high-precision straight line characteristics are obtained by solving intersection of adjacent point cloud planes.
The method comprises the steps of extracting the characteristic line of the simulated image, searching the same type of pixel area based on the gradient tension value and the gradient direction by adopting a characteristic line extraction algorithm based on the gradient direction of the image, determining the support area of a straight line section by adopting a rectangular structure approximation method, and finally extracting the parameters of the straight line section based on the center, the length and the width of the rectangular structure. And providing a short straight line section combination judgment condition based on the straight line included angle and the distance and a combination processing method comprehensively considering the direction and the relative length of the straight line section. The method has the advantages of good noise immunity and robustness, high positioning precision of the characteristic line, small error of the registration parameter analyzed by the registration algorithm, high registration precision and capability of meeting the requirements of practical application.
And fourthly, according to the registration method of the laser point cloud and the analog image with the fused characteristic lines, the linear segment detection algorithm has the advantages of real-time performance, accuracy and robustness, high calculation efficiency, no need of setting parameters excessively, linear time complexity, capability of controlling false straight lines and the like. Feature lines are extracted from the aerial image based on a straight-line segment detection algorithm, parameters such as the length, the angle, the distance between endpoints, the vertical distance between line segments and the like are combined for optimization, and finally semi-automatic and high-precision registration of the aerial image of the urban area and the airborne laser radar is achieved.
The registration method of the laser point cloud and the simulated image with the fused characteristic lines has the advantages that the constraint effect of registration elements is strong, the required minimum registration element number is small, a strict registration model between two-dimensional and three-dimensional fusion registration elements is established, the requirement on the initial value of the external orientation element of the image is not strict, the calculated time cost is in direct proportion to the observed value number, the robustness on noise is strong, the automatic matching of two-dimensional and three-dimensional fusion characteristics is realized, the automation degree is improved, and the higher real-time performance is achieved. The method has very important value in the aspects of scientific research and application and social economic development requirements, has small limitation, universality and transportability, has huge popularization and application potential, and has wide market application prospect.
Drawings
FIG. 1 is a schematic flow chart of the registration method of laser point cloud and simulated image with feature line fusion according to the present invention.
Fig. 2 is a schematic diagram of a camera calibration photographing method of the invention.
FIG. 3 is a schematic diagram of extracting feature lines based on plane intersection of point cloud cutting.
FIG. 4 is a flow chart of the point cloud plane cutting of the region growing and RANSAC model fitting of the present invention.
FIG. 5 is a schematic diagram of the invention of region growing based point cloud segmentation.
FIG. 6 is a flow chart of the invention for extracting feature lines by intersecting point cloud planes.
Fig. 7 is a schematic view of the merging method of the present invention considering the direction and relative length of the straight line segment.
Detailed Description
The technical solution of the registration method of the laser point cloud with fused feature lines and the simulated image provided by the present invention is further described below with reference to the accompanying drawings, so that those skilled in the art can better understand the present invention and can implement the present invention.
The registration method of laser point cloud and simulated image with fused characteristic lines provided by the invention establishes a direct conversion model between the image characteristic lines and the point cloud characteristic lines, and therefore has the advantages of no model error, no need for initial values of the exterior orientation elements, and only a small number of registration primitives; the computation is fast, and the registration requirements of simulated images and ground laser point cloud data are met. The two-dimensional/three-dimensional fusion registration method based on characteristic lines comprises a camera imaging model, camera calibration, and a position and attitude estimation method for multiple groups of fused characteristic lines; the latter comprises a characteristic-line fusion registration function model, a quaternion expression of the rotation matrix, resolution of the registration parameters, and a registration accuracy evaluation method, as shown in FIG. 1.
The laser point cloud characteristic lines are extracted with a method based on the intersection of point cloud cutting planes: the point cloud is managed with a KD-Tree, normal vectors and curvature are estimated by covariance analysis (see the sketch below), point cloud plane cutting is realized by region-growing coarse cutting and RANSAC plane-fitting fine cutting, and finally high-precision straight-line features are extracted by intersecting adjacent point cloud planes; the characteristic lines have high positioning precision and good noise immunity and robustness. Extraction of the laser point cloud characteristic lines comprises point cloud preprocessing, point cloud plane cutting, and extraction of characteristic lines by intersecting the point cloud planes; preprocessing comprises point cloud thinning, density homogenization and denoising; plane cutting comprises region-growing coarse cutting and plane fitting; and the intersection step comprises estimation of the adjacency-line parameters, judgment of the plane adjacency relation, and solution of the adjacent straight-line segments.
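As referenced above, a sketch of the covariance analysis for normals and curvature (k is an illustrative neighborhood size; the surface-variation ratio of the smallest eigenvalue serves as the curvature measure):

import numpy as np
from scipy.spatial import cKDTree

def normals_and_curvature(points, k=15):
    tree = cKDTree(points)
    normals = np.empty_like(points)
    curvature = np.empty(len(points))
    for i, p in enumerate(points):
        _, idx = tree.query(p, k)                 # k nearest neighbors
        nbrs = points[idx] - points[idx].mean(axis=0)
        cov = nbrs.T @ nbrs / k                   # neighborhood covariance
        w, v = np.linalg.eigh(cov)                # eigenvalues ascending
        normals[i] = v[:, 0]                      # smallest-eigenvalue direction
        curvature[i] = w[0] / w.sum()             # surface variation
    return normals, curvature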
The characteristic lines of the simulated image are extracted with an algorithm based on the image gradient direction: regions of similar pixels are searched using the gradient magnitude and direction, the support region of a straight-line segment is determined by rectangular-structure approximation, and finally the segment parameters are extracted from the center, length and width of the rectangular structure. A merging judgment condition for short straight-line segments based on the angle and distance between lines is provided, together with a merging method that considers both the direction and the relative length of the segments. Experimental results prove that the method has good noise immunity and robustness, high positioning precision of the characteristic lines, small errors in the registration parameters resolved by the registration algorithm, and high registration precision, meeting the needs of practical applications. Extraction of the characteristic lines of the simulated image comprises a straight-line-segment detection algorithm, straight-line parameter estimation, merging of short straight-line segments, and characteristic-line screening.
Feature line-based two-dimensional and three-dimensional fusion registration method
(I) Camera imaging model and camera calibration
The ground laser point cloud and the simulated image are registered using the interior orientation elements of the camera; to improve the registration precision, the simulated image is corrected for optical distortion. The camera imaging model comprises a pinhole imaging model and an optical distortion model: the pinhole model expresses the transformation between an object-space three-dimensional point and the ideal image point coordinates, and the distortion model expresses the difference between the ideal and the actual image point coordinates.
Camera calibration resolves the spatial transformation relations among the camera coordinate system, the object coordinate system and the image coordinate system. The calibration parameters of a single camera comprise: first, the internal parameters, i.e. the 3 interior orientation elements (principal distance and principal point); second, the distortion parameters, i.e. radial and tangential distortion parameters; third, the external parameters, i.e. the 6 exterior orientation elements. Solving the internal parameters of the camera is internal calibration, and solving the exterior orientation parameters is external calibration; the internal and distortion parameters are the main objects of calibration.
Camera calibration adopts a multi-photo space resection method in a three-dimensional control field. The control field consists of a number of three-dimensional control points with known spatial coordinates, measured accurately by angle intersection with a theodolite or total station and obtained by adjustment; the control points have depth in their spatial distribution, reliable positions, and no deformation.
The calibration method adopted by the invention: several positions are selected in front of the control field, and a single camera acquires images of the three-dimensional control field from different angles and directions, ensuring sufficient overlap between the images taken by the same camera at different positions, as shown in FIG. 2. Following the collinearity condition equations, the multi-photo space resection calibration takes the image point coordinates as the observations U and solves the interior orientation elements J_in of the photo and the additional distortion parameters J_ad, while also solving the exterior orientation elements J_out of the photo; K is the corresponding residual vector:

U = D·J_out + E·J_in + F·J_ad - K    (formula 1)

where D, E, F are coefficient matrices; the unknown parameters J_out of each photo, J_in and J_ad are obtained by the least-squares principle with successive iteration (a generic sketch of such an iteration follows). The initial values of the optical distortion parameters of the camera are set to zero, and the initial values of the interior and exterior orientation elements are obtained by the three-dimensional direct linear transformation method.
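A generic sketch of such an iterative least-squares adjustment; the construction of the design sub-matrices D, E, F from the collinearity equations is application-specific and omitted here, so build_design is an assumed callback, not part of the patent:

import numpy as np

def iterate_adjustment(build_design, x0, max_iter=10, tol=1e-10):
    # build_design(x) must return the Jacobian A = [D E F] stacked over all
    # image points and the reduced observation vector l at the current x.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        A, l = build_design(x)
        dx = np.linalg.lstsq(A, l, rcond=None)[0]   # least-squares correction
        x = x + dx
        if np.linalg.norm(dx) < tol:                # successive iteration
            break
    return x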
According to the invention, if the calibration point is selected on the two images, the three-dimensional coordinate of the calibration point is obtained by carrying out forward intersection on the basis of the calibrated internal parameter and external parameter, and the three-dimensional coordinate is compared with the known coordinate of the calibration point to judge the calibration precision of the camera.
(II) Position and attitude estimation method for multiple groups of fused characteristic lines
The position and attitude estimation method of the multiple groups of fusion characteristic lines extracts the characteristic lines from the simulated image and the geodetic three-dimensional laser point cloud respectively, analyzes the position and attitude parameters of the simulated image under the scanner coordinate system, and realizes the registration of the two.
1. Feature line fusion registration function model
The imaging model of an ideal optical camera is:

q_pi = C_{3×3} q_ci
q_ci = B_{3×3} q_wi + G_{3×1}
G_{3×1} = [g_x, g_y, g_z]^T    (formula 2)

The above formula is the function model of the two-dimensional/three-dimensional fusion registration method based on point features, where q_pi is the coordinate of the image point in the image coordinate system; C_{3×3} is the internal parameter matrix of the camera, corresponding to the interior orientation elements in photogrammetry, obtained together with the optical distortion parameters by camera calibration; B_{3×3} and G_{3×1} are the rotation matrix and the translation vector from the object space coordinate system to the camera coordinate system, corresponding to the exterior orientation elements in photogrammetry, and are the parameters to be solved in two-dimensional/three-dimensional fusion registration.
The invention resolves the registration parameters B_{3×3}, G_{3×1} using the geodetic three-dimensional laser point cloud and the characteristic lines on the image, and establishes a two-dimensional/three-dimensional fusion function model based on the characteristic lines.
A straight line is described as follows:
firstly, the parameters of any straight line in a three-dimensional rectangular coordinate system are K_i = {q_i, u_i}, where q_i is the three-dimensional coordinate of a point on the line and u_i is the unit direction vector of the line; the coordinate of any point on K_i is

q = q_i + a·u_i

where a is any real number;
secondly, the parameters of a straight line in the object space coordinate system are K_wi = {q_wi, u_wi}, q_wi = [X_i, Y_i, Z_i]^T, u_wi = [D_i, E_i, F_i]^T, i.e. the equation of the line is D_i X + E_i Y + F_i Z + H_i = 0;
thirdly, the parameters of a straight line in the camera coordinate system are K_ci = {q_ci, u_ci}, q_ci = [x_i, y_i, f]^T, u_ci = [d_i, e_i, j_i]^T, i.e. the equation of the line is d_i x + e_i y + j_i z + h_i = 0.
For any group of homonymous two-dimensional/three-dimensional fused straight lines K_ci, K_wi: the object space line K_wi, transformed by B_{3×3}, G_{3×1} into the camera coordinate system, gives the straight line K_ci' = {q_ci', u_ci'}. Since K_wi and K_ci are lines of the same name, by the camera imaging principle K_ci' is coplanar with the line K_ci in the camera coordinate system and with the projection center S of the camera; this plane is denoted N_i and its normal vector M_i. This coplanarity condition is the basic condition of two-dimensional/three-dimensional fusion registration based on characteristic lines, and comprises two constraints: first, the direction vector u_wi of K_wi, transformed by the rotation matrix B_{3×3} into the direction vector u_ci' in the camera coordinate system, is perpendicular to M_i; second, any point p_wi on K_wi, transformed by B_{3×3}, G_{3×1} into the point p_ci' in the camera coordinate system, has distance 0 to the plane N_i, i.e. satisfies the equation of N_i. The two constraints are expressed as:

M_i^T u_ci' = 0
M_i^T p_ci' = 0    (formula 3)

and because:

u_ci' = B_{3×3} u_wi
p_ci' = B_{3×3} p_wi + G_{3×1}    (formula 4)

the following is obtained:

M_i^T B_{3×3} u_wi = 0
M_i^T (B_{3×3} p_wi + G_{3×1}) = 0    (formula 5)

The above formula is the function model of the two-dimensional/three-dimensional fusion registration algorithm based on characteristic lines.
The normal vector of plane N_i is denoted

M_i = [M_i1, M_i2, M_i3]^T

and the equation of plane N_i in the camera coordinate system is:

M_i1 x + M_i2 y + M_i3 z + M_i4 = 0    (formula 6)

The normal vector M_i of plane N_i is found from the camera principal distance f and K_ci = {q_ci, u_ci}, because

M_i = q_ci × u_ci    (formula 7)

M_i = [y_i j_i - e_i f, d_i f - x_i j_i, x_i e_i - d_i y_i]^T    (formula 8)

and the projection center S = [0, 0, 0]^T of the camera lies in plane N_i; substituting into formula 6 gives M_i4 = 0, so the equation of plane N_i is:

(y_i j_i - e_i f)x + (d_i f - x_i j_i)y + (x_i e_i - d_i y_i)z = 0    (formula 9)
Combining formula 4 with formula 8 yields:

(q_ci × u_ci)^T B_{3×3} u_wi = 0
(q_ci × u_ci)^T (B_{3×3} p_wi + G_{3×1}) = 0    (formula 10)

namely, written out as equations linear in the elements of B and G:

M_i1(B11·D_i + B12·E_i + B13·F_i) + M_i2(B21·D_i + B22·E_i + B23·F_i) + M_i3(B31·D_i + B32·E_i + B33·F_i) = 0
M_i1(B11·X_i + B12·Y_i + B13·Z_i + g_x) + M_i2(B21·X_i + B22·Y_i + B23·Z_i + g_y) + M_i3(B31·X_i + B32·Y_i + B33·Z_i + g_z) = 0    (formula 11)

where K is the vector of registration parameters to be found and B11…B33 are the 9 elements of the rotation matrix B; since B is a unit orthogonal matrix, the elements B11…B33 are not independent of each other.
2. Quaternion expression method of the rotation matrix
Photogrammetry represents the rotation transformation with three independent angle parameters; the invention represents the rotation matrix with a quaternion p = p0 + p1 i + p2 j + p3 c, where p0, p1, p2, p3 are real numbers and i, j, c are generalized imaginary units satisfying:

i² = j² = c² = ijc = -1
ij = -ji = c, jc = -cj = i, ci = -ic = j    (formula 12)

Then p is called a quaternion; if |p| = 1 it is a unit quaternion, which represents the large-tilt-angle rotation transformation in three-dimensional space.
First, the unit quaternion array is converted to a rotation matrix:
Let the unit quaternion be p = p_0 + p_1·i + p_2·j + p_3·c; the rotation matrix is obtained as:
B = | p_0^2+p_1^2-p_2^2-p_3^2   2(p_1p_2-p_0p_3)          2(p_1p_3+p_0p_2)        |
    | 2(p_1p_2+p_0p_3)          p_0^2-p_1^2+p_2^2-p_3^2   2(p_2p_3-p_0p_1)        |
    | 2(p_1p_3-p_0p_2)          2(p_2p_3+p_0p_1)          p_0^2-p_1^2-p_2^2+p_3^2 |    formula 13
as long as the quaternion array is a unit quaternion array, the unit orthogonality of the rotation matrix can be ensured, and an additional constraint equation does not need to be supplemented.
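A minimal sketch of this conversion (assuming the convention of formula 13; the function name is illustrative):

```python
import numpy as np

def quat_to_rotation(p):
    """Unit quaternion (p0, p1, p2, p3) -> 3x3 rotation matrix (formula 13)."""
    p = np.asarray(p, dtype=float)
    p0, p1, p2, p3 = p / np.linalg.norm(p)   # normalize: guarantees orthogonality
    return np.array([
        [p0*p0 + p1*p1 - p2*p2 - p3*p3, 2*(p1*p2 - p0*p3),             2*(p1*p3 + p0*p2)],
        [2*(p1*p2 + p0*p3),             p0*p0 - p1*p1 + p2*p2 - p3*p3, 2*(p2*p3 - p0*p1)],
        [2*(p1*p3 - p0*p2),             2*(p2*p3 + p0*p1),             p0*p0 - p1*p1 - p2*p2 + p3*p3],
    ])
```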
Second, the rotation matrix is converted to a unit quaternion array:
p_0 = (1/2)·sqrt(1 + B_11 + B_22 + B_33)
p_1 = (B_32 - B_23)/(4p_0), p_2 = (B_13 - B_31)/(4p_0), p_3 = (B_21 - B_12)/(4p_0)    formula 14
the quaternion groups p and-p represent the same rotation matrix.
Thirdly, the euler angle is converted into a unit quaternion array:
[equation image not recoverable: conversion formula from the Euler angles (l, m, n) to the unit quaternion, formula 15]
fourth, the unit quaternion array is converted to euler angles:
To obtain the Euler angles from a quaternion, the quaternion is first expressed as a rotation matrix and the rotation matrix is then converted into the Euler angles; l takes values from -90° to +90°, and m and n take values from -180° to +180°. The conversion formula from the rotation matrix to the Euler angles is:
[equation image not recoverable: rotation-matrix-to-Euler-angle conversion, formula 16]
3. Resolving registration parameters
Let the unit quaternion be p = p_0 + p_1·i + p_2·j + p_3·c; the rotation matrix expressed by the unit quaternion is as follows:
B = | p_0^2+p_1^2-p_2^2-p_3^2   2(p_1p_2-p_0p_3)          2(p_1p_3+p_0p_2)        |
    | 2(p_1p_2+p_0p_3)          p_0^2-p_1^2+p_2^2-p_3^2   2(p_2p_3-p_0p_1)        |
    | 2(p_1p_3-p_0p_2)          2(p_2p_3+p_0p_1)          p_0^2-p_1^2-p_2^2+p_3^2 |    formula 17
recording:
K = [p_0^2  p_0p_1  p_0p_2  p_0p_3  p_1^2  p_1p_2  p_1p_3  p_2^2  p_2p_3  p_3^2  g_x  g_y  g_z]^T
for s sets of line pairs, 2s constraint equations can be composed:
N_(2s x 13) · K_(13 x 1) = 0    formula 18
In the above formula the coefficients d_ij, e_ij composing the rows of N are formed from q_wi, u_wi and M_i; N is a coefficient matrix of 2s rows and 13 columns, K is a column vector consisting of 13 unknowns, |p| = 1, and the first 10 elements of K are not independent of each other.
Firstly, the elements of the unknown vector K are treated as independent parameters; the equation system has 13 unknowns, and the invention solves a basis for the solution space by the singular value decomposition method, with the formula:
N = V·H·U^T    formula 19
In formula 19, the diagonal elements of H are the singular values of N, i.e. the square roots of the non-zero eigenvalues of N^T·N and N·N^T, forming a diagonal matrix of order 13; V and U are orthogonal matrices of order 2s and 13 respectively, each column of U being an orthogonal unit eigenvector of N^T·N and each column of V an orthogonal unit eigenvector of N·N^T. The eigenvector corresponding to the minimum eigenvalue of N^T·N is the least squares solution of the 13 parameters of K; with the diagonal elements of H arranged from large to small, the last column of U, corresponding to the minimum singular value, gives the 13 parameters of K.
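A minimal sketch of this least-squares step with NumPy (illustrative; note that numpy.linalg.svd returns the right factor already transposed, and its U/V naming is swapped relative to formula 19):

```python
import numpy as np

def solve_homogeneous(N):
    """Least-squares solution of N @ K = 0 subject to ||K|| = 1 (formulas 18-19).

    The right-singular vector belonging to the smallest singular value
    minimizes ||N K|| over all unit vectors K.
    """
    _, _, Vt = np.linalg.svd(N)   # singular values are sorted in descending order
    return Vt[-1]                  # last right-singular vector: the 13 parameters
```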
For equation 18, the general solution is represented by its base solution series as follows:
K = Σ_(i=1..m) w_i·u_i
u_i = [λ_i1 λ_i2 … λ_i13]^T    formula 20
m, u_i and w_i are respectively the number of basis solutions, the i-th basis solution and its corresponding coefficient; if equation 18 has S independent equations, then m = 13 - S. Since the first 10 elements of K_13x1 are the rotation parameters, K_13x1(1:10) is calculated first:
K_13x1(1:10) = Σ_(i=1..m) w_i·u_i(1:10)    formula 21
wherein K_13x1(1:10) denotes the first 10 elements of K_13x1 and u_i(1:10) the first 10 elements of each basis solution; the basis solutions u_i are already known, so only the coefficients w_i remain to be solved. Because the first 10 elements of K are not mutually independent but correlated, constraint equations among the coefficients are established; after the coefficients are solved by singular value decomposition, the first 10 elements of K, i.e. the quaternion representation of the rotation matrix B, are obtained. The translation vector G is then solved by substituting back into the equations, and the exterior orientation elements commonly used in photogrammetry are obtained by conversion.
4. Registration accuracy evaluation method
After the registration of the geodetic laser point cloud and the simulated image is completed and the exterior orientation elements of the simulated image in the laser point cloud coordinate system are obtained, the invention evaluates the registration accuracy qualitatively and quantitatively as follows:
the method comprises the steps of firstly, performing point cloud projection effect, projecting laser point cloud onto an image plane based on camera interior orientation elements and resolved image exterior orientation elements, and qualitatively judging the registration effect of the laser point cloud and an image by observing the registration effect of point cloud outlines, obvious point, line and surface features and corresponding features on the image.
Secondly, projection effect, distance and angle errors of the characteristic lines on a single image: the point cloud characteristic lines are projected onto the image plane based on the interior orientation elements and the resolved exterior orientation elements of the image; the average of the vertical distances from the two end points of each projected point cloud characteristic line segment to the corresponding image characteristic line, and the included angle between the projected segment and the corresponding image characteristic line, are calculated (a sketch of these two measures follows the third item).
And thirdly, stereo measurement based on two registered images: after the two images are both registered with the ground laser point cloud, the three-dimensional coordinates of target points are measured by forward intersection and compared with the homonymous points in the initial laser point cloud, and the absolute measurement accuracy of the registration method of the invention is calculated from the mean square error of the point positions.
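A sketch of the two per-line error measures of the second check, assuming the point cloud line has already been projected into 2D image coordinates (all names illustrative):

```python
import numpy as np

def line_errors(p1, p2, a, b):
    """Distance/angle error between a projected point-cloud segment (p1, p2)
    and the matching image characteristic line through points (a, b)."""
    d = (b - a) / np.linalg.norm(b - a)          # unit direction of image line
    def perp_dist(p):                             # vertical distance point -> line
        v = p - a
        return abs(v[0] * d[1] - v[1] * d[0])    # |2D cross product|
    dist = 0.5 * (perp_dist(p1) + perp_dist(p2))  # mean end-point distance
    u = (p2 - p1) / np.linalg.norm(p2 - p1)       # direction of projected segment
    ang = np.degrees(np.arccos(np.clip(abs(u @ d), 0.0, 1.0)))  # acute angle
    return dist, ang
```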
The characteristic-line-based two-dimensional and three-dimensional fusion registration method has the following characteristics and advantages: no initial values of the exterior orientation elements are needed; representing the rotation matrix by a quaternion avoids many defects of the Euler-angle representation; large-angle rotations can be represented without linearization and iterative operation; the method is robust to noise and achieves very high registration accuracy; and the computation time is linear in the number of observations, so the computational efficiency is high.
Secondly, extracting laser point cloud characteristic lines
The invention extracts characteristic lines based on the intersection of point cloud cutting planes. The contour lines of the main urban ground objects are generally formed by the intersection of adjacent planes, such as k_12, k_23, k_13 shown by solid lines in figure 3. However, owing to the discreteness of the scanned point cloud data and the limitation of the scanning density, the scanning points may not lie exactly on the intersection lines, and a straight line fitted to a facade contour point set obtained directly by a point cloud cutting algorithm deviates toward the interior of the building, as shown by the dotted line in fig. 3, so it cannot truly reflect the outline of the ground object. To extract accurate contour lines and other local characteristic lines, a point cloud cutting algorithm is first adopted to extract the local planes of the object surface, such as planes G_1, G_2, G_3 in figure 3, and the precise contour line, ridge line or local characteristic line is then extracted by intersecting these planes pairwise, as shown by the broken line K_12 in fig. 3. This method can extract high-precision characteristic lines, and the invention adopts it to extract the characteristic lines in the point cloud.
(I) Point cloud preprocessing
When geodetic three-dimensional laser point clouds are collected, various random noises exist in the point cloud data owing to instrument measurement errors and the scanning environment. The invention preprocesses the point cloud data in two respects to improve data quality and to raise the precision and efficiency of point cloud cutting and characteristic line extraction.
1. Point cloud thinning and density homogenization
The invention homogenizes the point cloud density based on a spatial grid method, so that the density in each area becomes nearly uniform; this strengthens the global applicability of the critical-value parameters of the subsequent cutting algorithm, realizes point cloud thinning and improves computational efficiency. The spatial grid method constructs a regular grid structure in three-dimensional space over the point cloud data; for each cubic grid cell, either the point closest to the cell center is retained, or the barycenter of the point set in the cell approximates all points in the cell, thereby achieving sparse extraction and density consistency. The key parameters are the cell sizes in the X, Y, Z directions: if the sizes are set large, fewer points are output, which shortens the subsequent point cloud cutting time but may adversely affect the feature extraction effect and accuracy.
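A minimal sketch of this spatial-grid thinning using the cell-barycenter variant (function and parameter names are illustrative, not from the patent):

```python
import numpy as np

def voxel_thin(points, sx, sy, sz):
    """Thin a point cloud by a spatial grid: one barycenter per occupied cell.

    points     : (n, 3) array of XYZ coordinates
    sx, sy, sz : cell sizes along X, Y, Z (larger -> fewer output points)
    """
    size = np.array([sx, sy, sz], dtype=float)
    keys = np.floor(points / size).astype(np.int64)   # cell index per point
    keys -= keys.min(axis=0)                          # make indices non-negative
    flat = np.ravel_multi_index(keys.T, keys.max(axis=0) + 1)  # one id per cell
    _, inverse = np.unique(flat, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    out = np.zeros((len(counts), 3))
    for dim in range(3):                              # barycenter per cell
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out
```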
2. Point cloud denoising
The invention mainly uses two point cloud denoising methods: a Gaussian discrimination denoising method and a sphere filtering method. The Gaussian discrimination method builds a Gaussian distribution model of the point cloud data and deletes as noise those points whose error exceeds a multiple of the standard deviation; its main parameters are the neighborhood search number n and the standard deviation multiple N: when n is fixed, the larger N is, the fewer points are rejected; when N is fixed, the larger n is, the more points are removed. The sphere filtering method draws a sphere of a set radius around the current point, counts the laser points inside the sphere, and deletes the point as an isolated noise point if the count is smaller than a set critical value. Its main parameters are the sphere radius r and the critical number M of laser points in the sphere, which is set according to the spatial density of the point cloud: when r is fixed, the larger M is, the more points are eliminated; when M is fixed, the larger r is, the fewer points are eliminated.
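A sketch of the sphere filtering step under these definitions (a brute-force version kept simple for clarity; the helper name is illustrative):

```python
import numpy as np

def sphere_filter(points, r, M):
    """Remove isolated noise: keep a point only if at least M other laser
    points fall inside the sphere of radius r centred on it."""
    keep = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        d2 = np.sum((points - p) ** 2, axis=1)
        keep[i] = (np.count_nonzero(d2 <= r * r) - 1) >= M   # minus the point itself
    return points[keep]
```

In practice a spatial index (e.g. a KD-tree, which the invention uses to manage the point cloud) would replace the brute-force distance computation.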
(II) Cutting the point cloud plane
The dense point clouds of a building surface collected by a ground laser scanner reflect many details, but they also contain many points belonging to other ground objects, such as pedestrians, vehicles, roads and trees, which adversely affect the extraction of building characteristic lines. The building point cloud data therefore need to be separated from the point clouds of other ground objects, so as to improve the efficiency and precision of building surface feature extraction. Data segmentation is one of the key techniques of data feature extraction.
The invention adopts a point cloud plane fitting method combining region growing with a random sample consensus model: the normal vector and curvature of a seed point and the Euclidean distance from a neighborhood point to the normal plane serve as growth constraints; an initial sample point set conforming to the plane model is extracted by continuously expanding the seed plane, and high-precision plane parameters are then estimated by robust parameter estimation based on random-sample-consensus plane fitting. The specific flow is shown in figure 4.
1. Rough cutting of point clouds based on region growing
One or several simple patches are selected as the seed region; according to the local differential geometric attributes of the point cloud, the similarity between neighborhood points and the seed region in these attributes is judged, and neighborhood points with sufficient similarity are continuously added to the region, so that the seed region keeps growing and expanding. During this process the parameters of the fitted plane or surface are continuously updated until no point in the neighborhood satisfies the normal vector angle or distance critical value, as shown in fig. 5. The cutting effect and precision mainly depend on the position selection, similarity measure and growth order of the seed point or surface, specifically (a sketch of the growth loop follows this list):
firstly, a local plane fitted to a seed point and the point set within a certain range of its neighborhood is used as the seed surface; several adjacent points of the candidate seed surface are tested to determine whether they can form a plane; if every parameter of a fitted plane meets the preset critical values, it is set as the seed surface, otherwise other points are used to continue searching for a seed surface;
secondly, the similarity measure requires that the angle between the normal vector of a candidate neighborhood point and the normal vector of the seed surface is smaller than a set critical value, and that the distance from the candidate point to the cut point or cut plane is smaller than a set critical value; the height difference and local gradient information of airborne laser point clouds can assist the cutting. Specifically: points within the neighborhood radius H of a boundary point of the growth surface are taken as candidate growth points; if the vertical distance from a candidate growth point to the initial seed surface S_0 is less than H, the point is brought into the seed point set of the growth surface; if the angle between the normal vector of the candidate growth point and the normal vector of the growth surface meets the angle critical value, the point is absorbed as a growth point;
thirdly, for point cloud data with complex distribution or irregular shape, the growth order adopts a gradual, local-to-global cutting mode: after the seed points or surfaces are determined, expanding cutting from local to global is realized by searching the neighborhood for points that satisfy the geometric critical value conditions.
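A compact sketch of this region-growing loop (normals are assumed precomputed, e.g. by covariance analysis of local neighborhoods; SciPy's KD-tree is used for neighbor search, and the continuous update of the fitted plane parameters is omitted for brevity):

```python
import numpy as np
from scipy.spatial import cKDTree

def region_grow(points, normals, seed, H, ang_thr):
    """Grow one planar region from a seed index.

    points, normals : (n, 3) arrays; normals are unit vectors
    H               : neighborhood radius / point-to-plane distance limit
    ang_thr         : max angle (radians) between point and region normals
    """
    tree = cKDTree(points)
    region, queue = {seed}, [seed]
    n_region = normals[seed]                       # region normal (kept fixed here)
    while queue:
        i = queue.pop()
        for j in tree.query_ball_point(points[i], H):
            if j in region:
                continue
            dist = abs((points[j] - points[seed]) @ n_region)   # to seed plane
            ang = np.arccos(np.clip(abs(normals[j] @ n_region), 0.0, 1.0))
            if dist < H and ang < ang_thr:
                region.add(j)
                queue.append(j)
    return sorted(region)
```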
2. Point cloud fine cutting of plane fitting
Under ideal conditions, point cloud data with a regular form could be fitted directly to obtain the corresponding model parameters; but owing to the limitations of the denoising and rough-cutting algorithm parameters, the point cloud clusters obtained by region-growing rough cutting still contain wrongly cut points, so the optimal model parameters must be estimated from point cloud data containing erroneous points to realize fine cutting.
The method estimates the parameters of the plane model by the random sample consensus algorithm and finely cuts the point cloud based on random-sample-consensus plane fitting; extracting the high-precision plane parameters comprises two parts, plane model fitting and point cloud plane parameter estimation based on the random sample consensus algorithm. The algorithm samples the data randomly many times, each time drawing a minimal sample to calculate initial model parameters; according to these parameters, all data are divided into inner points within a certain error range of the model and outer points beyond it, with most data falling among the inner points. After many random samplings, the largest inner point set within the allowed error range is determined and the optimal model parameters are obtained from it.
The method for estimating the model parameters of the plane by the random sampling consistency algorithm comprises the following three steps:
step one, randomly selecting s (s >3) points from sample data with M points, and estimating model parameters of an initial fitting plane;
secondly, according to the geometric distance critical value parameters, counting the inner point set S_i of data points in the sample that conform to the plane model, and computing the inner point ratio m = |S_i| / M; the geometric distance critical values comprise a normal angle critical value and a distance critical value, which together constitute the coplanarity condition;
thirdly, calculating the probability that the model parameters are correct after c samplings as q = 1 - (1 - m^s)^c, and judging whether q is larger than the preset probability critical value Q for model parameter correctness; if so, calculating the final model parameters based on the current S_i; if not, returning the unmarked points and repeating the cycle until the optimal fitting parameters are obtained.
Regarding the critical values: m_0 is the prior ratio of inner points in the data set; its accurate value is unknown and is estimated from the data quality. In one sampling of s randomly selected points, the probability that the resolved model parameters are wrong is
1 - m_0^s
Assuming that at least c samplings are needed to ensure, with probability Q, that at least one sample of s points containing no outer points is drawn from the three-dimensional point set, the invention takes the critical value Q = 0.99, and then
1 - (1 - m_0^s)^c ≥ Q
from which the corresponding critical value C of the sampling count c is solved.
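A sketch of this consensus plane estimation under the parameters above (s = 3 minimal samples; the default m0, Q values and the function name are assumptions for illustration):

```python
import numpy as np

def ransac_plane(points, dist_thr, Q=0.99, m0=0.5, rng=np.random.default_rng(0)):
    """Fit a plane (unit normal n, offset h with n.x + h = 0) by random sample consensus."""
    C = int(np.ceil(np.log(1 - Q) / np.log(1 - m0 ** 3)))   # required sampling count
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(C):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:          # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        inliers = np.abs((points - p0) @ n) < dist_thr
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    if best_inliers.sum() < 3:
        raise ValueError("no consensus plane found")
    # re-estimate the plane from all inner points (least squares via SVD)
    P = points[best_inliers]
    centroid = P.mean(axis=0)
    n = np.linalg.svd(P - centroid)[2][-1]     # normal = smallest singular vector
    return n, -n @ centroid, best_inliers
```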
(III) Extracting characteristic lines by intersecting point cloud planes
After the point cloud plane cutting and parameter estimation are realized, the straight line segment features in the point cloud are extracted by adopting the process shown in fig. 6.
1. Estimating adjacent straight line parameters
If two planes C_1, C_2 in three-dimensional space are not parallel, they must intersect and their intersection is a straight line. Let the equations of the two planes be:
D_1·X + E_1·Y + F_1·Z + H_1 = 0
D_2·X + E_2·Y + F_2·Z + H_2 = 0    formula 22
wherein n_1 = [D_1 E_1 F_1]^T and n_2 = [D_2 E_2 F_2]^T are the normal vectors of the two planes respectively; the direction vector of the intersection line of the two planes is
[d e g]^T = n_1 × n_2    formula 23
Let a point q_0 = [X_0 Y_0 Z_0]^T on the intersection of the two planes satisfy both plane equations; combining the two equations (for example with Z_0 = 0) and solving gives X_0, Y_0, Z_0. Substituting equation 23, the equation of the intersecting straight line is obtained:
(X - X_0)/d = (Y - Y_0)/e = (Z - Z_0)/g    formula 24
wherein d, e and g are the components of the direction vector.
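A sketch of this intersection computation, returning a point on the line and its unit direction (names illustrative; the point chosen is the one on the line closest to the origin rather than the Z = 0 point, which is equivalent up to the line parameterization):

```python
import numpy as np

def plane_intersection(n1, h1, n2, h2):
    """Intersect planes n1.x + h1 = 0 and n2.x + h2 = 0.

    Returns (q0, u): a point on the intersection line and its unit direction.
    Assumes the planes are not parallel.
    """
    u = np.cross(n1, n2)
    norm = np.linalg.norm(u)
    if norm < 1e-12:
        raise ValueError("planes are parallel")
    u = u / norm
    # particular point: solve the 3x3 system [n1; n2; u] q0 = [-h1, -h2, 0]
    A = np.stack([n1, n2, u])
    q0 = np.linalg.solve(A, np.array([-h1, -h2, 0.0]))
    return q0, u
```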
2. Determining plane adjacency relation and solving adjacent straight line segment
The adjacency relation of two point cloud planes is judged mainly by three critical value parameters:
the first is the minimum distance between the point clouds of the two planes;
the second is the minimum distance from a point in the plane to be analyzed to the reference plane and from a point in the reference plane to the plane to be analyzed;
the third is the included angle between the normal vectors of the two planes.
The adjacent line obtained by directly intersecting the equations of two non-parallel adjacent planes is expressed by the coordinates of a point on the adjacent straight line and the normalized direction vector of the line. To accurately extract the straight line segment features where roof surfaces intersect, the invention also needs the end point coordinates of the line feature: all points of the two adjacent surfaces are projected onto the adjacent straight line, yielding two groups of collinear point sets, and the intersection of the two point sets, i.e. the point set on the adjacent line of the two planes, gives the coordinates of the two end points of the adjacent line segment.
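A sketch of this end-point determination by projection onto the intersection line (continuing the assumptions of the previous sketch):

```python
import numpy as np

def segment_endpoints(q0, u, pts1, pts2):
    """End points of the adjacency segment of two planar point clouds.

    q0, u      : point on, and unit direction of, the intersection line
    pts1, pts2 : (n, 3) point sets of the two adjacent planes
    Projects both sets onto the line and intersects their parameter ranges.
    """
    t1 = (pts1 - q0) @ u                 # line parameters of the projections
    t2 = (pts2 - q0) @ u
    lo = max(t1.min(), t2.min())         # overlap of the two parameter ranges
    hi = min(t1.max(), t2.max())
    if lo > hi:
        return None                      # the projections do not overlap
    return q0 + lo * u, q0 + hi * u
```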
Thirdly, extracting the characteristic line of the simulation image
The simulated image of the invention is a digital image acquired by an ordinary digital camera; a large number of characteristic lines exist on artificially constructed structures and main road ground objects.
(I) Straight line segment detection algorithm
The method performs edge extraction using the gradient direction rather than the gradient magnitude alone: the initial image is first denoised with a Gaussian model, then the gradient magnitude and gradient direction of each pixel are calculated and all pixels are sorted by gradient magnitude from large to small. Starting from the pixels with larger gradient magnitude, pixels with similar gradient direction are grouped by iterative calculation into regions of uniform gradient direction; each region of similar gradient direction is approximated by a rectangular structure, and finally the straight line segment feature corresponding to the region is extracted from the center line of the rectangle.
The main principle and steps of the straight-line segment detection algorithm are as follows:
Step one, estimating the gradient magnitude and direction of each pixel. To reduce the influence of noise on the result, the digital image is denoised with a Gaussian model before the gradient is calculated; the gradient of a digital image is obtained by taking derivatives of the two-dimensional discrete function, i.e. differencing in the x and y directions. To keep the calculated values as close to the true values as possible under random noise and to reduce the adverse influence of neighborhood pixels on the gradient magnitude and direction, the invention calculates the gradient magnitude and direction by differencing the 4 pixels in the 2×2 neighborhood around a pixel; the first-order differential of a discrete digital image corresponds mathematically to the difference of two adjacent pixels (a sketch of this 2×2 gradient estimate follows step three);
Step two, searching for regions with similar gradient direction. The pixel gradient of a digital image is a geometric attribute describing the magnitude and direction of the change of pixel gray values: the gradient magnitude reflects the intensity of local gray value change; specifically, at edges the pixel gray value changes abruptly and the gradient is large, while away from edges the gray values are basically consistent and the gradient is small.
First, the gradient information of each pixel is stored in a matrix structure and all pixels are sorted by gradient magnitude from large to small. A pixel with large gradient magnitude is taken as a seed point; by iterative calculation its 8-neighborhood is searched for pixels whose gradient direction is similar, and such pixels are assigned to the same region as the seed and marked as traversed. After all pixels in the 8-neighborhood of the seed have been traversed and the direction-similar pixels added to the current region, the mean gradient direction of the pixels in the region is recalculated; as the search region expands, pixels are continuously added to the region and the direction mean is continuously updated.
Step three, extracting straight line segments based on rectangular structure approximation
After the search for regions of similar gradient direction is completed, each region is approximated by a rectangular structure whose position and direction must be determined. Specifically, for each region class the coordinates of the rectangle center are estimated as the weighted average over the gray value information of the pixels in the region, and the direction of the rectangle is the mean direction of the pixels in the region; the size of the rectangle is then adjusted so that it covers 99% of the pixels belonging to the region, its length and width are calculated, and the rectangular approximation is completed. The center line of the rectangle is then the straight line segment feature of the region.
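A sketch of the 2×2-neighborhood gradient estimate used in step one (the image is assumed already Gaussian-smoothed; array names illustrative):

```python
import numpy as np

def gradient_2x2(img):
    """Gradient magnitude and direction from 2x2 pixel differences.

    img : (h, w) float array (already Gaussian-smoothed)
    Returns magnitude and direction arrays of shape (h-1, w-1).
    """
    a = img[:-1, :-1]   # top-left of each 2x2 block
    b = img[:-1, 1:]    # top-right
    c = img[1:, :-1]    # bottom-left
    d = img[1:, 1:]     # bottom-right
    gx = 0.5 * ((b + d) - (a + c))   # difference in the x direction
    gy = 0.5 * ((c + d) - (a + b))   # difference in the y direction
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```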
(II) Estimating straight line parameters
Let the functional relationship between x and y be y = d_0 + d_1·x, where d_0 represents the intercept and d_1 the slope. The relation has two undetermined parameters, so at least 2 points are needed to solve it; the method estimates the parameters of the straight line by the random sample consensus algorithm.
The random sample consensus method selects random subsets of the input data, takes the selected subset as sample points, and estimates the parameters as follows (a sketch follows the listed steps):
step 1, calculating the unknown parameters d_0, d_1 from the selected sample points to obtain initial straight line parameters;
step 2, testing all data according to the initial model obtained in the step 1, and adding a point into the internal point set if the point is suitable for the estimation model;
step 3, if the number of points in the inner point set is enough, namely the number of points is larger than a critical value, the estimated initial model is considered to be reasonable, and the linear parameters are updated based on the updated inner point set;
step 4, solving the optimal parameters with the fitting residual as the evaluation criterion; the procedure is repeated until a fixed number of iterations is reached, and each model obtained is either discarded because too few points fall in its sample point set, or accepted if it fits better than the previous model.
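A sketch of this line-parameter estimation (2-point samples for the model y = d_0 + d_1·x; threshold and names are illustrative):

```python
import numpy as np

def ransac_line(x, y, thr, n_iter=100, rng=np.random.default_rng(0)):
    """Estimate intercept d0 and slope d1 of y = d0 + d1*x by random sample consensus."""
    best, best_count = None, 0
    for _ in range(n_iter):
        i, j = rng.choice(len(x), 2, replace=False)
        if x[i] == x[j]:                         # vertical sample: skip
            continue
        d1 = (y[j] - y[i]) / (x[j] - x[i])
        d0 = y[i] - d1 * x[i]
        inliers = np.abs(y - (d0 + d1 * x)) < thr
        if inliers.sum() > best_count:
            best_count = inliers.sum()
            d1, d0 = np.polyfit(x[inliers], y[inliers], 1)   # refit on inner points
            best = (d0, d1)
    return best
```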
(III) Merging treatment of short straight line segments
The method merges segments by integrating segment attributes such as the direction and relative length of the segments to be merged: the direction and position of the merged segment are calculated with the direction and relative length of the segments as weight parameters, which gives a better merging result. The method comprises the following two steps (a sketch follows step 2):
Step 1, calculating the position and direction of the straight line on which the merged straight line segment lies. Let point Q be a point on the merged straight line segment; first the influence of the lengths of the two segments to be merged on the position of Q is considered, and its coordinates are defined as:
Q_x = [k_i·(a_x + b_x)/2 + k_j·(c_x + d_x)/2] / (k_i + k_j)
Q_y = [k_i·(a_y + b_y)/2 + k_j·(c_y + d_y)/2] / (k_i + k_j)    formula 25
wherein a, b, c and d are the two end points of the straight line segments i and j respectively, k_i and k_j are the lengths of the two segments, and (a_x, a_y) are the coordinates of point a, with the coordinates of the other 3 points written likewise. Then the influence of the directions of the two segments to be merged on the direction of the merged line is considered, and the direction of the straight line on which the merged segment lies is defined as:
t = (k_i·t_i + k_j·t_j) / (k_i + k_j)    formula 26
wherein t represents the direction of the merged straight line, and k_i, t_i respectively represent the length and direction of segment i.
Step 2, determining the two end points of the merged straight line segment. After the position and direction of the straight line on which the merged segment lies are determined, the two end points of the segment are determined as shown in fig. 7: a, b, c and d are projected onto the straight line k, and the segment spanned by the extreme projection points (e and f) is taken as the merging result.
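A sketch of this two-step merge under the weighted definitions of formulas 25-26 (endpoints as 2D arrays; names illustrative; handling of direction wrap-around when averaging angles is omitted for brevity):

```python
import numpy as np

def merge_segments(a, b, c, d):
    """Merge segments (a, b) and (c, d) into one segment (formulas 25-26).

    Position and direction are weighted by the segment lengths; the merged
    end points are the extreme projections of a, b, c, d onto the new line.
    """
    ki, kj = np.linalg.norm(b - a), np.linalg.norm(d - c)
    Q = (ki * (a + b) / 2 + kj * (c + d) / 2) / (ki + kj)   # weighted position
    ti = np.arctan2(*(b - a)[::-1])                          # direction of segment i
    tj = np.arctan2(*(d - c)[::-1])                          # direction of segment j
    t = (ki * ti + kj * tj) / (ki + kj)                      # weighted direction
    u = np.array([np.cos(t), np.sin(t)])                     # unit direction
    proj = np.array([(p - Q) @ u for p in (a, b, c, d)])
    return Q + proj.min() * u, Q + proj.max() * u            # end points e, f
```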
(IV) Screening the characteristic lines
To improve the resolution precision of the registration parameters, the image characteristic lines must have suitable length, be distributed evenly over the image, and be similar in number to the three-dimensional characteristic lines participating in registration. Therefore, after the straight line features are extracted from the simulated image, they are screened. The specific screening method is: divide the image evenly into several areas; count the number and length of straight line segments in each area; set different number and length critical values for each area in consideration of the differences in image texture and material between areas, with relatively loose critical values for areas with few characteristic lines and strict critical values for areas with many; and eliminate the over-short or over-long characteristic lines in each area according to the set critical value conditions.

Claims (10)

1. The feature line fused laser point cloud and analog image registration method is characterized in that a direct transformation model between image characteristic lines and point cloud characteristic lines is established; the characteristic-line-based two-dimensional and three-dimensional fusion registration method comprises a camera imaging model, camera calibration, and position and attitude estimation methods for multiple groups of fused characteristic lines, the latter comprising a characteristic line fusion registration function model, a quaternion expression method of the rotation matrix, registration parameter resolution and a registration accuracy evaluation method;
the laser point cloud characteristic lines are extracted by a characteristic line extraction method based on the intersection of point cloud cutting planes: the point cloud is managed with a KD-Tree, normal vectors and curvatures are estimated by covariance analysis, point cloud plane cutting is realized by a region-growing rough cutting method and a random-sample-consensus plane-fitting fine cutting method, and finally high-precision straight line features are extracted by intersecting adjacent point cloud planes; extracting the characteristic lines of the laser point cloud comprises point cloud preprocessing, cutting of point cloud planes and intersection of point cloud planes, wherein the preprocessing comprises point cloud thinning with density homogenization and point cloud denoising, the cutting comprises region-growing rough cutting and plane-fitting fine cutting, and the intersection comprises estimating adjacent straight line parameters, judging the plane adjacency relation and solving the adjacent straight line segments;
the characteristic lines of the simulated image are extracted by a characteristic line extraction algorithm based on the image gradient direction: regions of similar pixels are searched based on gradient magnitude and gradient direction, the support region of a straight line segment is determined by a rectangular structure approximation method, the parameters of the straight line segment are extracted from the center, length and width of the rectangular structure, and a merging judgment condition for short straight line segments based on the line angle and distance is provided together with a merging method that comprehensively considers the direction and relative length of the segments; extracting the characteristic lines of the simulated image comprises a straight line segment detection algorithm, straight line parameter estimation, merging of short straight line segments and characteristic line screening.
2. The method of claim 1, wherein the position and attitude estimation method for multiple groups of fused characteristic lines extracts characteristic lines from the simulated image and the ground three-dimensional laser point cloud respectively, resolves the position and attitude parameters of the simulated image in the scanner coordinate system, and realizes the registration of the two; in the characteristic line fusion registration function model, the imaging model of an ideal optical camera is:
q_pi = C_3x3 · q_ci
q_ci = B_3x3 · q_wi + G_3x1
G_3x1 = [g_x g_y g_z]^T    formula 2
The above formula is the functional model of two-dimensional and three-dimensional fusion registration based on point features, wherein q_pi is the coordinate of the image point in the image coordinate system, C_3x3 is the intrinsic parameter matrix of the camera, corresponding to the interior orientation elements in photogrammetry and obtained together with the optical distortion parameters by camera calibration; B_3x3 and G_3x1 are respectively the rotation matrix and translation vector from the object space coordinate system to the camera coordinate system, corresponding to the exterior orientation elements in photogrammetry, and are the parameters to be solved for two-dimensional and three-dimensional fusion registration;
the invention analyzes the registration parameter B by utilizing the geodetic three-dimensional laser point cloud and the characteristic line on the image3x3、G3x1Establishing a two-dimensional and three-dimensional fusion function model based on the characteristic line; the method for describing the straight line comprises the following steps:
firstly, the parameters of any straight line in the three-dimensional rectangular coordinate system are K_i = {q_i, u_i}, wherein q_i is the three-dimensional coordinate of any point on the straight line and u_i is the unit direction vector of the line; the coordinates of any point on K_i are q'_i = q_i + a·u_i, wherein a is any real number;
secondly, the parameters of a straight line in the object space coordinate system are K_wi = {q_wi, u_wi}, with q_wi = [X_i Y_i Z_i]^T and u_wi = [D_i E_i F_i]^T, i.e. the equation of the line is D_i·X + E_i·Y + F_i·Z + H_i = 0;
thirdly, the parameters of a straight line in the camera coordinate system are K_ci = {q_ci, u_ci}, with q_ci = [x_i y_i f]^T, where f is the camera principal distance, i.e. the equation of the line is d_i·x + e_i·y + j_i·z + h_i = 0;
for any group of homonymous two-dimensional and three-dimensional fused straight lines K_ci, K_wi: the object space straight line K_wi is transformed by B_3x3, G_3x1 into the camera coordinate system, giving the straight line K_ci_ = {q_ci_, u_ci_}; since K_wi and K_ci are straight lines of the same name, according to the camera imaging principle K_ci_ is coplanar with the straight line K_ci in the camera coordinate system and the projection center S of the camera; this plane is recorded as N_i and its normal vector as M_i; the coplanarity condition is the basic condition of characteristic-line-based two-dimensional and three-dimensional fusion registration and comprises two constraints: first, the direction vector u_wi of K_wi, transformed by the rotation matrix B_3x3 into the direction vector u_ci_ in the camera coordinate system, is perpendicular to M_i; second, any point p_wi on K_wi, transformed by B_3x3, G_3x1 into the point p_ci_ in the camera coordinate system, has distance 0 to the plane N_i, i.e. satisfies the equation of plane N_i; the two constraints are expressed by the equations:
M_i^T · u_ci_ = 0
M_i^T · p_ci_ + M_i4 = 0    formula 3
and because:
u_ci_ = B_3x3 · u_wi
p_ci_ = B_3x3 · p_wi + G_3x1    formula 4
then the following results are obtained:
M_i^T · (B_3x3 · u_wi) = 0
M_i^T · (B_3x3 · p_wi + G_3x1) + M_i4 = 0    formula 5
the above formula is a function model of a two-dimensional and three-dimensional fusion registration algorithm based on characteristic lines;
the normal vector of plane N_i is recorded as M_i = [M_i1 M_i2 M_i3]^T;
Plane NiThe equation in the camera coordinate system is:
M_i1·x + M_i2·y + M_i3·z + M_i4 = 0    formula 6
the normal vector M_i of plane N_i is obtained from the camera principal distance f and K_ci = {q_ci, u_ci}; since
q_ci = [x_i y_i f]^T, u_ci = [d_i e_i j_i]^T    formula 7
the normal vector is their cross product:
M_i = q_ci × u_ci = [y_i·j_i - e_i·f, d_i·f - x_i·j_i, x_i·e_i - d_i·y_i]^T    formula 8
since the projection center S = [0 0 0]^T of the camera lies in plane N_i, substituting it into formula 6 gives M_i4 = 0, so the equation of plane N_i is:
(y_i·j_i - e_i·f)·x + (d_i·f - x_i·j_i)·y + (x_i·e_i - d_i·y_i)·z = 0    formula 9
Combining formula 4 with formula 8 yields:
M_i^T · (B_3x3 · u_wi) = 0
M_i^T · (B_3x3 · q_wi + G_3x1) = 0    formula 10
namely, written out with the elements of B and G:
M_i1·(B_11·D_i + B_12·E_i + B_13·F_i) + M_i2·(B_21·D_i + B_22·E_i + B_23·F_i) + M_i3·(B_31·D_i + B_32·E_i + B_33·F_i) = 0
M_i1·(B_11·X_i + B_12·Y_i + B_13·Z_i + g_x) + M_i2·(B_21·X_i + B_22·Y_i + B_23·Z_i + g_y) + M_i3·(B_31·X_i + B_32·Y_i + B_33·Z_i + g_z) = 0    formula 11
where K is the registration parameter vector to be found and B_11…B_33 are the 9 elements of the rotation matrix B; since B is a unit orthogonal matrix, B_11…B_33 are not independent of each other.
3. The method of claim 1, wherein photogrammetry conventionally uses three independent angle parameters to represent the rotation transformation; the invention represents the rotation matrix by a quaternion p = p_0 + p_1·i + p_2·j + p_3·c, wherein p_0, p_1, p_2, p_3 are real numbers and i, j, c are generalized imaginary units satisfying:
i^2 = j^2 = c^2 = ijc = -1
ij = -ji = c, jc = -cj = i, ci = -ic = j    formula 12
then p is called a quaternion; if |p| = 1 it is a unit quaternion, which represents the large-tilt-angle rotation transformation in three-dimensional space;
first, the unit quaternion array is converted to a rotation matrix:
let the unit quaternion be p = p_0 + p_1·i + p_2·j + p_3·c; the rotation matrix is obtained as:
B = | p_0^2+p_1^2-p_2^2-p_3^2   2(p_1p_2-p_0p_3)          2(p_1p_3+p_0p_2)        |
    | 2(p_1p_2+p_0p_3)          p_0^2-p_1^2+p_2^2-p_3^2   2(p_2p_3-p_0p_1)        |
    | 2(p_1p_3-p_0p_2)          2(p_2p_3+p_0p_1)          p_0^2-p_1^2-p_2^2+p_3^2 |    formula 13
as long as the quaternion array is a unit quaternion array, the unit orthogonality of the rotation matrix can be ensured without supplementing an additional constraint equation;
second, the rotation matrix is converted to a unit quaternion array:
p_0 = (1/2)·sqrt(1 + B_11 + B_22 + B_33)
p_1 = (B_32 - B_23)/(4p_0), p_2 = (B_13 - B_31)/(4p_0), p_3 = (B_21 - B_12)/(4p_0)    formula 14
quaternion groups p and-p represent the same rotation matrix;
thirdly, the euler angle is converted into a unit quaternion array:
[equation image not recoverable: conversion formula from the Euler angles (l, m, n) to the unit quaternion, formula 15]
fourth, the unit quaternion array is converted to euler angles:
to obtain the Euler angles from a quaternion, the quaternion is first expressed as a rotation matrix and the rotation matrix is then converted into the Euler angles; l takes values from -90° to +90°, and m and n take values from -180° to +180°; the conversion formula from the rotation matrix to the Euler angles is:
[equation image not recoverable: rotation-matrix-to-Euler-angle conversion, formula 16]
4. The method of claim 1, wherein the registration parameters are resolved by setting the unit quaternion p = p_0 + p_1·i + p_2·j + p_3·c and expressing the rotation matrix by the unit quaternion as follows:
B = | p_0^2+p_1^2-p_2^2-p_3^2   2(p_1p_2-p_0p_3)          2(p_1p_3+p_0p_2)        |
    | 2(p_1p_2+p_0p_3)          p_0^2-p_1^2+p_2^2-p_3^2   2(p_2p_3-p_0p_1)        |
    | 2(p_1p_3-p_0p_2)          2(p_2p_3+p_0p_1)          p_0^2-p_1^2-p_2^2+p_3^2 |    formula 17
recording:
K = [p_0^2  p_0p_1  p_0p_2  p_0p_3  p_1^2  p_1p_2  p_1p_3  p_2^2  p_2p_3  p_3^2  g_x  g_y  g_z]^T
for s sets of line pairs, 2s constraint equations can be composed:
N_(2s x 13) · K_(13 x 1) = 0    formula 18
in the above formula the coefficients d_ij, e_ij composing the rows of N are formed from q_wi, u_wi and M_i; N is a coefficient matrix of 2s rows and 13 columns, K is a column vector consisting of 13 unknowns, |p| = 1, and the first 10 elements of K are not independent of each other;
firstly, the elements of the unknown vector K are treated as independent parameters; the equation system has 13 unknowns, and the invention solves a basis for the solution space by the singular value decomposition method, with the formula:
N = V·H·U^T    formula 19
in formula 19, the diagonal elements of H are the singular values of N, i.e. the square roots of the non-zero eigenvalues of N^T·N and N·N^T, forming a diagonal matrix of order 13; V and U are orthogonal matrices of order 2s and 13 respectively, each column of U being an orthogonal unit eigenvector of N^T·N and each column of V an orthogonal unit eigenvector of N·N^T; the eigenvector corresponding to the minimum eigenvalue of N^T·N is the least squares solution of the 13 parameters of K; with the diagonal elements of H arranged from large to small, the last column of U, corresponding to the minimum singular value, gives the 13 parameters of K;
for equation 18, the general solution is represented by its base solution series as follows:
K = Σ_(i=1..m) w_i·u_i
u_i = [λ_i1 λ_i2 … λ_i13]^T    formula 20
m, u_i and w_i are respectively the number of basis solutions, the i-th basis solution and its corresponding coefficient; if equation 18 has S independent equations, then m = 13 - S; since the first 10 elements of K_13x1 are the rotation parameters, K_13x1(1:10) is calculated first:
K_13x1(1:10) = Σ_(i=1..m) w_i·u_i(1:10)    formula 21
wherein K_13x1(1:10) denotes the first 10 elements of K_13x1 and u_i(1:10) the first 10 elements of each basis solution; the basis solutions u_i are already known, so only the coefficients w_i remain to be solved; because the first 10 elements of K are not mutually independent but correlated, constraint equations among the coefficients are established; after the coefficients are solved by singular value decomposition, the first 10 elements of K, i.e. the quaternion representation of the rotation matrix B, are obtained; the translation vector G is then solved by substituting back into the equations, and the exterior orientation elements commonly used in photogrammetry are obtained by conversion.
5. The registration method of the laser point cloud and the simulated image fused with the characteristic line according to claim 1, characterized in that the qualitative and quantitative registration accuracy evaluation adopted by the invention is as follows:
firstly, point cloud projection effect: the laser point cloud is projected onto the image plane based on the camera interior orientation elements and the resolved image exterior orientation elements, and the registration effect of the laser point cloud and the image is judged qualitatively by observing how the point cloud outline and the obvious point, line and surface features align with the corresponding features on the image;
secondly, projection effect, distance and angle errors of the characteristic lines on a single image: the point cloud characteristic lines are projected onto the image plane based on the interior orientation elements and the resolved exterior orientation elements of the image; the average of the vertical distances from the two end points of each projected point cloud characteristic line segment to the corresponding image characteristic line, and the included angle between the projected segment and the corresponding image characteristic line, are calculated;
and thirdly, stereo measurement based on two registered images: after the two images are both registered with the ground laser point cloud, the three-dimensional coordinates of target points are measured by forward intersection and compared with the homonymous points in the initial laser point cloud, and the absolute measurement accuracy of the registration method of the invention is calculated from the mean square error of the point positions.
6. The registration method of the laser point cloud and the simulated image fused with the characteristic line according to claim 1, characterized in that in the region-growing rough cutting of the point cloud, one or several simple patches are selected as the seed region; according to the local differential geometric attributes of the point cloud, the similarity between neighborhood points and the seed region in these attributes is judged, and neighborhood points with sufficient similarity are continuously added to the region, so that the seed region keeps growing and expanding; during this process the parameters of the fitted plane or surface are continuously updated until no point in the neighborhood satisfies the normal vector angle or distance critical value; the cutting effect and precision mainly depend on the position selection, similarity measure and growth order of the seed point or surface, specifically:
firstly, a local plane fitted to a seed point and the point set within a certain range of its neighborhood is used as the seed surface; several adjacent points of the candidate seed surface are tested to determine whether they can form a plane; if every parameter of a fitted plane meets the preset critical values, it is set as the seed surface, otherwise other points are used to continue searching for a seed surface;
secondly, the similarity measure requires that the angle between the normal vector of a candidate neighborhood point and the normal vector of the seed surface is smaller than a set critical value, and that the distance from the candidate point to the cut point or cut plane is smaller than a set critical value; the height difference and local gradient information of airborne laser point clouds can assist the cutting; specifically: points within the neighborhood radius H of a boundary point of the growth surface are taken as candidate growth points; if the vertical distance from a candidate growth point to the initial seed surface S_0 is less than H, the point is brought into the seed point set of the growth surface; if the angle between the normal vector of the candidate growth point and the normal vector of the growth surface meets the angle critical value, the point is absorbed as a growth point;
thirdly, for point cloud data with complex distribution or irregular shape, the growth order adopts a gradual, local-to-global cutting mode: after the seed points or surfaces are determined, expanding cutting from local to global is realized by searching the neighborhood for points that satisfy the geometric critical value conditions.
7. The registration method of the laser point cloud and the simulated image fused with the characteristic line according to claim 1, characterized in that the plane-fitting fine cutting of the point cloud estimates the parameters of the plane model by the random sample consensus algorithm and finely cuts the point cloud based on random-sample-consensus plane fitting; extracting the high-precision plane parameters comprises two parts, plane model fitting and point cloud plane parameter estimation based on the random sample consensus algorithm; the algorithm samples the data randomly many times, each time drawing a minimal sample to calculate initial model parameters; according to these parameters, all data are divided into inner points within a certain error range of the model and outer points beyond it, with most data falling among the inner points; after many random samplings, the largest inner point set within the allowed error range is determined and the optimal model parameters are obtained from it;
the method for estimating the model parameters of the plane by the random sampling consistency algorithm comprises the following three steps:
step one, randomly selecting s (s >3) points from sample data with M points, and estimating model parameters of an initial fitting plane;
step two, according to the geometric distance critical value parameters, counting the inner point set S_i of data points in the sample that conform to the plane model, and computing the inner point ratio m = |S_i| / M; the geometric distance critical values comprise a normal angle critical value and a distance critical value, which together constitute the coplanarity condition;
thirdly, calculating the probability that the model parameters are correct after c samplings as q = 1 - (1 - m^s)^c, and judging whether q is larger than the preset probability critical value Q for model parameter correctness; if so, calculating the final model parameters based on the current S_i; if not, returning the unmarked points and repeating the cycle until the optimal fitting parameters are obtained.
8. The method of claim 1, wherein the adjacency relation between two point cloud planes is judged mainly by three critical value parameters:
the first is the minimum distance between the point clouds of the two planes;
the second is the minimum distance from a point in the plane to be analyzed to the reference plane and from a point in the reference plane to the plane to be analyzed;
the third is the included angle between the normal vectors of the two planes;
the adjacent line obtained by directly intersecting the equations of two non-parallel adjacent planes is expressed by the coordinates of a point on the adjacent straight line and the normalized direction vector of the line; to accurately extract the straight line segment features where roof surfaces intersect, the invention also needs the end point coordinates of the line feature: all points of the two adjacent surfaces are projected onto the adjacent straight line, yielding two groups of collinear point sets, and the intersection of the two point sets, i.e. the point set on the adjacent line of the two planes, gives the coordinates of the two end points of the adjacent line segment.
9. The method of claim 1, wherein the straight line parameters are estimated by setting the functional relationship between x and y as y = d_0 + d_1·x, wherein d_0 represents the intercept and d_1 the slope; the relation has two undetermined parameters, so at least 2 points are needed to solve it, and the method estimates the parameters of the straight line by the random sample consensus algorithm, which selects random subsets of the input data, takes the selected subsets as sample points, and estimates the parameters as follows:
step 1, calculating the unknown parameters d_0, d_1 from the selected sample points to obtain initial straight line parameters;
step 2, testing all data according to the initial model obtained in the step 1, and adding a point into the internal point set if the point is suitable for the estimation model;
step 3, if the number of points in the inner point set is enough, namely the number of points is larger than a critical value, the estimated initial model is considered to be reasonable, and the linear parameters are updated based on the updated inner point set;
step 4, solving the optimal parameters with the fitting residual as the evaluation criterion; the procedure is repeated until a fixed number of iterations is reached, and each model obtained is either discarded because too few points fall in its sample point set, or accepted if it fits better than the previous model.
10. The registration method of the laser point cloud and the simulated image fused with the characteristic line according to claim 1, characterized in that the merging of short straight line segments adopts a merging method integrating segment attributes such as the direction and relative length of the segments to be merged; the direction and position of the merged segment are calculated with the direction and relative length of the segments as weight parameters, giving a better merging result; the method comprises the following two steps:
step 1, calculating the position and direction of the straight line on which the merged straight line segment lies; let point Q be a point on the merged straight line segment; first the influence of the lengths of the two segments to be merged on the position of Q is considered, and its coordinates are defined as:
Q_x = [k_i·(a_x + b_x)/2 + k_j·(c_x + d_x)/2] / (k_i + k_j)
Q_y = [k_i·(a_y + b_y)/2 + k_j·(c_y + d_y)/2] / (k_i + k_j)    formula 25
wherein a, b, c and d are the two end points of the straight line segments i and j respectively, k_i and k_j are the lengths of the two segments, and (a_x, a_y) are the coordinates of point a, with the coordinates of the other 3 points written likewise; then the influence of the directions of the two segments to be merged on the direction of the merged line is considered, and the direction of the straight line on which the merged segment lies is defined as:
t = (k_i·t_i + k_j·t_j) / (k_i + k_j)    formula 26
wherein t represents the direction of the merged straight line, and k_i, t_i respectively represent the length and direction of segment i;
step 2, determining the two end points of the merged straight line segment; after the position and direction of the straight line on which the merged segment lies are determined, the two end points of the segment are determined: a, b, c and d are projected onto the straight line k, and the segment spanned by the extreme projection points is taken as the merging result.
CN202010576614.XA 2020-06-22 2020-06-22 Registration method of laser point cloud and analog image with characteristic line fusion Pending CN111709981A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010576614.XA CN111709981A (en) 2020-06-22 2020-06-22 Registration method of laser point cloud and analog image with characteristic line fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010576614.XA CN111709981A (en) 2020-06-22 2020-06-22 Registration method of laser point cloud and analog image with characteristic line fusion

Publications (1)

Publication Number Publication Date
CN111709981A true CN111709981A (en) 2020-09-25

Family

ID=72541943

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010576614.XA Pending CN111709981A (en) 2020-06-22 2020-06-22 Registration method of laser point cloud and analog image with characteristic line fusion

Country Status (1)

Country Link
CN (1) CN111709981A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019242174A1 (en) * 2018-06-21 2019-12-26 华南理工大学 Method for automatically detecting building structure and generating 3d model based on laser radar
CN109978955A (en) * 2019-03-11 2019-07-05 武汉环宇智行科技有限公司 An efficient annotation method combining laser point clouds and images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
胡玉雷 (Hu Yulei): "Research on registration methods for terrestrial 3D laser scanning point clouds and optical images based on line features", Wuhan University Institutional Repository, 31 December 2018 (2018-12-31), pages 1-118 *

Cited By (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112232248A (en) * 2020-10-22 2021-01-15 中国人民解放军战略支援部队信息工程大学 Method and device for extracting plane features of multi-line LiDAR point cloud data
CN112484746A (en) * 2020-11-26 2021-03-12 上海电力大学 Monocular vision-assisted laser radar odometer method based on ground plane
CN112509018A (en) * 2020-12-03 2021-03-16 湖南大学 Quaternion space optimized three-dimensional image registration method
CN112509018B (en) * 2020-12-03 2023-02-17 湖南大学 Quaternion space optimized three-dimensional image registration method
CN112581457B (en) * 2020-12-23 2023-12-12 武汉理工大学 Pipeline inner surface detection method and device based on three-dimensional point cloud
CN112581457A (en) * 2020-12-23 2021-03-30 武汉理工大学 Pipeline inner surface detection method and device based on three-dimensional point cloud
CN112902878A (en) * 2021-01-21 2021-06-04 中国铁道科学研究院集团有限公司基础设施检测研究所 Method and device for adjusting laser plane of track geometry detection system
CN112819883A (en) * 2021-01-28 2021-05-18 华中科技大学 Rule object detection and positioning method
CN112819883B (en) * 2021-01-28 2024-04-26 华中科技大学 Rule object detection and positioning method
CN112926409A (en) * 2021-02-03 2021-06-08 自然资源部第一海洋研究所 Artificial auxiliary extraction method for aquatic animal frequency modulation type signal time-frequency characteristics
CN112926409B (en) * 2021-02-03 2022-09-02 自然资源部第一海洋研究所 Artificial auxiliary extraction method for aquatic animal frequency modulation type signal time-frequency characteristics
CN113033564A (en) * 2021-02-20 2021-06-25 意欧斯物流科技(上海)有限公司 High-precision and robustness 2D laser point cloud matching method
CN113033564B (en) * 2021-02-20 2022-10-14 意欧斯物流科技(上海)有限公司 High-precision and robust 2D laser point cloud matching method
CN113324473B (en) * 2021-04-30 2023-09-15 螳螂慧视科技有限公司 House measuring method and measuring equipment
CN113324473A (en) * 2021-04-30 2021-08-31 螳螂慧视科技有限公司 House measuring method and measuring equipment
CN113344992A (en) * 2021-05-31 2021-09-03 山东大学 Global point cloud registration method, system, storage medium and equipment
CN113344992B (en) * 2021-05-31 2022-06-28 山东大学 Global point cloud registration method, system, storage medium and equipment
CN113255162A (en) * 2021-06-23 2021-08-13 南京信息工程大学 Vehicle-mounted laser point cloud automatic error correction method based on non-rigid probability model
CN113255162B (en) * 2021-06-23 2023-05-05 南京信息工程大学 Vehicle-mounted laser point cloud automatic error correction method based on non-rigid probability model
CN113838141A (en) * 2021-09-02 2021-12-24 中南大学 External parameter calibration method and system for single line laser radar and visible light camera
CN113763563A (en) * 2021-09-07 2021-12-07 岱悟智能科技(上海)有限公司 Three-dimensional point cloud geometric grid structure generation method based on plane recognition
CN113687336A (en) * 2021-09-09 2021-11-23 北京斯年智驾科技有限公司 Radar calibration method and device, electronic equipment and medium
CN113947639A (en) * 2021-10-27 2022-01-18 北京斯年智驾科技有限公司 Self-adaptive online estimation calibration system and method based on multi-radar-point cloud line characteristics
CN113947639B (en) * 2021-10-27 2023-08-18 北京斯年智驾科技有限公司 Self-adaptive online estimation calibration system and method based on multi-radar point cloud line characteristics
CN113907490A (en) * 2021-10-28 2022-01-11 福建屹立智能化科技有限公司 Dynamic following glue spraying method and system
CN113907490B (en) * 2021-10-28 2022-07-08 福建屹立智能化科技有限公司 Dynamic following glue spraying method and system
CN114187588B (en) * 2021-12-08 2023-01-24 贝壳技术有限公司 Data processing method, device and storage medium
CN114187588A (en) * 2021-12-08 2022-03-15 贝壳技术有限公司 Data processing method, data processing device, storage medium and computer program product
CN115100258A (en) * 2022-08-29 2022-09-23 杭州三坛医疗科技有限公司 Hip joint image registration method, device, equipment and storage medium
CN115100258B (en) * 2022-08-29 2023-02-07 杭州三坛医疗科技有限公司 Hip joint image registration method, device, equipment and storage medium
CN115409880A (en) * 2022-08-31 2022-11-29 深圳前海瑞集科技有限公司 Workpiece data registration method and device, electronic equipment and storage medium
CN115409880B (en) * 2022-08-31 2024-03-22 深圳前海瑞集科技有限公司 Workpiece data registration method and device, electronic equipment and storage medium
CN115423835A (en) * 2022-11-02 2022-12-02 中汽创智科技有限公司 Rod-shaped object point cloud data processing method and device, electronic equipment and storage medium
CN115496865A (en) * 2022-11-18 2022-12-20 南京智程信息科技有限公司 Similar model searching method and device and storage medium
CN115496865B (en) * 2022-11-18 2023-03-14 南京智程信息科技有限公司 Similar model searching method, device and storage medium
CN116071353A (en) * 2023-03-06 2023-05-05 成都盛锴科技有限公司 Bolt assembly detection method and system
CN116071353B (en) * 2023-03-06 2023-09-05 成都盛锴科技有限公司 Bolt assembly detection method and system
CN116385471B (en) * 2023-06-02 2023-09-01 中科微至科技股份有限公司 Laser contour line extraction method based on directional region growth
CN116385471A (en) * 2023-06-02 2023-07-04 中科微至科技股份有限公司 Laser contour line extraction method based on directional region growth
CN116399241B (en) * 2023-06-07 2023-08-15 武汉工程大学 Patch type inductance geometric parameter measurement method and system
CN116399241A (en) * 2023-06-07 2023-07-07 武汉工程大学 Patch type inductance geometric parameter measurement method and system
CN116433756B (en) * 2023-06-15 2023-08-18 浪潮智慧科技有限公司 Surface object space analysis method, device and medium of monocular camera
CN116433756A (en) * 2023-06-15 2023-07-14 浪潮智慧科技有限公司 Surface object space analysis method, device and medium of monocular camera
CN116523984B (en) * 2023-07-05 2023-09-26 矽瞻科技(成都)有限公司 3D point cloud positioning and registering method, device and medium
CN116523984A (en) * 2023-07-05 2023-08-01 矽瞻科技(成都)有限公司 3D point cloud positioning and registering method, device and medium
CN116883469B (en) * 2023-07-20 2024-01-19 中国矿业大学 Point cloud registration method based on EIV model description under plane feature constraint
CN116883469A (en) * 2023-07-20 2023-10-13 中国矿业大学 Point cloud registration method based on EIV model description under plane feature constraint
CN117173227A (en) * 2023-11-01 2023-12-05 法奥意威(苏州)机器人系统有限公司 Point cloud registration method and device based on plane fitting and electronic equipment
CN117173227B (en) * 2023-11-01 2024-01-26 法奥意威(苏州)机器人系统有限公司 Point cloud registration method and device based on plane fitting and electronic equipment
CN117315183A (en) * 2023-11-30 2023-12-29 四川鼎鸿智电装备科技有限公司 Method for constructing three-dimensional map and analyzing operation based on laser radar
CN117315183B (en) * 2023-11-30 2024-02-23 四川鼎鸿智电装备科技有限公司 Method for constructing three-dimensional map and analyzing operation based on laser radar

Similar Documents

Publication Publication Date Title
CN111709981A (en) Registration method of laser point cloud and analog image with characteristic line fusion
CN111598823B (en) Multisource mobile measurement point cloud data space-ground integration method and storage medium
Kong et al. Automatic identification and characterization of discontinuities in rock masses from 3D point clouds
CA2387578C (en) Method for determination of stand attributes and a computer program to perform the method
CN111553245A (en) Vegetation classification method based on machine learning algorithm and multi-source remote sensing data fusion
CN111898688B (en) Airborne LiDAR data tree classification method based on three-dimensional deep learning
US20090154793A1 (en) Digital photogrammetric method and apparatus using intergrated modeling of different types of sensors
CN114998536A (en) Model generation method and device based on novel basic mapping and storage medium
Toschi et al. Combining airborne oblique camera and LiDAR sensors: Investigation and new perspectives
JP2003344048A (en) System for processing forest information
Su et al. Extracting wood point cloud of individual trees based on geometric features
CN112446844B (en) Point cloud feature extraction and registration fusion method
Hu et al. Research on a single-tree point cloud segmentation method based on UAV tilt photography and deep learning algorithm
CN112465732A (en) Registration method of vehicle-mounted laser point cloud and sequence panoramic image
Dutta et al. Characterizing vegetation canopy structure using airborne remote sensing data
Özdemir et al. A multi-purpose benchmark for photogrammetric urban 3D reconstruction in a controlled environment
Sun et al. Large-scale building height estimation from single VHR SAR image using fully convolutional network and GIS building footprints
Yin et al. Individual tree parameters estimation for chinese fir (cunninghamia lanceolate (lamb.) hook) plantations of south china using UAV Oblique Photography: Possibilities and Challenges
Xinmei et al. Passive measurement method of tree height and crown diameter using a smartphone
Li et al. New methodologies for precise building boundary extraction from LiDAR data and high resolution image
CN112669363B (en) Method for measuring three-dimensional green space of urban green space
CN112906719A (en) Standing tree factor measuring method based on consumption-level depth camera
Gujski et al. Machine learning clustering for point clouds optimisation via feature analysis in Cultural Heritage
Itakura et al. Voxel-based leaf area estimation from three-dimensional plant images
CN116758234A (en) Mountain terrain modeling method based on multipoint cloud data fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination