CN112308889B - Point cloud registration method and storage medium using rectangle and oblateness information


Info

Publication number
CN112308889B
CN112308889B (application CN202011146501.2A)
Authority
CN
China
Prior art keywords
point cloud
points
point
frame
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011146501.2A
Other languages
Chinese (zh)
Other versions
CN112308889A (en)
Inventor
史文中 (Shi Wenzhong)
陈彭鑫 (Chen Pengxin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Research Institute HKPU
Original Assignee
Shenzhen Research Institute HKPU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Research Institute HKPU filed Critical Shenzhen Research Institute HKPU
Priority to CN202011146501.2A priority Critical patent/CN112308889B/en
Publication of CN112308889A publication Critical patent/CN112308889A/en
Application granted granted Critical
Publication of CN112308889B publication Critical patent/CN112308889B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Abstract

The invention discloses a point cloud registration method using rectangle and oblateness information, and a storage medium. The method comprises the following steps: acquiring first frame point cloud data, and clustering the first frame point cloud data to obtain point cloud segments; acquiring an oblateness value of the point cloud segment; performing rectangle fitting on the point cloud segment to obtain a rectangular structure, and acquiring the coordinates of the four vertices of the rectangular structure; and acquiring second frame point cloud data, and registering the first frame point cloud data and the second frame point cloud data according to the four vertex coordinates of the rectangular structure and the oblateness value of the point cloud segment. The method segments the first frame of point cloud data with a clustering method, so that point cloud regions can be grown in an ordered sequence, and then acquires the oblateness information of the segmented point cloud segments to match the two frames of point cloud data. Because the oblateness information accurately expresses the structure of the point cloud segments, the stability and reliability of point cloud registration under scene transformation are improved.

Description

Point cloud registration method and storage medium using rectangle and oblateness information
Technical Field
The invention relates to the field of point cloud data, in particular to a point cloud registration method using rectangle and oblateness information, and a storage medium.
Background
Point cloud registration is a fundamental and important topic in three-dimensional computer vision. It is widely applied in three-dimensional reconstruction, simultaneous localization and mapping (SLAM), automatic driving, and the like. However, there are many obstacles in practical use. Researchers have done extensive work on obstacles such as changes in point cloud sparsity, occlusion and partial overlap, but research on scene transformation is still rare. Existing point cloud registration methods for two frames of point cloud data under scene transformation are low in stability and reliability, and a method capable of stably and accurately registering two frames of point cloud data under scene transformation is lacking.
Thus, there is still a need for improvement and development of the prior art.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a point cloud registration method and a storage medium using rectangle and oblateness information, aiming at solving the problem that the prior art lacks a method for stably and accurately registering two frames of point cloud data under scene transformation.
The technical scheme adopted by the invention for solving the problems is as follows:
in a first aspect, an embodiment of the present invention provides a point cloud registration method using rectangle and oblateness information, where the method includes:
acquiring first frame point cloud data, and clustering the first frame point cloud data to obtain point cloud segments;
acquiring an oblateness value of the point cloud segment;
performing rectangle fitting on the point cloud segment to obtain a rectangular structure, and acquiring four vertex coordinates of the rectangular structure;
and acquiring second frame point cloud data, and registering the first frame point cloud data and the second frame point cloud data according to the four vertex coordinates of the rectangular structure and the oblateness value of the point cloud segment.
In one embodiment, the obtaining the first frame of point cloud data and clustering the first frame of point cloud data to obtain the point cloud segment includes:
acquiring first frame point cloud data, selecting a test point in the first frame point cloud data, and drawing a sphere of a preset radius centered on the test point to obtain a test sphere;
obtaining the points in the test sphere as neighborhood points, and drawing, for each neighborhood point, a sphere of the preset radius centered on that neighborhood point to obtain a neighborhood sphere;
acquiring the number of points in the neighborhood sphere, the oblateness value of the point set in the neighborhood sphere, and the projection distance of the offset between the neighborhood point and the test point on the normal vector of the plane corresponding to the first frame of point cloud data, and clustering the first frame of point cloud data according to the number of points, the oblateness value of the point set and the projection distance to obtain point cloud segments.
In an embodiment, the acquiring the number of points in the neighborhood sphere, the oblateness value of the point set in the neighborhood sphere, and the projection distance of the offset between the neighborhood point and the test point on the normal vector of the plane corresponding to the first frame of point cloud data, and clustering the first frame of point cloud data according to the number of points, the oblateness value of the point set and the projection distance to obtain the point cloud segment includes:
acquiring the number of points and the coordinates of the points in the neighborhood sphere, and acquiring the oblateness value of the point set in the neighborhood sphere according to the number of the points and the coordinates of the points in the neighborhood sphere;
acquiring the projection distance of the offset between the neighborhood point and the test point on the normal vector of the plane corresponding to the first frame point cloud data;
respectively comparing the number of points in the neighborhood sphere, the projection distance, and the oblateness value of the point set in the neighborhood sphere with a point number threshold, a projection distance threshold, and an oblateness value threshold of the point set;
and when the number of points in the neighborhood sphere, the projection distance and the oblateness value of the point set in the neighborhood sphere respectively satisfy the point number threshold, the projection distance threshold and the oblateness value threshold of the point set, clustering the neighborhood point with the test point to obtain point cloud segments.
In one embodiment, the obtaining the number of points and the coordinates of the points in the neighborhood sphere, and obtaining the oblateness value of the point set in the neighborhood sphere according to the number of points and the coordinates of the points in the neighborhood sphere includes:
acquiring the number of points and the coordinates of the points in the neighborhood sphere, and acquiring the arithmetic mean coordinates in the neighborhood sphere according to the number of the points and the coordinates of the points in the neighborhood sphere;
obtaining a first scattering matrix corresponding to the neighborhood sphere according to the number of the points in the neighborhood sphere, the coordinates of the points and the arithmetic mean coordinate;
and acquiring a first eigenvalue and a third eigenvalue of the first scatter matrix, and obtaining the oblateness value of the point set in the neighborhood sphere according to the first eigenvalue and the third eigenvalue of the first scatter matrix.
In one embodiment, the obtaining the oblateness value of the point cloud segment includes:
acquiring the number of points and the coordinates of the points in the point cloud segment, and acquiring arithmetic mean coordinates according to the number of the points and the coordinates of the points in the point cloud segment;
obtaining a second scatter matrix corresponding to the point cloud segment according to the number of the points in the point cloud segment, the coordinates of the points and the arithmetic mean coordinate;
and acquiring a first eigenvalue and a third eigenvalue of the second scatter matrix, and obtaining the oblateness value of the point cloud segment according to the first eigenvalue and the third eigenvalue of the second scatter matrix.
In one embodiment, the performing rectangle fitting on the point cloud segment to obtain a rectangular structure, and obtaining the coordinate data of the four vertices of the rectangular structure includes:
performing rectangle fitting on the point cloud segment to obtain a rectangular structure;
acquiring a second eigenvalue of the second scatter matrix;
and obtaining the coordinate data of the four vertices of the rectangular structure according to the coordinates of the points in the point cloud segment, the arithmetic mean coordinate, the first eigenvalue and the second eigenvalue.
In one embodiment, the acquiring the second frame of point cloud data, and registering the first frame of point cloud data and the second frame of point cloud data according to the coordinate data of the four vertices of the rectangular structure and the oblateness value of the point cloud segment includes:
acquiring second frame point cloud data and a six-degree-of-freedom transformation quantity in the iteration process;
obtaining, according to the six-degree-of-freedom transformation quantity, coordinate data of a transformation point in the coordinate system of the first frame of point cloud data generated from a point in the coordinate system of the second frame of point cloud data, and determining the point cloud segment corresponding to the transformation point;
obtaining a target pose transformation matrix according to the oblateness value of the point cloud segment, the coordinate data of the four vertices of the rectangular structure corresponding to the point cloud segment, and the coordinate data of the transformation point;
and registering the second frame of point cloud data and the first frame of point cloud data according to the target pose transformation matrix.
In one embodiment, the obtaining, according to the six-degree-of-freedom transformation quantity, coordinate data of a transformation point in the coordinate system of the first frame point cloud data generated from a point in the coordinate system of the second frame point cloud data, and determining the point cloud segment corresponding to the transformation point includes:
obtaining a pose transformation matrix according to the six-degree-of-freedom transformation quantity;
performing rigid-body transformation on the points in the coordinate system of the second frame of point cloud data according to the pose transformation matrix to obtain transformation points in the coordinate system of the first frame of point cloud data;
and acquiring the coordinate data of the transformation point, searching for the point cloud segment with the minimum distance to the transformation point according to the coordinate data of the transformation point, and obtaining the point cloud segment corresponding to the transformation point.
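A minimal sketch of these two steps: a rigid-body transform of second-frame points into the first frame's coordinate system, followed by a nearest-segment lookup. Representing each segment by its centroid, and the rotation and translation values below, are illustrative assumptions, not the patent's data; the patent only specifies searching for the segment with the minimum distance value.

```python
import numpy as np

def to_first_frame(points, R, t):
    """Rigid-body transform of second-frame points: p' = R p + t."""
    return np.asarray(points, dtype=float) @ np.asarray(R, dtype=float).T + t

def nearest_segment(p, segment_centroids):
    """Index of the segment whose representative (here: centroid) is closest
    to the transformation point p."""
    d = np.linalg.norm(np.asarray(segment_centroids, dtype=float) - p, axis=1)
    return int(np.argmin(d))

# illustrative data: a 90-degree yaw about z and a unit translation along x
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
centroids = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
```

In a full pipeline this pairing would be repeated for every second-frame point on each iteration of the pose optimization.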
In one embodiment, the obtaining a target pose transformation matrix according to the oblateness value of the point cloud segment, the coordinate data of the four vertices of the rectangular structure corresponding to the point cloud segment, and the coordinate data of the transformation point includes:
obtaining a squared distance value according to the coordinate data of the transformation point, the coordinate data of the four vertices of the rectangular structure corresponding to the point cloud segment, and a squared distance function;
obtaining a likelihood function value of a likelihood function generated based on the pose transformation matrix according to the oblateness value and the squared distance value of the point cloud segment;
and acquiring the pose transformation matrix input into the likelihood function when the likelihood function value is at its maximum, to obtain the target pose transformation matrix.
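The relation between the likelihood function and the oblateness-weighted squared distances can be illustrated in one dimension. This is a hypothetical sketch: we assume a likelihood of the form L(T) = Π exp(−f · d²(T)), so that maximizing L is equivalent to minimizing Σ f · d², and a brute-force grid search stands in for whatever optimizer the patent actually uses.

```python
import numpy as np

def neg_log_likelihood(tx, src, dst, f):
    """-log L for a pure x-translation tx: the sum of oblateness-weighted
    squared residuals between transformed source points and matched targets."""
    d2 = np.sum((src + np.array([tx, 0.0]) - dst) ** 2, axis=1)
    return float(np.sum(f * d2))

# matched pairs: the source is the target shifted by -2 along x (illustrative)
dst = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
src = dst - np.array([2.0, 0.0])
f = np.array([1.0, 0.9, 0.8])        # oblateness weight per matched segment

grid = np.linspace(-3.0, 3.0, 601)   # candidate translations, step 0.01
best_tx = grid[np.argmin([neg_log_likelihood(tx, src, dst, f) for tx in grid])]
```

The grid search recovers the translation that undoes the shift, i.e. the pose with the maximum likelihood value; in the patent the same principle is applied over all six degrees of freedom with point-to-rectangle squared distances.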
In a second aspect, an embodiment of the present invention further provides a storage medium having a plurality of instructions stored thereon, where the instructions are adapted to be loaded and executed by a processor to implement the steps of any one of the above point cloud registration methods using rectangle and oblateness information.
The invention has the beneficial effects that: the method segments the first frame of point cloud data with a clustering method, so that point cloud regions can be grown in an ordered sequence, and then acquires the oblateness information of the segmented point cloud segments to match the two frames of point cloud data. Because the oblateness information accurately expresses the structure of the point cloud segments, the stability and reliability of point cloud registration under scene transformation are improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a point cloud registration method using rectangle and oblateness information according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart of acquiring a point cloud segment according to an embodiment of the present invention.
Fig. 3 is a schematic flowchart of acquiring an oblateness value of a point cloud segment according to an embodiment of the present invention.
Fig. 4 is a schematic flowchart of acquiring coordinates of four vertices of a rectangular structure according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a rectangular structure provided by an embodiment of the present invention.
fig. 6 is a schematic flowchart of the registration of the first frame point cloud data and the second frame point cloud data according to the embodiment of the present invention.
Fig. 7 is a reference diagram for calculating the squared distance from a transformation point to a rectangular structure according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of two partially overlapped frames of point cloud data according to an embodiment of the present invention.
Fig. 9 is a diagram illustrating an effect of registration of two frames of partially overlapped point cloud data according to the method of the present invention.
Fig. 10 is a schematic diagram of two frames of partially overlapped point cloud data after adding outliers and noise according to an embodiment of the present invention.
Fig. 11 is an effect diagram after registration of two frames of point cloud data after outliers and noise are added according to the embodiment of the present invention.
Fig. 12 is a functional block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer and clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be noted that, if directional indications (such as up, down, left, right, front and back) are involved in the embodiments of the present invention, the directional indications are only used to explain the relative positional relationship, movement and the like between the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indications change accordingly.
Point cloud registration is a fundamental and important topic in three-dimensional computer vision. It is widely applied in three-dimensional reconstruction, simultaneous localization and mapping (SLAM), automatic driving, and the like. However, there are many obstacles in practical use. Researchers have done extensive work on obstacles such as changes in point cloud sparsity, occlusion and partial overlap, but research on scene transformation is still rare. Scene transformation brings great challenges to point cloud registration, and the reasons for registration failure in the prior art include the following two aspects:
(1) Point, line and plane features are scene-critical. Point-to-point pairing is often unreliable, because an actual laser scan contains no exactly identical point pairs and such pairing is susceptible to noise points. Point-to-line pairing can alleviate the problems of point pairs, but performs well only in two-dimensional registration. To generalize to the three-dimensional registration problem, point-to-plane pairing takes the plane normal vector and the curvature information of the scan line into account, which improves registration accuracy. However, point-to-plane pairing depends heavily on the accuracy of the local normal vector, and that accuracy is limited by the sparsity of the point cloud.
(2) Feature expressions based on the point distribution may rest on false assumptions. For example, the prior art cuts three-dimensional space into a grid structure and assumes that the set of points within each grid cell is bell-shaped, fitting it to a normal distribution. Such distribution assumptions are often inaccurate and sensitive to the grid size: too small a grid shrinks the convergence domain of the algorithm, while too large a grid blurs details and reduces registration accuracy.
In view of the prior art lacking a point cloud registration method that remains stable under scene transformation, the present invention provides a point cloud registration method using rectangle and oblateness information, which can faithfully express the structure of a point cloud through the rectangle and oblateness information, thereby accurately registering two frames of point cloud data.
As shown in fig. 1, the method comprises the steps of:
s100, acquiring first frame point cloud data, and clustering the first frame point cloud data to obtain a point cloud segment.
Specifically, in this embodiment, the first frame point cloud data to be registered is obtained and clustered according to a preset clustering algorithm, where the clustering algorithm includes specific constraint conditions, so that the first frame point cloud data is segmented into different point cloud segments.
In one implementation, as shown in fig. 2, the step S100 specifically includes the following steps:
step S110, acquiring first frame point cloud data, selecting a test point in the first frame point cloud data, and drawing a sphere of a preset radius centered on the test point to obtain a test sphere;
step S120, obtaining the points in the test sphere as neighborhood points, and drawing, for each neighborhood point, a sphere of the preset radius centered on that neighborhood point to obtain a neighborhood sphere;
step S130, acquiring the number of points in the neighborhood sphere, the oblateness value of the point set in the neighborhood sphere, and the projection distance of the offset between the neighborhood point and the test point on the normal vector of the plane corresponding to the first frame of point cloud data, and clustering the first frame of point cloud data according to the number of points, the oblateness value of the point set and the projection distance to obtain point cloud segments.
First, in this embodiment, a test point is randomly selected from the first frame point cloud data to be registered, and a sphere of a preset radius centered on the test point is taken to obtain a test sphere. The points inside the test sphere are then obtained as neighborhood points. In order to determine which of the neighborhood points should be clustered with the test point into the same point set, in this embodiment each neighborhood point is in turn taken as a sphere center, with the same preset radius, to obtain a neighborhood sphere. The number of points in the neighborhood sphere, the oblateness value of the point set in the neighborhood sphere, and the projection distance of the offset between the neighborhood point and the test point on the normal vector of the plane corresponding to the first frame of point cloud data are then acquired, and the neighborhood points that should be clustered with the test point into the same point set are determined according to the number of points, the oblateness value of the point set and the projection distance, thereby segmenting the first frame of point cloud data. It should be noted that all points in the test sphere region are neighborhood points, i.e. there are typically several neighborhood points rather than one. In one mode, for each neighborhood point, a new neighborhood sphere is obtained with that point as the sphere center and the preset radius, and clustering proceeds according to the above steps; that is, this step is an iterative process, and points that have already been clustered are not clustered again, until all points in the first frame of point cloud data have been traversed.
In an implementation manner, in order to accurately segment the first frame point cloud data, this embodiment first obtains the number of points and the coordinates of the points in the neighborhood sphere, and obtains from them the oblateness value of the point set in the neighborhood sphere. At the same time, the projection distance of the offset between the neighborhood point and the test point on the normal vector of the plane corresponding to the first frame point cloud data is acquired. The number of points in the neighborhood sphere, the projection distance, and the oblateness value of the point set in the neighborhood sphere are then respectively compared with a point number threshold, a projection distance threshold, and an oblateness value threshold of the point set; when the number of points, the projection distance and the oblateness value respectively satisfy these thresholds, the neighborhood point is clustered with the test point to obtain point cloud segments.
In short, through the three constraint conditions, the point cloud region can be grown in an ordered sequence of surface structures, line structures and other structures, so that the first frame of point cloud data is correctly segmented. The first constraint condition is the number of points in the neighborhood sphere, which reflects the point cloud density in the neighborhood sphere region; with this constraint, spatially separated point clouds, including outliers, can be correctly segmented. However, spatially continuous turning lines cannot be correctly segmented with the point cloud density constraint alone, so this embodiment adds a second constraint condition: the projection distance of the offset between the neighborhood point and the test point on the normal vector of the plane corresponding to the first frame point cloud data represents, to a certain extent, the degree of deviation along the normal vector. Limiting the projection distance therefore constrains the growth direction of the point cloud, and after this constraint is added, spatially continuous turning lines can be well separated. However, with only the density constraint and the direction constraint, separation points and turning points may still not correspond correctly, so a third constraint condition is added in this embodiment: the oblateness value of the point set in the neighborhood sphere, which represents the flatness of the point cloud in the neighborhood sphere region. After this constraint is added, separation points and turning points can be correctly matched, and an ideal segmentation effect is obtained.
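A minimal sketch of this three-constraint region growing follows. It is our own illustration, not the patent's implementation: the threshold values, the brute-force neighbor search, and estimating the plane normal from the seed's test sphere are all assumptions.

```python
import numpy as np
from collections import deque

def oblateness(pts):
    # oblateness from the scatter-matrix eigenvalues; the clip guards against
    # tiny negative eigenvalues produced by floating-point round-off
    d = pts - pts.mean(axis=0)
    lam = np.clip(np.linalg.eigvalsh(d.T @ d), 0.0, None)  # ascending order
    return (np.sqrt(lam[2]) - np.sqrt(lam[0])) / max(np.sqrt(lam[2]), 1e-12)

def grow_segment(cloud, seed, radius=0.6, min_pts=3, max_proj=0.05, min_f=0.8):
    cloud = np.asarray(cloud, dtype=float)

    def ball(i):  # indices of points inside the sphere of `radius` around point i
        return np.where(np.linalg.norm(cloud - cloud[i], axis=1) <= radius)[0]

    # plane normal at the seed: eigenvector of the test sphere's scatter matrix
    # belonging to the smallest eigenvalue (an assumed normal estimate)
    nb = cloud[ball(seed)]
    d = nb - nb.mean(axis=0)
    normal = np.linalg.eigh(d.T @ d)[1][:, 0]

    segment, queue = {seed}, deque([seed])
    while queue:                                  # iterative region growing
        for j in ball(queue.popleft()):
            if j in segment:
                continue
            sphere = ball(j)                      # neighborhood sphere of candidate j
            proj = abs((cloud[j] - cloud[seed]) @ normal)
            if (len(sphere) >= min_pts            # constraint 1: density
                    and proj <= max_proj          # constraint 2: direction
                    and oblateness(cloud[sphere]) >= min_f):  # constraint 3
                segment.add(j)
                queue.append(j)
    return segment
```

Growing from a corner of a planar grid collects the whole plane, while a spatially separated outlier is never reached, matching the behavior the first constraint is meant to provide.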
In one implementation, to obtain the oblateness value of the point set in the neighborhood sphere, this embodiment first obtains the number of points and the coordinates of the points in the neighborhood sphere, and obtains from them the arithmetic mean coordinate in the neighborhood sphere. A first scatter matrix corresponding to the neighborhood sphere is then obtained according to the number of points in the neighborhood sphere, the coordinates of the points and the arithmetic mean coordinate. Finally, the first eigenvalue and the third eigenvalue of the first scatter matrix are acquired, and the oblateness value of the point set in the neighborhood sphere is obtained from them. The eigenvalues of the scatter matrix are an inherent property of the matrix and, in one implementation, may be obtained by singular value decomposition (SVD). Generally, when processing a three-dimensional point cloud, the scatter matrix is a 3×3 matrix with three eigenvalues, namely a first eigenvalue, a second eigenvalue and a third eigenvalue in descending order of magnitude.
The conventional calculation of the oblateness value is as follows: it is assumed that all points within a point cloud region fall inside an ellipsoid, i.e. every point p = (x, y, z) within the region satisfies the following formula (1):

x²/a² + y²/b² + z²/c² ≤ 1 (1)

wherein a, b and c (with a ≥ b ≥ c) are the lengths of the three semi-axes of the ellipsoid, and the oblateness f of the point cloud segment is defined as the following formula (2):

f = (a − c) / a (2)

Expressing formula (1) in the matrix form shown below yields formula (3), and formulas (4) and (5) can be obtained from formula (3):

pᵀ diag(1/a², 1/b², 1/c²) p ≤ 1 (3)

a ∝ √λL (4)

c ∝ √λS (5)

wherein

S = Σᵢ₌₁…ₘ (pᵢ − μ)(pᵢ − μ)ᵀ

is the scatter matrix of the point cloud segment, m represents the number of points, and μ represents the arithmetic mean coordinate of all the points. To save computation time, this embodiment computes the scatter matrix of the point set in the neighborhood sphere instead of following the conventional calculation of the oblateness value, and obtains the oblateness value of the region from the first eigenvalue and the third eigenvalue of the scatter matrix, i.e. according to formula (6):

f = (√λL − √λS) / √λL (6)

wherein λL and λS are the maximum and minimum eigenvalues of the scatter matrix, i.e. the first eigenvalue and the third eigenvalue of the scatter matrix, respectively.
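As a minimal sketch (our own illustration, not the patent's code), the oblateness value of formula (6) can be computed from the eigenvalues of the scatter matrix; the eigenvalue clipping, which guards against tiny negative values from floating-point round-off, is our addition.

```python
import numpy as np

def scatter_matrix(points):
    """S = sum_i (p_i - mu)(p_i - mu)^T, with mu the arithmetic mean coordinate."""
    pts = np.asarray(points, dtype=float)
    d = pts - pts.mean(axis=0)
    return d.T @ d                      # 3x3 for a three-dimensional point cloud

def oblateness(points):
    """Oblateness from the first (largest) and third (smallest) eigenvalues."""
    lam = np.clip(np.linalg.eigvalsh(scatter_matrix(points)), 0.0, None)
    lam_S, lam_L = lam[0], lam[2]       # eigvalsh returns ascending eigenvalues
    return (np.sqrt(lam_L) - np.sqrt(lam_S)) / np.sqrt(lam_L)
```

A perfectly planar point set has λS = 0 and therefore f = 1, while an isotropic point set (all eigenvalues equal) has f = 0, which is why the oblateness value discriminates surface-like regions from volume-like ones.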
In an implementation manner, in order to ensure the reliability of the test point, after the test point is selected, the number of points in the test sphere and the oblateness value of its point set need to be obtained. The number of points is compared with a preset number of points; when the number of points is less than the preset number, the test point is judged to be an invalid test point, and a new test point is selected from the first frame of point cloud data. The oblateness value of the point set is likewise compared with a preset oblateness value; when it is smaller than the preset oblateness value, the test point is also judged to be invalid, and a new test point is selected from the first frame point cloud data. Only when the number of points is greater than or equal to the preset number of points and the oblateness value of the point set is greater than or equal to the preset oblateness value can the test point be used as a valid test point.
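This seed screening can be sketched as follows; the specific threshold values are illustrative assumptions, since the patent leaves the preset number of points and preset oblateness value unspecified.

```python
import numpy as np

def valid_test_point(test_sphere_points, preset_count=5, preset_oblateness=0.8):
    """A test point is valid only if its test sphere holds enough points AND
    the point set in the sphere is sufficiently oblate; otherwise a new test
    point must be selected from the first frame of point cloud data."""
    pts = np.asarray(test_sphere_points, dtype=float)
    if len(pts) < preset_count:
        return False                               # too sparse: invalid seed
    d = pts - pts.mean(axis=0)
    lam = np.clip(np.linalg.eigvalsh(d.T @ d), 0.0, None)
    f = (np.sqrt(lam[2]) - np.sqrt(lam[0])) / max(np.sqrt(lam[2]), 1e-12)
    return bool(f >= preset_oblateness)            # not flat enough: invalid
```

A planar patch passes both checks, while a near-isotropic blob fails the oblateness check and a handful of points fails the count check.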
After the first frame of point cloud data is clustered to obtain point cloud segments, the method further comprises the following steps:
and S200, acquiring a flatness value of the point cloud segment.
Specifically, in order to obtain the oblateness value of the point cloud segment, in one implementation, as shown in fig. 3, step S200 specifically includes the following steps:
Step S210, acquiring the number of points and the coordinates of the points in the point cloud segment, and obtaining the arithmetic mean coordinate according to the number of points and the coordinates of the points in the point cloud segment;
Step S220, obtaining a second scatter matrix corresponding to the point cloud segment according to the number of points in the point cloud segment, the coordinates of the points and the arithmetic mean coordinate;
Step S230, acquiring a first eigenvalue and a third eigenvalue of the second scatter matrix, and obtaining the oblateness value of the point cloud segment according to the first eigenvalue and the third eigenvalue of the second scatter matrix.
In this embodiment, the number of points and the coordinates of the points in the point cloud segment are first acquired, and the arithmetic mean coordinate is obtained from them. A second scatter matrix corresponding to the point cloud segment is then obtained according to the number of points, the coordinates of the points and the arithmetic mean coordinate. Finally, the first eigenvalue and the third eigenvalue of the second scatter matrix are acquired, and the oblateness value of the point cloud segment is obtained according to these two eigenvalues. This step is similar to the process of obtaining the oblateness value of the point set in the neighborhood sphere region described above, and is not repeated here.
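Assuming the standard scatter-matrix construction described in steps S210–S230 (sum of outer products of mean-centered coordinates), the computation can be sketched in Python as below. Since equation (6) appears only as an image placeholder, the concrete ratio (λL − λS)/λL used here is an illustrative assumption, not necessarily the patent's exact formula.

```python
import numpy as np

def oblateness(points: np.ndarray) -> float:
    """Oblateness of a point set, derived from its scatter matrix.

    points: (m, 3) array of coordinates.
    The ratio (lam_max - lam_min) / lam_max is an assumption; the
    patent's equation (6) is an image placeholder in the source.
    """
    mu = points.mean(axis=0)             # arithmetic mean coordinate
    centered = points - mu
    sigma = centered.T @ centered        # 3x3 scatter matrix
    eigvals = np.linalg.eigvalsh(sigma)  # eigenvalues in ascending order
    lam_min, lam_max = eigvals[0], eigvals[-1]
    return float((lam_max - lam_min) / lam_max)

# A thin planar patch gives an oblateness close to 1; an isotropic
# ball of points gives a value close to 0.
rng = np.random.default_rng(0)
patch = rng.normal(size=(500, 3)) * np.array([10.0, 10.0, 0.01])
print(oblateness(patch))
```

A nearly planar region thus gets a high oblateness value, which is what makes the measure useful for selecting reliable test points.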
In addition, rectangle fitting needs to be performed on the segmented point cloud segments. As shown in fig. 1, the method further includes the following steps:
Step S300, performing rectangle fitting on the point cloud segment to obtain a rectangular structure, and acquiring the four vertex coordinates of the rectangular structure.
In this embodiment, after the first frame of point cloud data is divided into a plurality of point cloud segments, each point cloud segment needs to be subjected to rectangle fitting to obtain a rectangular structure. In order to obtain coordinates of four vertices of the rectangular structure, in an implementation manner, as shown in fig. 4, the step S300 specifically includes the following steps:
Step S310, performing rectangle fitting on the point cloud segment to obtain a rectangular structure;
Step S320, acquiring a second eigenvalue of the second scatter matrix;
Step S330, obtaining the coordinate data of the four vertices of the rectangular structure according to the coordinates of the points in the point cloud segment, the arithmetic mean coordinate, the first eigenvalue and the second eigenvalue.
Specifically, performing rectangle fitting on the point cloud segment means calculating the coordinates of each marked point in fig. 5, which are obtained mainly through the following formulas (7) to (14):
[Equations (7)–(14): image placeholders in the original; they give the coordinates of the marked points and the four vertices a, b, c, d of the rectangular structure in terms of the points pi, the mean coordinate μ, and the eigenvectors v1, v2.]
where pi represents the ith point, μ is the arithmetic mean coordinate of the point cloud segment, v1 and v2 are the eigenvectors corresponding to the first and second eigenvalues of the scatter matrix Σ, and a, b, c, d are the coordinates of the four vertices of the rectangular structure.
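Since equations (7)–(14) appear only as image placeholders, the sketch below illustrates one common way to realize the described rectangle fitting: project the mean-centered points onto the first two eigenvectors v1, v2 of the scatter matrix and take the extreme projections as the rectangle's extent. The function name and this specific construction are assumptions, not the patent's exact formulas.

```python
import numpy as np

def fit_rectangle(points: np.ndarray):
    """Fit a rectangle to a roughly planar point set.

    Returns the four vertex coordinates a, b, c, d.  Taking the extreme
    projections onto the first two eigenvectors is an assumed reading of
    equations (7)-(14), which are image placeholders in the original.
    """
    mu = points.mean(axis=0)
    centered = points - mu
    sigma = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(sigma)   # ascending eigenvalues
    v1 = eigvecs[:, -1]                        # first (largest) eigenvector
    v2 = eigvecs[:, -2]                        # second eigenvector
    s1 = centered @ v1                         # projections along v1
    s2 = centered @ v2                         # projections along v2
    corners = []
    for c1 in (s1.min(), s1.max()):
        for c2 in (s2.min(), s2.max()):
            corners.append(mu + c1 * v1 + c2 * v2)
    # order the corners so that a-b-c-d goes around the rectangle
    a, b, c, d = corners[0], corners[1], corners[3], corners[2]
    return a, b, c, d
```

For a planar grid of points spanning x ∈ [0, 4], y ∈ [0, 2] at z = 0, the fitted vertices come out at the four corners of that grid.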
In order to achieve accurate registration of two frames of point cloud data, as shown in fig. 1, the method further includes the following steps:
Step S400, acquiring second frame point cloud data, and registering the first frame point cloud data and the second frame point cloud data according to the four vertex coordinates of the rectangular structure and the oblateness value of the point cloud segment.
Specifically, the second frame of point cloud data requiring registration is acquired first, and then the optimal conditions for registering the two frames of point cloud data are obtained according to the four vertex coordinates of the rectangular structures and the oblateness values of the point cloud segments, so as to register the two frames of point cloud data.
In one implementation, as shown in fig. 6, the step S400 specifically includes the following steps:
Step S410, acquiring second frame point cloud data and a six-degree-of-freedom transformation amount in an iteration process;
Step S420, obtaining coordinate data of transformation points under the coordinates of the first frame of point cloud data generated based on points under the coordinate system of the second frame of point cloud data according to the six-degree-of-freedom transformation amount, and determining the point cloud segments corresponding to the transformation points;
Step S430, obtaining a target pose transformation matrix according to the oblateness value of the point cloud segment, the coordinate data of the four vertices of the rectangular structure corresponding to the point cloud segment, and the coordinate data of the transformation point;
Step S440, registering the second frame of point cloud data and the first frame of point cloud data according to the target pose transformation matrix.
To register the first frame and second frame point cloud data, this embodiment first acquires the second frame point cloud data and the six-degree-of-freedom transformation amount in the iterative process. Any complex spatial motion of a rigid object can be decomposed into a translation and a rotation in space, expressed concretely as translations along the X, Y, Z directions and rotations about the X, Y, Z axes, i.e., a transformation with six degrees of freedom. Therefore, once the six-degree-of-freedom transformation amount of a point in the coordinate system of one frame of point cloud data is determined for an iteration, its corresponding point in the coordinate system of the other frame after that iteration, i.e., the transformation point, can be obtained. Accordingly, each point in the coordinate system of the second frame of point cloud data is transformed by the six-degree-of-freedom transformation amount to obtain a transformation point in the coordinates of the first frame of point cloud data. The point cloud segment corresponding to the transformation point is then searched for according to a preset condition to obtain the target point cloud segment.
In one implementation, the step of obtaining, according to the six-degree-of-freedom variation, coordinate data of a transformation point in the coordinate of the first frame point cloud data generated based on a point in the coordinate system of the second frame point cloud data, and determining a point cloud segment corresponding to the transformation point specifically includes: obtaining a pose transformation matrix according to the six-degree-of-freedom variation; and performing rigid body transformation on the point under the coordinate system of the second frame of point cloud data according to the pose transformation matrix to obtain a transformation point under the coordinate of the first frame of point cloud data. And then acquiring coordinate data of the transformation points, and searching a point cloud segment with the minimum distance value with the transformation points according to the coordinate data of the transformation points to obtain the point cloud segment corresponding to the transformation points.
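The segment search described above, finding the point cloud segment with the minimum distance value to the transformation point, can be sketched by comparing the transformed point against the segment centroids. The brute-force form below is an illustrative assumption (a KD-tree over the centroids would be the scalable variant when the number of segments is large):

```python
import numpy as np

def nearest_segment(p_transformed: np.ndarray, centroids: np.ndarray) -> int:
    """Index of the point cloud segment whose centroid is closest to p'.

    centroids: (k, 3) array, one arithmetic-mean coordinate per segment.
    Using centroids as segment representatives is an assumption; the
    patent only specifies finding the segment with minimum distance.
    """
    d2 = np.sum((centroids - p_transformed) ** 2, axis=1)  # squared distances
    return int(np.argmin(d2))

centroids = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
print(nearest_segment(np.array([4.2, 0.3, 0.0]), centroids))  # → 1
```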
For example, assume a point p = [px, py, pz]^T in the second frame point cloud data and a six-degree-of-freedom transformation amount ξ = [tx, ty, tz, α, β, γ]^T in the iterative process are known. The transformed point p' is calculated by the rigid body transformation T(ξ), and p' is considered to lie in the coordinate system of the reference point cloud. The specific process is shown in formula (15):
p' = T(ξ)p = R(α, β, γ)·p + [tx, ty, tz]^T (15)

where R(α, β, γ) is the rotation matrix composed of the rotations α, β, γ about the X, Y, Z axes.
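A minimal sketch of the rigid body transformation T(ξ), assuming the rotation is composed of rotations about the X, Y and Z axes in that order (equation (15) is an image placeholder in the original, so the rotation convention is an assumption):

```python
import numpy as np

def transform_point(p, xi):
    """Apply the six-DoF transform xi = [tx, ty, tz, alpha, beta, gamma] to p.

    The rotation order Rz @ Ry @ Rx (about X, then Y, then Z) is an
    assumed convention, followed by the translation [tx, ty, tz].
    """
    tx, ty, tz, a, b, g = xi
    ca, sa = np.cos(a), np.sin(a)
    cb, sb = np.cos(b), np.sin(b)
    cg, sg = np.cos(g), np.sin(g)
    rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])   # rotation about X
    ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])   # rotation about Y
    rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])   # rotation about Z
    return rz @ ry @ rx @ np.asarray(p, float) + np.array([tx, ty, tz])
```

For instance, a pure rotation of γ = π/2 about the Z axis maps [1, 0, 0] to [0, 1, 0], and a pure translation simply shifts the point.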
A target pose transformation matrix is then obtained according to the oblateness value of the point cloud segment, the coordinate data of the four vertices of the rectangular structure corresponding to the point cloud segment, and the coordinate data of the transformation point. In short, the final objective of this embodiment is to optimize the pose transformation matrix between the two frames of point cloud data; the matrix obtained after optimization is the target pose transformation matrix, which realizes accurate registration between the two frames of point cloud data.
To obtain the target pose transformation matrix, in one implementation this embodiment first obtains a squared distance value from the coordinate data of the transformation point, the coordinate data of the four vertices of the rectangular structure corresponding to the point cloud segment, and a squared-distance function. It then obtains the likelihood function value of a likelihood function generated based on the pose transformation matrix from the oblateness value of the point cloud segment and the squared distance value. Finally, it obtains the pose transformation matrix input to the likelihood function when the likelihood function value is at its maximum, which gives the target pose transformation matrix.
For example, as shown in fig. 7, assume that the rectangular structure with the four coordinate points a, b, c and d as vertices is the one closest to the transformation point. The four sides of the rectangle are extended to obtain the dotted lines in the figure, and each dotted line is swept into a plane along the normal vector of the plane containing the rectangle, so that space is finally divided into 9 mutually non-overlapping subspaces. The dots in the figure represent transformed points p' falling in different subspaces; each arrow points to the point on the rectangle closest to p', and the length of the arrow represents the point-to-rectangle distance. Specifically, the distances from a point to a rectangle fall into the following three categories:
1) Point-to-vertex distance: in subspaces 2, 3, 4, 5, the closest point to p' is a vertex a, b, c or d of the rectangle, so the distance from p' to that vertex is taken as the point-to-rectangle distance.
2) Point-to-line distance: in subspaces 6, 7, 8, 9, the closest point to p' is the foot of the perpendicular from p' to its nearest rectangle side, so the distance from p' to the line containing that side is taken as the point-to-rectangle distance.
3) Point-to-plane distance: in subspace 1, the closest point to p' is its projection along the normal vector onto the plane of the rectangle, so the distance from p' to that plane is taken as the point-to-rectangle distance.
Therefore, the squared distance value D is solved by substituting the point p' obtained by the rigid body transformation into the squared-distance function D(p'), shown as equation (16):

[Equation (16): image placeholder in the original; it gives the piecewise squared-distance function D(p') over the 9 subspaces.]

(16)
where a, b, c, d are the four vertices of the rectangle in the figure, e1, e2 are the unit direction vectors of adjacent sides of the rectangle, and r1, r2 represent the dividing planes in fig. 7. The likelihood function value of the likelihood function generated based on the pose transformation matrix is then obtained according to the oblateness value of the point cloud segment and the squared distance value.
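The three distance cases above collapse into a single computation if the in-plane coordinates of p' along the unit edge directions e1, e2 are clamped to the rectangle's extent. The sketch below illustrates that idea; it is not the patent's exact D(p'), which appears only as an image placeholder, and the side lengths len1, len2 are assumed inputs.

```python
import numpy as np

def point_to_rectangle_sq(p, a, e1, e2, len1, len2):
    """Squared distance from point p to the rectangle with vertex a and
    unit edge directions e1, e2 of lengths len1, len2.

    Clamping the in-plane coordinates covers all three cases of the text:
    inside the footprint (point-to-plane), beyond one edge (point-to-line),
    beyond a corner (point-to-vertex).
    """
    p = np.asarray(p, float)
    a = np.asarray(a, float)
    e1 = np.asarray(e1, float)
    e2 = np.asarray(e2, float)
    d = p - a
    s = np.clip(d @ e1, 0.0, len1)     # clamped coordinate along e1
    t = np.clip(d @ e2, 0.0, len2)     # clamped coordinate along e2
    closest = a + s * e1 + t * e2      # nearest point on the rectangle
    diff = p - closest
    return float(diff @ diff)
```

For a 4-by-2 rectangle at the origin with edges along x and y, a point above the interior gives the point-to-plane case, a point past an edge gives the point-to-line case, and a point past a corner gives the point-to-vertex case.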
For example, the likelihood value of the point is obtained by substituting the squared distance d into the likelihood function l(d), i.e., the following equation (17):

[Equation (17): image placeholder in the original; it gives l(d) in terms of the squared distance d and the oblateness value of the point cloud segment.]

(17)

The likelihood function values of all the points are accumulated to obtain the final likelihood value, as shown in the following formula (18):
L(ξ) = Σ_{p ∈ Pk+1} l(D(T(ξ)p)) (18)
where Pk+1 is the second frame point cloud data and Rk denotes the rectangle and oblateness information extracted from the first frame point cloud data. Finally, the pose transformation matrix input to the likelihood function when the likelihood function value is at its maximum is acquired as the target pose transformation matrix.
In mathematical statistics, a likelihood function is a function of the parameters of a statistical model. Likelihood functions play a significant role in statistical inference: in everyday usage, "likelihood" is close in meaning to "probability", referring to the chance of an event occurring, but in statistics the two are clearly distinguished. Probability is used to predict the results of subsequent observations given some parameters, while likelihood is used to estimate parameters given some observed results. In short, in this embodiment the likelihood function is used to determine the optimal pose transformation matrix for transforming between the first frame and second frame point cloud data: the larger the likelihood function value, the higher the probability that the transformation points obtained by applying the pose transformation matrix to points in the second frame point cloud data are correct, and vice versa. Therefore, the input pose transformation matrix at which the likelihood function value is maximal is taken as the target pose transformation matrix to achieve accurate registration between the two frames of point cloud data.
To obtain the pose transformation matrix input when the likelihood function value is maximal, this embodiment provides an analytical expression for the first derivative of the likelihood function and converts the problem of maximizing the likelihood function into an unconstrained minimization problem, as shown in equation (19):

ξ* = argmin_ξ (−L(ξ)) (19)
By the chain rule of differentiation, the first derivative of the likelihood function can be solved as equation (20):

∂l/∂ξ = (∂l/∂D)·(∂D/∂p')·(∂p'/∂ξ) (20)
More specifically, the derivatives of each sub-function in the composite function — ∂l/∂D, ∂D/∂p' and ∂p'/∂ξ — are derived in the original as a sequence of equation images that are not reproduced here.
Finally, the first derivative of the total likelihood function is the sum of the likelihood function derivatives over all points, as shown in equation (21):

∂L/∂ξ = Σ_i ∂l(D(p'_i))/∂ξ (21)
Combining the above formulas yields analytical expressions for the likelihood function and its first derivative, which can be shown to be continuously differentiable. Therefore, a common mathematical optimization method can be used: solving the minimum of −L(ξ) is equivalent to solving the maximum of the likelihood function L(ξ). In one implementation, to obtain a faster convergence rate, the second derivative can further be solved and the minimum found by Newton's method. Although the second derivative is continuously differentiable only within each subspace and jumps at the subspace interfaces, the probability of a point falling exactly on an interface is negligible owing to the discreteness of the point cloud, so this does not affect the optimization. For solving by Newton's method, this embodiment also gives the Hessian matrix required by the second derivative, as shown in equation (22):

[Equation (22): image placeholder in the original; it gives the Hessian matrix of the likelihood function.]

(22)
Fig. 8 shows two partially overlapping point cloud frames of a selected city street scene, and fig. 9 shows the result obtained by the method provided by the present invention, from which it can be seen that the two frames are effectively registered. In addition, fig. 10 shows the two partially overlapping frames after outliers and noise have been added, and fig. 11 shows the registration of those frames by the method provided by the present invention; successful registration is still achieved, so the method is also highly robust to noise in point cloud registration.
Based on the above embodiments, the present invention further provides an intelligent terminal, and a schematic block diagram thereof may be as shown in fig. 12. The intelligent terminal comprises a processor, a memory, a network interface and a display screen which are connected through a system bus. Wherein, the processor of the intelligent terminal is used for providing calculation and control capability. The memory of the intelligent terminal comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the intelligent terminal is used for being connected and communicated with an external terminal through a network. The computer program, when executed by a processor, implements a method of point cloud registration using rectangle and ellipticity information. The display screen of the intelligent terminal can be a liquid crystal display screen or an electronic ink display screen.
It will be understood by those skilled in the art that the block diagram of fig. 12 is only a block diagram of a part of the structure related to the solution of the present invention, and does not constitute a limitation to the intelligent terminal to which the solution of the present invention is applied, and a specific intelligent terminal may include more or less components than those shown in the figure, or combine some components, or have different arrangements of components.
In one implementation, one or more programs are stored in the memory of the intelligent terminal and configured to be executed by one or more processors, the one or more programs including instructions for performing the point cloud registration method using rectangle and oblateness information.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware, and the program can be stored in a non-volatile computer-readable storage medium; when executed, it can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
In summary, the present invention provides a point cloud registration method using rectangle and oblateness information and a storage medium. The method includes: acquiring first frame point cloud data and clustering it to obtain point cloud segments; acquiring the oblateness value of each point cloud segment; performing rectangle fitting on each point cloud segment to obtain a rectangular structure and acquiring the four vertex coordinates of the rectangular structure; and acquiring second frame point cloud data and registering the first frame and second frame point cloud data according to the four vertex coordinates of the rectangular structures and the oblateness values of the point cloud segments. Because the method segments the first frame point cloud data with a clustering method, the segments can be grown in a definite order, and the oblateness information of the segmented point cloud segments is then acquired to match the two frames of point cloud data, so that the structure of each point cloud segment is accurately expressed by its oblateness information, improving the stability and reliability of point cloud registration under scene changes.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (8)

1. A point cloud registration method using rectangle and oblateness information, the method comprising:
acquiring first frame point cloud data, and clustering the first frame point cloud data to obtain point cloud segments;
acquiring the oblateness value of the point cloud segment;
performing rectangle fitting on the point cloud segment to obtain a rectangular structure, and acquiring coordinate data of four vertexes of the rectangular structure;
acquiring second frame point cloud data, and registering the first frame point cloud data and the second frame point cloud data according to coordinate data of four vertexes of the rectangular structure and the oblateness value of the point cloud segment;
the method comprises the following steps of obtaining first frame point cloud data, clustering the first frame point cloud data to obtain point cloud segments:
acquiring first frame point cloud data, randomly selecting a test point in the first frame point cloud data, and taking the test point as a sphere center and a preset radius as a sphere to obtain a test sphere;
obtaining points in the test sphere to obtain neighborhood points, and taking the neighborhood points as the sphere center and the preset radius as the sphere to obtain a neighborhood sphere;
acquiring the number of points in the neighborhood sphere, the oblateness value of a point set in the neighborhood sphere, and the projection distance of the distance between the neighborhood point and the test point on the normal vector of the plane corresponding to the first frame of point cloud data, and clustering the first frame of point cloud data according to the number of the points, the oblateness value of the point set and the projection distance to obtain a point cloud segment;
the acquiring of the second frame of point cloud data, and the registering of the first frame of point cloud data and the second frame of point cloud data according to the coordinate data of the four vertexes of the rectangular structure and the oblateness value of the point cloud segment comprises:
acquiring second frame point cloud data and six-degree-of-freedom variation in an iterative process;
obtaining coordinate data of a transformation point under the coordinate of the first frame of point cloud data generated based on a point under the coordinate system of the second frame of point cloud data according to the six-degree-of-freedom variation, and determining a point cloud segment corresponding to the transformation point;
obtaining a target pose transformation matrix according to the oblateness value of the point cloud segment, the coordinate data of four vertexes of the rectangular structure corresponding to the point cloud segment and the coordinate data of the transformation point;
and registering the second frame of point cloud data and the first frame of point cloud data according to the target pose transformation matrix.
2. The method of claim 1, wherein the obtaining of the number of points in the neighborhood sphere, the oblateness value of the point set in the neighborhood sphere, and the projection distance of the distance between the neighborhood point and the test point on the normal vector of the plane corresponding to the first frame of point cloud data, and the clustering of the first frame point cloud data according to the number of points, the oblateness value of the point set, and the projection distance to obtain the point cloud segment comprises:
acquiring the number of points and the coordinates of the points in the neighborhood sphere, and acquiring the oblateness value of the point set in the neighborhood sphere according to the number of the points and the coordinates of the points in the neighborhood sphere;
acquiring the projection distance of the distance between the neighborhood point and the test point on the normal vector of the plane corresponding to the first frame point cloud data;
respectively comparing the number of points in the neighborhood sphere, the projection distance, and the oblateness value of the point set in the neighborhood sphere with a point number threshold, a projection distance threshold, and an oblateness value threshold of the point set;
and when the number of points in the neighborhood sphere, the projection distance, and the oblateness value of the point set in the neighborhood sphere respectively satisfy the point number threshold, the projection distance threshold, and the oblateness value threshold of the point set, clustering the neighborhood points and the test points to obtain point cloud segments.
3. The method of claim 2, wherein the obtaining of the number of points and the coordinates of the points in the neighborhood sphere, and obtaining the oblateness value of the point set in the neighborhood sphere according to the number of points and the coordinates of the points in the neighborhood sphere comprises:
acquiring the number of points and the coordinates of the points in the neighborhood sphere, and acquiring the arithmetic mean coordinates in the neighborhood sphere according to the number of the points and the coordinates of the points in the neighborhood sphere;
obtaining a first scattering matrix corresponding to the neighborhood sphere according to the number of the points in the neighborhood sphere, the coordinates of the points and the arithmetic mean coordinate;
acquiring a first eigenvalue and a third eigenvalue of the first scatter matrix, and obtaining the oblateness value of the point set in the neighborhood sphere according to the first eigenvalue and the third eigenvalue of the first scatter matrix;
the flatness value is calculated as follows: it is assumed that all points within a point cloud region fall within an ellipsoid, i.e. every point p (x, y, z) within the region satisfies the following formula (1):
Figure 665962DEST_PATH_IMAGE001
(1)
where a, b, c are the lengths of the three semi-axes of the ellipsoid; expressing formula (1) in matrix form yields the following formula (3), and formulas (4) and (5) can be obtained from formula (3):
[Equations (3), (4) and (5): image placeholders in the original; they express formula (1) in matrix form and derive the scatter matrix Σ of the point set.]
where Σ is the scatter matrix of the point cloud segment, m represents the number of points, and μ represents the arithmetic mean coordinate of all the points; acquiring the scatter matrix of the point set in the neighborhood sphere, and obtaining the oblateness value of the region according to the first eigenvalue and the third eigenvalue of the scatter matrix, namely obtaining the oblateness value according to formula (6); formula (6) is as follows:
[Equation (6): image placeholder in the original; it gives the oblateness value as a function of λL and λS.]

(6)
where λL and λS are the maximum and minimum eigenvalues of the scatter matrix, i.e. its first and third eigenvalues, respectively.
4. The method of claim 1, wherein the obtaining of the oblateness value of the point cloud segment comprises:
acquiring the number of points and the coordinates of the points in the point cloud segment, and acquiring arithmetic mean coordinates according to the number of the points and the coordinates of the points in the point cloud segment;
obtaining a second scatter matrix corresponding to the point cloud segment according to the number of the points in the point cloud segment, the coordinates of the points and the arithmetic mean value coordinate;
and acquiring a first eigenvalue and a third eigenvalue of the second scatter matrix, and obtaining the oblateness value of the point cloud segment according to the first eigenvalue and the third eigenvalue of the second scatter matrix.
5. The point cloud registration method using rectangle and oblateness information according to claim 4, wherein the performing rectangle fitting on the point cloud segment to obtain a rectangular structure, and obtaining the coordinate data of the four vertices of the rectangular structure comprises:
performing rectangle fitting on the point cloud segment to obtain a rectangular structure;
acquiring a second eigenvalue of the second scatter matrix;
obtaining coordinate data of four vertexes of the rectangular structure according to the coordinates of the points in the point cloud segment, the arithmetic mean coordinates, the first characteristic value and the second characteristic value;
the coordinate data of the four vertices of the rectangular structure is calculated as follows: assume the four vertices are a, b, c and d respectively; the arithmetic mean coordinate of the point cloud segment is μ; m is a point on the line connecting a and b, e is a point on the line connecting b and c, n is a point on the line connecting c and d, and f is a point on the line connecting a and d, where the line mn and the line ef are perpendicular to each other and both pass through μ; the coordinates of a, b, c, d are obtained by the following equations (7) to (14):
[Equations (7)–(14): image placeholders in the original; they give the coordinates of m, e, n, f and the four vertices a, b, c, d in terms of the points pi, the mean coordinate μ, and the eigenvectors v1, v2.]
where pi represents the ith point, and v1 and v2 are the eigenvectors corresponding to the first eigenvalue and the second eigenvalue of the scatter matrix Σ, respectively.
6. The method of claim 1, wherein obtaining coordinate data of transformation points in coordinates of the first frame of point cloud data generated based on points in a coordinate system of the second frame of point cloud data according to the six-degree-of-freedom variation and determining point cloud segments corresponding to the transformation points comprises:
obtaining a pose transformation matrix according to the six-degree-of-freedom variation;
performing rigid-body transformation on the points in the coordinate system of the second frame of point cloud data according to the pose transformation matrix to obtain transformation points in the coordinate system of the first frame of point cloud data;
and acquiring coordinate data of the transformation points, and searching, according to the coordinate data of each transformation point, for the point cloud segment with the minimum distance to that transformation point, to obtain the point cloud segment corresponding to the transformation point.
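A minimal sketch of the rigid-body transformation and nearest-segment search of claim 6, assuming a homogeneous 4x4 pose matrix and representing each point cloud segment by its centroid for the distance search; both function names and the centroid simplification are assumptions for illustration, not the patent's method:

```python
import numpy as np

def transform_points(pose, points):
    """Apply a 4x4 rigid-body pose transformation matrix to points given
    in the second frame's coordinate system, yielding transformation
    points in the first frame's coordinate system."""
    points = np.asarray(points, dtype=float)
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ pose.T)[:, :3]

def nearest_segment(point, segment_centroids):
    """Return the index of the point cloud segment whose centroid is at
    minimum distance from the transformation point (the centroid lookup
    stands in for whatever segment search the patent uses)."""
    d = np.linalg.norm(np.asarray(segment_centroids) - np.asarray(point), axis=1)
    return int(np.argmin(d))
```

In practice the nearest-segment search over many segments would use a spatial index (e.g. a k-d tree) rather than a linear scan.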
7. The point cloud registration method using rectangle and oblateness information according to claim 1, wherein the obtaining a target pose transformation matrix according to the oblateness values of the point cloud segments, the coordinate data of the four vertices of the rectangular structures corresponding to the point cloud segments, and the coordinate data of the transformation points comprises:
obtaining a squared distance value according to the coordinate data of the transformation point, the coordinate data of the four vertices of the rectangular structure corresponding to the point cloud segment, and a squared-distance function;
obtaining a likelihood function value of a likelihood function generated based on the pose transformation matrix according to the oblateness value and the squared distance value of the point cloud segment;
and acquiring the pose transformation matrix input into the likelihood function when the likelihood function value reaches its maximum, to obtain the target pose transformation matrix.
8. A storage medium having stored thereon instructions adapted to be loaded and executed by a processor to perform the steps of the point cloud registration method using rectangle and oblateness information according to any one of claims 1 to 7.
CN202011146501.2A 2020-10-23 2020-10-23 Point cloud registration method and storage medium by utilizing rectangle and oblateness information Active CN112308889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011146501.2A CN112308889B (en) 2020-10-23 2020-10-23 Point cloud registration method and storage medium by utilizing rectangle and oblateness information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011146501.2A CN112308889B (en) 2020-10-23 2020-10-23 Point cloud registration method and storage medium by utilizing rectangle and oblateness information

Publications (2)

Publication Number Publication Date
CN112308889A CN112308889A (en) 2021-02-02
CN112308889B (en) 2021-08-31

Family

ID=74327393

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011146501.2A Active CN112308889B (en) 2020-10-23 2020-10-23 Point cloud registration method and storage medium by utilizing rectangle and oblateness information

Country Status (1)

Country Link
CN (1) CN112308889B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581481B (en) * 2022-03-07 2023-08-25 广州小鹏自动驾驶科技有限公司 Target speed estimation method and device, vehicle and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109493407A (en) * 2018-11-19 2019-03-19 腾讯科技(深圳)有限公司 Realize the method, apparatus and computer equipment of laser point cloud denseization
WO2020055272A1 (en) * 2018-09-12 2020-03-19 Auckland Uniservices Limited Methods and systems for ocular imaging, diagnosis and prognosis
CN111524174A (en) * 2020-04-16 2020-08-11 上海航天控制技术研究所 Binocular vision three-dimensional construction method for moving target of moving platform
CN111539432A (en) * 2020-03-11 2020-08-14 中南大学 Method for extracting urban road by using multi-source data to assist remote sensing image

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9892506B2 (en) * 2015-05-28 2018-02-13 The Florida International University Board Of Trustees Systems and methods for shape analysis using landmark-driven quasiconformal mapping
US10665035B1 (en) * 2017-07-11 2020-05-26 B+T Group Holdings, LLC System and process of using photogrammetry for digital as-built site surveys and asset tracking
CN109116321B (en) * 2018-07-16 2019-09-24 中国科学院国家空间科学中心 A kind of phase filtering method and height measurement method of spaceborne interference imaging altimeter

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020055272A1 (en) * 2018-09-12 2020-03-19 Auckland Uniservices Limited Methods and systems for ocular imaging, diagnosis and prognosis
CN109493407A (en) * 2018-11-19 2019-03-19 腾讯科技(深圳)有限公司 Realize the method, apparatus and computer equipment of laser point cloud denseization
CN111539432A (en) * 2020-03-11 2020-08-14 中南大学 Method for extracting urban road by using multi-source data to assist remote sensing image
CN111524174A (en) * 2020-04-16 2020-08-11 上海航天控制技术研究所 Binocular vision three-dimensional construction method for moving target of moving platform

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Evaluating point cloud accuracy of static three-dimensional laser scanning based on point cloud error ellipsoid model; Xijiang Chen et al.; Journal of Applied Remote Sensing; 2015-11-05; vol. 9, no. 1; pp. 095991-1 to 095991-14 *
Robust Multisource Remote Sensing Image Registration Method Based on Scene Shape Similarity; Hao Ming et al.; Photogrammetric Engineering & Remote Sensing; 2019-10-31; vol. 85, no. 10; pp. 725-736 *
Progressive registration method for terrestrial laser scanning point clouds integrating ICP and NDT; Fu Yiran; China Masters' Theses Full-text Database, Information Science and Technology; 2018-09-15; pp. I135-71 *

Also Published As

Publication number Publication date
CN112308889A (en) 2021-02-02

Similar Documents

Publication Publication Date Title
US20210082181A1 (en) Method and apparatus for object detection, intelligent driving method and device, and storage medium
CN107230225B (en) Method and apparatus for three-dimensional reconstruction
US8199977B2 (en) System and method for extraction of features from a 3-D point cloud
US10872227B2 (en) Automatic object recognition method and system thereof, shopping device and storage medium
WO2018098891A1 (en) Stereo matching method and system
CN110084299B (en) Target detection method and device based on multi-head fusion attention
CN112651490A (en) Training method and device for face key point detection model and readable storage medium
CN115546519B (en) Matching method of image and millimeter wave radar target for extracting pseudo-image features
CN115457492A (en) Target detection method and device, computer equipment and storage medium
CN112308889B (en) Point cloud registration method and storage medium by utilizing rectangle and oblateness information
CN114332125A (en) Point cloud reconstruction method and device, electronic equipment and storage medium
CN115375836A (en) Point cloud fusion three-dimensional reconstruction method and system based on multivariate confidence filtering
CN113436223B (en) Point cloud data segmentation method and device, computer equipment and storage medium
Lopez-Rubio et al. A fast robust geometric fitting method for parabolic curves
CN111709269B (en) Human hand segmentation method and device based on two-dimensional joint information in depth image
CN114972492A (en) Position and pose determination method and device based on aerial view and computer storage medium
CN115239899B (en) Pose map generation method, high-precision map generation method and device
WO2023164933A1 (en) Building modeling method and related apparatus
López-Rubio et al. Robust fitting of ellipsoids by separating interior and exterior points during optimization
CN114926536A (en) Semantic-based positioning and mapping method and system and intelligent robot
CN114998743A (en) Method, device, equipment and medium for constructing visual map points
CN113544744A (en) Head posture measuring method and device
CN112150546A (en) Monocular vision pose estimation method based on auxiliary point geometric constraint
Shi et al. 3D Vehicle Detection Algorithm Based on Multimodal Decision-Level Fusion.
CN117409209B (en) Multi-task perception three-dimensional scene graph element segmentation and relationship reasoning method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant