CN107507127B - Global matching method and system for multi-viewpoint three-dimensional point cloud - Google Patents
- Publication number: CN107507127B
- Application number: CN201710660439.0A
- Authority: CN (China)
- Prior art keywords: point cloud; to be matched; point; reference point; coordinate system
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/14—Transformations for image registration, e.g. adjusting or mapping for alignment of images
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
Abstract
The invention relates to a global matching method and system for multi-viewpoint three-dimensional point clouds, comprising the following steps: acquiring a multi-viewpoint three-dimensional structured point cloud of a measured object; resampling the three-dimensional structured point cloud of each viewpoint through a sampling grid to obtain resampled single-viewpoint depth data; obtaining a closest-point pair list and a normal vector list of the point cloud to be matched by using the reference point cloud and the point cloud to be matched from the resampled single-viewpoint depth data; and obtaining a rigid-body transformation matrix by using the reference point cloud, the closest-point pair list and the normal vector list. In this global matching method, the sampling grid partitions the resampled single-viewpoint depth data, which accelerates the search for the closest point of each point of the point cloud to be matched and in turn improves the global matching speed, so that efficient global matching is achieved even for large-scale multi-viewpoint point clouds.
Description
Technical Field
The invention relates to the technical field of three-dimensional imaging and modeling, and in particular to a global matching method and system for multi-viewpoint three-dimensional point clouds.
Background
In three-dimensional reconstruction based on visual methods, the limited field of view of the sensor and the self-occlusion of the measured object mean that complete three-dimensional depth information can only be obtained by imaging the object from multiple viewpoints; multi-viewpoint matching is therefore an unavoidable key step in three-dimensional reconstruction.
Typically, an initial value for global matching is obtained using techniques such as mechanical control devices and camera calibration, and fine matching is then performed with the Iterative Closest Point (ICP) algorithm. The ICP algorithm usually searches for corresponding points between the target point cloud and the source point cloud with a data structure such as a k-d tree; when the point clouds are large and contain many three-dimensional points, the efficiency of this correspondence search determines the speed of the whole global matching.
Disclosure of Invention
Based on this, it is necessary to provide an efficient global matching method and system for multi-viewpoint three-dimensional point clouds that improves the global matching speed.
A global matching method for multi-viewpoint three-dimensional point clouds comprises the following steps:
acquiring a multi-viewpoint three-dimensional structured point cloud of a measured object;
resampling the three-dimensional structured point cloud of each viewpoint through a sampling grid to obtain resampled single-viewpoint depth data;
and obtaining a closest-point pair list and a normal vector list of the point cloud to be matched by using the reference point cloud and the point cloud to be matched from the resampled single-viewpoint depth data, and obtaining a rigid-body transformation matrix by using the reference point cloud, the closest-point pair list and the normal vector list.
In one embodiment, resampling the three-dimensional structured point cloud of each viewpoint through a sampling grid to obtain the resampled single-viewpoint depth data comprises:
dividing the xy plane of the three-dimensional structured point cloud at equal intervals, with the camera coordinate system as the reference coordinate system, to form a uniform sampling grid, taking the minimum and maximum x and y coordinates of the three-dimensional structured point cloud as the sampling range;
obtaining the z coordinate at each vertex of the sampling grid from the three-dimensional structured point cloud;
and obtaining the resampled single-viewpoint depth data from the sampling result.
In one embodiment, the step of obtaining the z coordinate at each vertex of the sampling grid from the three-dimensional structured point cloud specifically comprises:
determining the valid sampling grid positions of the sampling grid from four adjacent valid points in the three-dimensional structured point cloud;
and calculating the z coordinate at each valid sampling grid vertex by bilinear interpolation.
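The resampling step above can be sketched as follows. This is a minimal Python illustration, not the patent's implementation: it scatters each structured point onto the four surrounding grid vertices with bilinear weights and averages the result, which approximates interpolating a z value at each vertex from adjacent valid points. All function and variable names are invented for the example.

```python
import numpy as np

def resample_grid_z(points, step):
    """Resample a structured point cloud onto a uniform xy sampling grid.

    points: (N, 3) array of valid 3D points in camera coordinates.
    step:   sampling interval controlling the sampling density.
    Returns (grid_z, valid, x0, y0): a matrix of z values at grid
    vertices, a validity mask, and the grid origin.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    x0, y0 = x.min(), y.min()                      # sampling range lower bound
    nx = int(np.floor((x.max() - x0) / step)) + 1
    ny = int(np.floor((y.max() - y0) / step)) + 1
    grid_z = np.zeros((ny, nx))
    weight = np.zeros((ny, nx))
    # Fractional grid coordinates of every point.
    gx = (x - x0) / step
    gy = (y - y0) / step
    i0, j0 = np.floor(gy).astype(int), np.floor(gx).astype(int)
    fy, fx = gy - i0, gx - j0
    # Scatter each point onto its four surrounding vertices with
    # bilinear weights; vertices touched by no point stay invalid.
    for di, dj, w in ((0, 0, (1 - fy) * (1 - fx)), (0, 1, (1 - fy) * fx),
                      (1, 0, fy * (1 - fx)), (1, 1, fy * fx)):
        i = np.clip(i0 + di, 0, ny - 1)
        j = np.clip(j0 + dj, 0, nx - 1)
        np.add.at(grid_z, (i, j), w * z)
        np.add.at(weight, (i, j), w)
    valid = weight > 1e-12
    grid_z[valid] /= weight[valid]
    return grid_z, valid, x0, y0
```

Because the grid is a simple matrix, each element directly encodes one sampling position, which is what later makes the closest-point lookup cheap.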
In one embodiment, the step of obtaining a closest-point pair list and a normal vector list of the point cloud to be matched by using the reference point cloud and the point cloud to be matched from the resampled single-viewpoint depth data comprises:
acquiring a first reference point cloud and a first point cloud to be matched from the resampled single-viewpoint depth data;
transforming the first point cloud to be matched into the local coordinate system of the first reference point cloud by an initial transformation;
obtaining, from the position of each point of the first point cloud to be matched in the sampling grid of the first reference point cloud, the closest-point position of that point in the first reference point cloud by bilinear interpolation, the closest-point position including a z coordinate;
transforming the first reference point cloud and the closest-point positions of the first point cloud to be matched into the global coordinate system;
obtaining a point pair list and a normal vector list of the closest-point positions of the first reference point cloud and the first point cloud to be matched from the transformation result;
and obtaining a first rigid-body transformation matrix by the least-squares method from the point pair list and the normal vector list.
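The patent does not spell out its exact least-squares formulation, but the combination of a point-pair list and a normal-vector list suggests a point-to-plane objective. The sketch below, with illustrative names, shows one common small-angle linearization of that objective; it is an assumption about the solver, not the patent's stated method.

```python
import numpy as np

def point_to_plane_transform(src, dst, normals):
    """Solve for a small rigid motion minimizing point-to-plane error.

    src:     (N, 3) points of the cloud to be matched.
    dst:     (N, 3) closest points in the reference cloud.
    normals: (N, 3) unit normals at dst.
    Returns a 4x4 rigid transformation matrix.
    """
    # Linearized residual: ((I + [w]x) s + t - d) . n
    #   =>  (s x n) . w  +  n . t  =  (d - s) . n
    A = np.hstack([np.cross(src, normals), normals])   # (N, 6)
    b = np.einsum('ij,ij->i', dst - src, normals)      # (N,)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    wx, wy, wz, tx, ty, tz = x
    # Re-orthogonalize the small-angle rotation via polar decomposition.
    W = np.array([[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0]])
    U, _, Vt = np.linalg.svd(np.eye(3) + W)
    R = U @ Vt
    if np.linalg.det(R) < 0:
        U[:, -1] *= -1
        R = U @ Vt
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = [tx, ty, tz]
    return T
```

Each correspondence contributes one scalar equation, so six well-spread pairs already determine the transform; in practice the full lists are stacked and solved in one least-squares pass.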
In one embodiment, obtaining the closest-point pair list and the normal vector list of the point cloud to be matched by using the reference point cloud and the point cloud to be matched from the resampled single-viewpoint depth data further comprises:
taking any one of the resampled single-viewpoint depth data as a second point cloud to be matched, and taking each of the others as a second reference point cloud;
transforming the second point cloud to be matched into the local coordinate system of each second reference point cloud by the initial transformation;
obtaining, from the position of each point of the second point cloud to be matched in the sampling grid of each second reference point cloud, the closest-point position of that point in the second reference point cloud by bilinear interpolation, the closest-point position including a z coordinate;
transforming the closest-point positions of each second reference point cloud and the second point cloud to be matched into the global coordinate system;
obtaining a point pair list and a normal vector list of the closest-point positions of each second reference point cloud and the second point cloud to be matched from the transformation result;
and obtaining a second rigid-body transformation matrix by the least-squares method from the point pair list and the normal vector list.
In one embodiment, after the step of obtaining the second rigid-body transformation matrix by the least-squares method from the point pair list and the normal vector list, the method further includes judging whether a preset matching condition is met; if not, the method returns to the step of taking any one of the resampled single-viewpoint depth data as the second point cloud to be matched and each of the others as a second reference point cloud, and repeats until the preset matching condition is met.
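The iteration implied by this embodiment, in which every cloud is re-matched in turn until a preset condition holds, might be organized as in the sketch below. Here `find_pairs` and `solve` are hypothetical callbacks standing in for the closest-point search and the least-squares solve, and the convergence test is only one plausible reading of "preset matching condition"; none of these names come from the patent.

```python
import numpy as np

def global_match(clouds, transforms, find_pairs, solve,
                 max_iters=50, tol=1e-6):
    """Round-robin refinement of cloud-to-global transforms.

    clouds:     list of resampled single-viewpoint depth data.
    transforms: list of initial 4x4 cloud-to-global transforms,
                updated in place and returned.
    find_pairs: callback giving correspondences for cloud k against
                all other clouds (closest-point lists).
    solve:      callback returning an incremental 4x4 rigid transform
                from those correspondences.
    """
    for _ in range(max_iters):
        max_update = 0.0
        for k in range(len(clouds)):
            pairs = find_pairs(k, clouds, transforms)  # cloud k to match
            T = solve(pairs)                           # incremental fix
            transforms[k] = T @ transforms[k]
            max_update = max(max_update, np.linalg.norm(T - np.eye(4)))
        if max_update < tol:        # one choice of "preset condition"
            break
    return transforms
```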
In another aspect, the invention also provides a global matching system for multi-viewpoint three-dimensional point clouds, comprising:
the three-dimensional structured point cloud acquisition module, used for acquiring the multi-viewpoint three-dimensional structured point cloud of the measured object;
the resampled single-viewpoint depth data acquisition module, used for resampling the three-dimensional structured point cloud of each viewpoint through a sampling grid to obtain the resampled single-viewpoint depth data;
and the rigid-body transformation matrix acquisition module, used for obtaining a closest-point pair list and a normal vector list of the point cloud to be matched by using the reference point cloud and the point cloud to be matched from the resampled single-viewpoint depth data, and for obtaining a rigid-body transformation matrix by using the reference point cloud, the closest-point pair list and the normal vector list.
In one embodiment, the rigid body transformation matrix obtaining module includes:
the first point cloud to be matched acquisition module is used for acquiring a first reference point cloud and a first point cloud to be matched of the resampled single viewpoint depth data;
the first initial transformation module, used for transforming the first point cloud to be matched into the local coordinate system of the first reference point cloud by an initial transformation;
the first point cloud closest point acquisition module is used for obtaining the closest point position of each point in the first point cloud to be matched in the first reference point cloud by utilizing bilinear interpolation according to the first reference point cloud and the position of each point in the first point cloud to be matched in the sampling grid of the first reference point cloud, wherein the closest point position comprises a z coordinate;
the first reference point cloud transformation module is used for transforming the positions of the closest points of the first reference point cloud and the first point cloud to be matched into a global coordinate system;
the first reference point cloud list acquisition module is used for acquiring a point pair list and a normal vector list of the closest point positions of the first reference point cloud and the first point cloud to be matched according to the transformation result;
and the first rigid body transformation matrix acquisition module is used for acquiring a first rigid body transformation matrix by using a least square method according to the point pair list and the normal vector list of the closest point positions of the first reference point cloud and the first point cloud to be matched.
In one embodiment, the rigid body transformation matrix obtaining module further includes:
the second point cloud to be matched acquisition module is used for taking any one single viewpoint depth data of the resampled single viewpoint depth data as a second point cloud to be matched, and taking any other one single viewpoint depth data of the resampled single viewpoint depth data as a second reference point cloud;
the second initial transformation module, used for transforming the second point cloud to be matched into the local coordinate system of each second reference point cloud by the initial transformation;
the second point cloud closest point acquisition module is used for obtaining the closest point position of each point in the second point cloud to be matched in each second reference point cloud by utilizing bilinear interpolation according to the position of each second reference point cloud and each point in the second point cloud to be matched in the sampling grid of the second reference point cloud, and the closest point position comprises a z coordinate;
the second reference point cloud conversion module is used for converting the closest point position of each second reference point cloud and the second point cloud to be matched into a global coordinate system;
the second reference point cloud list acquisition module is used for acquiring a point pair list and a normal vector list of the closest point position of each second reference point cloud and the second point cloud to be matched according to the transformation result;
and the second rigid body transformation matrix acquisition module is used for acquiring a second rigid body transformation matrix by using a least square method according to the point pair list and the normal vector list.
In one embodiment, the resampled single-viewpoint depth data acquisition module includes:
the sampling grid dividing module, used for dividing the xy plane of the three-dimensional structured point cloud at equal intervals, with the camera coordinate system as the reference coordinate system, to form a uniform sampling grid, taking the minimum and maximum x and y coordinates of the three-dimensional structured point cloud as the sampling range;
the sampling grid vertex z coordinate acquisition module is used for acquiring a z coordinate of the sampling grid vertex according to the three-dimensional structured point cloud;
and the sampling result acquisition module is used for acquiring the resampling single viewpoint depth data according to the sampling result.
According to the global matching method for multi-viewpoint three-dimensional point clouds, the three-dimensional structured point cloud is resampled through the sampling grid to obtain the resampled single-viewpoint depth data; the closest-point pair list and the normal vector list of the point cloud to be matched are obtained by using the reference point cloud and the point cloud to be matched from the resampled single-viewpoint depth data, and the rigid-body transformation matrix is obtained by using the reference point cloud, the closest-point pair list and the normal vector list. Because the sampling grid partitions the resampled single-viewpoint depth data, the closest-point search for the point cloud to be matched is accelerated, which in turn improves the global matching speed; efficient global matching can thus be achieved even for large-scale multi-viewpoint point clouds.
Drawings
FIG. 1 is a flowchart of a global matching method for a multi-view three-dimensional point cloud in an embodiment;
FIG. 2 is a flowchart of a global matching method for a multi-view three-dimensional point cloud in yet another embodiment;
FIG. 3 is a flowchart of a global matching method for a multi-view three-dimensional point cloud in another embodiment;
FIG. 4 is a flowchart of a global matching method for multi-view three-dimensional point clouds in an embodiment.
Detailed Description
To facilitate an understanding of the invention, the invention will now be described more fully with reference to the accompanying drawings. Preferred embodiments of the present invention are shown in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Referring to fig. 1, fig. 1 is a flowchart of a global matching method for a multi-view three-dimensional point cloud in an embodiment.
In this embodiment, the global matching method for a multi-view three-dimensional point cloud includes:
s100, acquiring multi-viewpoint three-dimensional structured point cloud of the measured object.
A multi-viewpoint point cloud acquired by a visual method retains the original structure of a three-dimensional depth image and is therefore called a three-dimensional structured point cloud. Its distribution is similar to the arrangement of image pixels: each pixel position corresponds to a three-dimensional vertex. The point cloud of a single viewpoint acquired by a visual method is referred to as single-viewpoint depth data, which by default lies in the camera coordinate system. In one embodiment, each pixel position also carries a flag bit specifying whether the three-dimensional vertex at that position is valid.
And S200, resampling the three-dimensional structured point cloud of each viewpoint through a sampling grid to obtain resampling single viewpoint depth data.
In the camera coordinate system, the three-dimensional structured point cloud is resampled through a sampling grid to obtain the resampled single-viewpoint depth data.
S300, obtaining a closest point pair list and a normal vector list of the point cloud to be matched by using the reference point cloud of the resampled single-viewpoint depth data and the point cloud to be matched, and obtaining a rigid body transformation matrix by using the reference point cloud, the closest point pair list and the normal vector list.
Using the resampled single-viewpoint depth data obtained in step S200, the reference point cloud and the point cloud to be matched are selected; the closest point of each point of the point cloud to be matched is found in the reference point cloud through interpolation, and the z coordinate of the closest point is calculated, thereby yielding the closest-point pair list and the normal vector list of the point cloud to be matched. A rigid-body transformation matrix is then obtained from the reference point cloud, the closest-point pair list and the normal vector list by a suitable algorithm.
This global matching method acquires the three-dimensional structured point cloud of the measured object, resamples it with the sampling grid to obtain the resampled single-viewpoint depth data, obtains the closest-point pair list and normal vector list of the point cloud to be matched by using the reference point cloud and the point cloud to be matched, and obtains the rigid-body transformation matrix from the reference point cloud, the closest-point pair list and the normal vector list. Because the resampled single-viewpoint depth data are partitioned by the sampling grid, the closest-point search for the point cloud to be matched is accelerated, which in turn improves the global matching speed; efficient global matching can thus be achieved even for large-scale multi-viewpoint point clouds.
Referring to fig. 2, fig. 2 is a flowchart of a global matching method for a multi-view three-dimensional point cloud in yet another embodiment.
In this embodiment, the global matching method for a multi-view three-dimensional point cloud includes:
s101, acquiring a multi-viewpoint three-dimensional structured point cloud of a measured object.
And S102, dividing the xy plane of the three-dimensional structured point cloud at equal intervals by using a camera coordinate system as a reference coordinate system to form a uniform sampling grid.
Using the camera coordinate system as the reference coordinate system, the xy plane of the three-dimensional structured point cloud is divided at equal intervals to form a uniform sampling grid on the xy plane, with the minimum and maximum x and y coordinates of the three-dimensional structured point cloud defining the sampling range. The sampling interval controls the sampling density. In one embodiment, because the sampling grid has a simple structure, the sampling results can be stored as a matrix, with each matrix element representing one sampling position.
S103, determining the effective sampling grid position of the sampling grid according to four adjacent effective points in the three-dimensional structured point cloud.
Not all sampling positions in the sampling grid are valid, and valid sampling grid positions of the sampling grid can be determined according to four adjacent valid points in the three-dimensional structured point cloud.
And S104, calculating the z coordinate at the vertex of the sampling grid by utilizing bilinear interpolation according to the effective sampling grid position.
And S105, obtaining the resampling single viewpoint depth data according to the sampling result.
In the camera coordinate system, the z coordinate at each vertex of the sampling grid is calculated from the uniform xy sampling grid by bilinear interpolation, yielding the three-dimensional data of the resampled single-viewpoint depth data. In one embodiment, this resampling need be done only once in the whole global matching process.
And S106, obtaining a closest point pair list and a normal vector list of the point cloud to be matched by using the reference point cloud of the resampled single-viewpoint depth data and the point cloud to be matched, and obtaining a rigid body transformation matrix by using the reference point cloud, the closest point pair list and the normal vector list.
The resampled single-viewpoint depth data are obtained with the uniform sampling grid, and the reference point cloud and the point cloud to be matched are selected from them; the closest point of each point of the point cloud to be matched is found in the reference point cloud through interpolation, and its z coordinate is calculated, thereby yielding the closest-point pair list and the normal vector list of the point cloud to be matched. A rigid-body transformation matrix is then obtained from the reference point cloud, the closest-point pair list and the normal vector list by a suitable algorithm.
This global matching method acquires the three-dimensional structured point cloud of the measured object, resamples it with a uniform valid sampling grid to obtain the resampled single-viewpoint depth data, obtains the closest-point pair list and normal vector list of the point cloud to be matched by using the reference point cloud and the point cloud to be matched, and obtains the rigid-body transformation matrix from the reference point cloud, the closest-point pair list and the normal vector list. By resampling the three-dimensional structured point cloud, the uniform sampling grid partitions the resampled single-viewpoint depth data; determining the valid sampling grid positions from four adjacent valid points of the three-dimensional structured point cloud speeds up the closest-point search for the point cloud to be matched, which in turn improves the global matching speed, so efficient global matching can be achieved even for large-scale multi-viewpoint point clouds.
Referring to fig. 3, fig. 3 is a flowchart of a global matching method for a multi-view three-dimensional point cloud in yet another embodiment.
In this embodiment, the global matching method for a multi-view three-dimensional point cloud includes:
s201, acquiring a multi-viewpoint three-dimensional structured point cloud of a measured object.
S202, resampling the three-dimensional structured point cloud of each viewpoint through a sampling grid to obtain resampling single viewpoint depth data.
S203, acquiring a first reference point cloud and a first point cloud to be matched of the resampled single viewpoint depth data.
As in conventional methods, the initial value for global matching is obtained by techniques such as mechanical control devices and camera calibration. From the resampled single-viewpoint depth data obtained in step S202, one is taken as the first reference point cloud and another as the first point cloud to be matched.
And S204, transforming the first point cloud to be matched into a local coordinate system of the first reference point cloud by using initial transformation.
The camera coordinate system is used as the reference coordinate system (local coordinate system), and the first reference point cloud lies in this coordinate system. The first point cloud to be matched is transformed into the local coordinate system of the first reference point cloud by the initial transformation. Let M1 be the transformation from the reference point cloud to the global coordinate system, M1.inverse the inverse of M1, and M2 the transformation from the point cloud to be matched to the global coordinate system; then the transformation M from the point cloud to be matched to the local coordinate system of the reference point cloud is M = M1.inverse × M2.
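With homogeneous 4x4 matrices, the composition M = M1.inverse × M2 is a one-liner. The sketch below uses illustrative names and also states the identity that justifies it: a point mapped via M and then M1 lands at the same global position as mapping it via M2 directly.

```python
import numpy as np

def to_reference_local(M1, M2):
    """Initial transform from the to-match cloud into the reference
    cloud's local frame.

    M1: 4x4 transform, reference point cloud -> global coordinates.
    M2: 4x4 transform, cloud to be matched -> global coordinates.
    """
    return np.linalg.inv(M1) @ M2

# Justifying identity, for any homogeneous point p of the to-match cloud:
#   M1 @ (to_reference_local(M1, M2) @ p) == M2 @ p
```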
S205, according to the positions of each point in the first reference point cloud and the first point cloud to be matched in the sampling grid of the first reference point cloud, the closest point position of each point in the first point cloud to be matched in the first reference point cloud is obtained by utilizing bilinear interpolation, and the closest point position comprises a z coordinate.
After the first point cloud to be matched is transformed into the local coordinate system of the first reference point cloud, the position of each of its points in the sampling grid of the first reference point cloud is calculated. Clearly, only positions inside the range of the sampling grid of the first reference point cloud are valid, so points outside that range are easily discarded. For any point of the first point cloud to be matched, a new point is obtained by bilinear interpolation at the point's position in the sampling grid of the first reference point cloud; this new point serves as the closest point, and its normal vector is calculated at the same time. In this way, the closest-point position of each point of the first point cloud to be matched in the first reference point cloud is obtained, including its z coordinate. In one embodiment, a distance threshold and a normal vector angle threshold may be used to judge whether a closest-point position is valid.
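A sketch of this closest-point lookup for a single transformed point, assuming the reference grid is stored as a matrix of z values plus a validity mask (the matrix storage mentioned earlier). The out-of-range rejection, the four-valid-vertex requirement, and the distance threshold follow the description above; the names and the exact filtering choices are illustrative.

```python
import numpy as np

def closest_point_on_grid(p, grid_z, valid, x0, y0, step, max_dist=None):
    """Closest-point estimate for one point already transformed into
    the reference cloud's local frame.

    p:       (3,) point in the reference local coordinate system.
    grid_z:  (ny, nx) matrix of z values at sampling grid vertices.
    valid:   (ny, nx) boolean validity mask for those vertices.
    x0, y0, step: grid origin and sampling interval.
    Returns the interpolated closest point, or None if rejected.
    """
    gx = (p[0] - x0) / step
    gy = (p[1] - y0) / step
    j0, i0 = int(np.floor(gx)), int(np.floor(gy))
    ny, nx = grid_z.shape
    if not (0 <= j0 < nx - 1 and 0 <= i0 < ny - 1):
        return None                 # outside the sampling grid: reject
    if not valid[i0:i0 + 2, j0:j0 + 2].all():
        return None                 # needs four valid grid vertices
    fx, fy = gx - j0, gy - i0
    # Bilinear interpolation of z from the four surrounding vertices.
    z = ((1 - fy) * ((1 - fx) * grid_z[i0, j0] + fx * grid_z[i0, j0 + 1])
         + fy * ((1 - fx) * grid_z[i0 + 1, j0] + fx * grid_z[i0 + 1, j0 + 1]))
    q = np.array([p[0], p[1], z])
    if max_dist is not None and np.linalg.norm(q - p) > max_dist:
        return None                 # distance threshold check
    return q
```

Because the lookup is a constant-time index into the matrix rather than a tree search, this is where the speed-up over a k-d tree based ICP comes from.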
And S206, transforming the positions of the closest points of the first reference point cloud and the first point cloud to be matched into a global coordinate system.
After the closest-point positions of the first point cloud to be matched have all been obtained, the first reference point cloud and these closest-point positions are transformed into the global coordinate system.
S207, obtaining a point pair list and a normal vector list of the closest-point positions of the first reference point cloud and the first point cloud to be matched according to the transformation result.
S208, obtaining a first rigid-body transformation matrix by the least-squares method according to the point pair list and the normal vector list of the closest-point positions of the first reference point cloud and the first point cloud to be matched. In one embodiment, the resampled single-viewpoint depth data and the corresponding rigid-body transformations are stored separately.
According to this global matching method, the three-dimensional structured point cloud is resampled and the uniform sampling grid partitions the resampled single-viewpoint depth data. Transforming the first point cloud to be matched into the local coordinate system of the first reference point cloud speeds up the closest-point search for the point cloud to be matched, which in turn improves the global matching speed, so efficient global matching can be achieved even for large-scale multi-viewpoint point clouds.
Referring to fig. 4, fig. 4 is a flowchart of a global matching method for a multi-view three-dimensional point cloud in an embodiment.
In this embodiment, the global matching method for a multi-view three-dimensional point cloud includes:
s301, acquiring a multi-viewpoint three-dimensional structured point cloud of the measured object.
S302, resampling the three-dimensional structured point cloud of each viewpoint through a sampling grid to obtain resampling single viewpoint depth data.
And S303, taking any one single-viewpoint depth data of the resampled single-viewpoint depth data as a second point cloud to be matched, and taking any other one single-viewpoint depth data of the resampled single-viewpoint depth data as a second reference point cloud.
The point clouds of all viewpoints of the resampled single-viewpoint depth data are treated as a whole, comprising point clouds 1 to n. Any one piece of the resampled single-viewpoint depth data is taken as the second point cloud to be matched, and each of the other pieces is taken as a second reference point cloud. Specifically, the 1st piece of resampled single-viewpoint depth data is taken as the second point cloud to be matched, with all the others (the 2nd to the nth) as second reference point clouds; then the 2nd piece is taken as the second point cloud to be matched, with all the others (the 1st and the 3rd to the nth) as second reference point clouds; and so on, until the nth piece is taken as the second point cloud to be matched, with all the others (the 1st to the (n-1)th) as second reference point clouds.
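The round-robin selection order described above can be sketched as a small helper (hypothetical, assuming the n point clouds are indexed 0 to n-1):

```python
def round_robin_pairs(n):
    """Yield (matched_index, reference_indices) for one full round of
    whole-set matching: each viewpoint in turn is the cloud to be matched,
    and all remaining viewpoints serve as reference clouds.
    Illustrative sketch of the selection order described in the text."""
    for i in range(n):
        refs = [j for j in range(n) if j != i]
        yield i, refs
```

For n = 3 this yields (0, [1, 2]), (1, [0, 2]), (2, [0, 1]), matching the 1st-to-nth enumeration above.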
And S304, transforming the second point clouds to be matched into a local coordinate system of each second reference point cloud by using the initial transformation.
The camera coordinate system is used as the reference (local) coordinate system, and each second reference point cloud lies in its own reference coordinate system. The second point cloud to be matched is transformed into the local coordinate system of each second reference point cloud by the initial transformation. Let M1 be the transformation from the reference point cloud to the global coordinate system and M1.inverse be the inverse transformation of M1; let M2 be the transformation from the point cloud to be matched to the global coordinate system. The transformation M from the point cloud to be matched to the local coordinate system of the reference point cloud is then: M = M1.inverse × M2.
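The composition M = M1.inverse × M2 and its application to the points of the cloud to be matched can be written with 4×4 homogeneous matrices. A minimal NumPy sketch (the function name is our own):

```python
import numpy as np


def to_reference_local(M1, M2, points):
    """Transform points of the cloud to be matched into the local (camera)
    coordinate system of a reference cloud.

    M1: (4, 4) transform, reference cloud -> global coordinate system
    M2: (4, 4) transform, cloud to be matched -> global coordinate system
    points: (N, 3) points in the matched cloud's own coordinate system
    """
    M = np.linalg.inv(M1) @ M2                       # M = M1.inverse x M2
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ M.T)[:, :3]                      # back to (N, 3)
```

When the reference cloud already sits at the global origin (M1 = identity), M reduces to M2, i.e. the matched cloud is simply placed by its own initial transform.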
S305, obtaining the nearest point position of each point in the second point cloud to be matched in each second reference point cloud by utilizing bilinear interpolation according to the position of each second reference point cloud and each point in the second point cloud to be matched in the sampling grid of the second reference point cloud, wherein the nearest point position comprises a z coordinate.
After the second point cloud to be matched has been transformed into the local coordinate system of each second reference point cloud, the position of each of its points in the sampling grid of each second reference point cloud is calculated; clearly, only positions within the range of the sampling grid of the second reference point cloud are valid, so points outside the range can be conveniently removed. Taking any point of the second point cloud to be matched as an example, a new point is obtained by bilinear interpolation from its position in the sampling grid of the second reference point cloud and taken as the closest point for the point cloud to be matched, and the normal vector of the new point is calculated at the same time. In this way, the closest point position of each point in the second point cloud to be matched in each second reference point cloud can be obtained, and the closest point position includes a z coordinate. In one embodiment, a distance threshold and a normal vector angle threshold may be used to determine whether a closest point position is valid.
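The closest-point lookup by bilinear interpolation on the reference grid, including the out-of-range and distance-threshold rejections mentioned above, might look like the following sketch (all names and the grid layout are assumptions; the normal vector computation and angle test are omitted for brevity):

```python
import numpy as np


def closest_point_on_grid(depth, origin, spacing, p, max_dist=None):
    """Project point p = (x, y, z), already in the reference cloud's local
    coordinate system, onto the reference cloud's uniform sampling grid and
    bilinearly interpolate a z value to form the closest point.
    Returns None when p falls outside the grid or beyond max_dist.
    depth is an (H, W) array of z-values at grid vertices."""
    gx = (p[0] - origin[0]) / spacing
    gy = (p[1] - origin[1]) / spacing
    i, j = int(np.floor(gy)), int(np.floor(gx))
    if not (0 <= i < depth.shape[0] - 1 and 0 <= j < depth.shape[1] - 1):
        return None                      # outside the sampling grid: rejected
    ty, tx = gy - i, gx - j
    z = (depth[i, j] * (1 - tx) * (1 - ty)
         + depth[i, j + 1] * tx * (1 - ty)
         + depth[i + 1, j] * (1 - tx) * ty
         + depth[i + 1, j + 1] * tx * ty)
    q = np.array([p[0], p[1], z])        # closest point, including z coordinate
    if max_dist is not None and np.linalg.norm(q - p) > max_dist:
        return None                      # distance threshold check
    return q
```

Because the lookup is a constant-time grid access plus one interpolation, it avoids the nearest-neighbor search over the full reference cloud that a conventional ICP closest-point step would require.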
And S306, transforming the positions of the closest points of each second reference point cloud and each second point cloud to be matched into a global coordinate system.
After the closest point positions of the second point cloud to be matched in each second reference point cloud are obtained, the closest point positions of each second reference point cloud and the second point cloud to be matched are transformed into the global coordinate system.
And S307, obtaining a point pair list and a normal vector list of the nearest point positions of each second reference point cloud and each second point cloud to be matched according to the conversion result.
And obtaining a point pair list and a normal vector list of the closest point positions of each second reference point cloud and the corresponding second point cloud to be matched according to the conversion result in the step S306.
And S308, obtaining a second rigid body transformation matrix by using a least square method according to the point pair list and the normal vector list.
From the point pair list and the normal vector list obtained in step S307, a second rigid body transformation matrix is obtained by the least square method, and the process proceeds to step S309.
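The patent does not spell out the least-squares formulation; one common choice consistent with a point pair list plus a normal vector list is the point-to-plane metric with a small-angle linearization of the rotation, sketched below (an assumption for illustration, not necessarily the exact formulation used):

```python
import numpy as np


def point_to_plane_lsq(src, dst, normals):
    """Least-squares rigid transform sketch (point-to-plane metric,
    small-angle linearization).
    src, dst: (N, 3) matched point pairs in the global coordinate system;
    normals: (N, 3) unit normals at the dst (closest) points."""
    # Each pair contributes one linear equation in [rx, ry, rz, tx, ty, tz]:
    #   (src_i x n_i) . r + n_i . t = n_i . (dst_i - src_i)
    A = np.hstack([np.cross(src, normals), normals])   # (N, 6)
    b = np.einsum('ij,ij->i', normals, dst - src)      # (N,)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    rx, ry, rz, tx, ty, tz = x
    T = np.eye(4)
    T[:3, :3] = np.array([[1, -rz, ry],
                          [rz, 1, -rx],
                          [-ry, rx, 1]])               # linearized rotation
    T[:3, 3] = [tx, ty, tz]
    return T
```

In practice the linearized rotation block is usually re-orthonormalized (e.g. via SVD) before being stored as the rigid body transformation matrix; that step is omitted here.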
S309, judging whether a preset matching condition is met.
If the preset matching condition is met, the global matching is finished; otherwise, the process returns to step S303 and continues until the preset matching condition is met. Regarding steps S303 to S308 as one round of whole-set matching, in one embodiment the preset matching condition is a maximum number of matching rounds. In another embodiment, the preset matching condition is that a matching threshold condition is satisfied; the threshold condition may be that the distances between the closest points of the second reference point clouds and the second point cloud to be matched fall below a preset distance, or that the change in the second rigid body transformation matrix between rounds is sufficiently small.
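The round structure of steps S303 to S309 can be sketched as a simple loop that stops on either preset matching condition, the maximum round count or a sufficiently small transform update (`step` is a caller-supplied function standing in for one S303-S308 round; both names are our own):

```python
def global_match(clouds, step, max_rounds=30, tol=1e-6):
    """Iteration sketch of steps S303-S309. `step(clouds)` performs one
    round of whole-set matching and returns the largest change in the
    rigid body transform parameters during that round. Returns the number
    of rounds actually executed."""
    for r in range(max_rounds):
        delta = step(clouds)
        if delta < tol:          # matching threshold condition satisfied
            return r + 1
    return max_rounds            # maximum round count reached
```

A distance-based threshold condition would fit the same loop: `step` would then return the largest remaining closest-point distance instead of the transform update.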
In the global matching method for a multi-viewpoint three-dimensional point cloud described above, the three-dimensional structured point cloud is resampled and the resampled single-viewpoint depth data is organized by uniform sampling grids, which speeds up the search for the closest points of the point cloud to be matched. Through multiple rounds of whole-set matching, the alignment error can be distributed evenly among the point clouds of all viewpoints of the resampled single-viewpoint depth data, further improving the speed of global matching, so that efficient global matching can be achieved even for large-scale point clouds from many viewpoints.
In another aspect, the invention further provides a global matching system for a multi-viewpoint three-dimensional point cloud, which comprises:
the three-dimensional structured point cloud acquisition module is used for acquiring multi-viewpoint three-dimensional structured point cloud of the measured object;
the resampling single-viewpoint depth data acquisition module is used for resampling the three-dimensional structured point cloud of each viewpoint through the sampling grid to obtain resampling single-viewpoint depth data;
and the rigid body transformation matrix acquisition module is used for obtaining a closest point pair list and a normal vector list of the point cloud to be matched by utilizing the reference point cloud of the resampled single-viewpoint depth data and the point cloud to be matched and obtaining a rigid body transformation matrix by utilizing the reference point cloud, the closest point pair list and the normal vector list.
In one embodiment, the rigid body transformation matrix obtaining module comprises:
the first point cloud to be matched acquisition module is used for acquiring a first reference point cloud and a first point cloud to be matched of the resampled single viewpoint depth data;
a module for transforming the first point cloud to be matched into the local coordinate system of the first reference point cloud by using an initial transformation;
the first point cloud closest point acquisition module is used for acquiring the closest point position of each point in the first point cloud to be matched in the first reference point cloud by utilizing bilinear interpolation according to the first reference point cloud and the position of each point in the first point cloud to be matched in the sampling grid of the first reference point cloud, wherein the closest point position comprises a z coordinate;
the first reference point cloud transformation module is used for transforming the positions of the closest points of the first reference point cloud and the first point cloud to be matched into a global coordinate system;
the first reference point cloud list acquisition module is used for acquiring a point pair list and a normal vector list of the closest point positions of the first reference point cloud and the first point cloud to be matched according to the conversion result;
and the first rigid body transformation matrix acquisition module is used for acquiring a first rigid body transformation matrix by using a least square method according to a point pair list and a normal vector list of the closest point positions of the first reference point cloud and the first point cloud to be matched.
In one embodiment, the rigid body transformation matrix obtaining module further comprises:
the second point cloud to be matched acquisition module is used for taking any one single-viewpoint depth data of the resampled single-viewpoint depth data as a second point cloud to be matched, and taking any other one single-viewpoint depth data of the resampled single-viewpoint depth data as a second reference point cloud;
a module for transforming the second point cloud to be matched into the local coordinate system of each second reference point cloud by using the initial transformation;
the second point cloud closest point acquisition module is used for obtaining the closest point position of each point in the second point cloud to be matched in each second reference point cloud by utilizing bilinear interpolation according to the position of each second reference point cloud and each point in the second point cloud to be matched in the sampling grid of the second reference point cloud, and the closest point position comprises a z coordinate;
the second reference point cloud conversion module is used for converting the closest point position of each second reference point cloud and the second point cloud to be matched into a global coordinate system;
the second reference point cloud list acquisition module is used for acquiring a point pair list and a normal vector list of the closest point position of each second reference point cloud and the second point cloud to be matched according to the conversion result;
and the second rigid body transformation matrix acquisition module is used for acquiring a second rigid body transformation matrix by using a least square method according to the point pair list and the normal vector list.
In one embodiment, the resampled single viewpoint depth data acquisition module comprises:
the system comprises a dividing and sampling grid module, a sampling grid module and a sampling grid module, wherein the dividing and sampling grid module is used for dividing the xy plane of the three-dimensional structured point cloud at equal intervals by using a camera coordinate system as a reference coordinate system to form a uniform sampling grid, and the maximum and minimum coordinates of the xy plane of the three-dimensional structured point cloud in the x and y directions are used as sampling ranges;
the sampling grid vertex z coordinate acquisition module is used for acquiring a z coordinate of the sampling grid vertex according to the three-dimensional structured point cloud;
and the sampling result acquisition module is used for acquiring the resampling single viewpoint depth data according to the sampling result.
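For the sampling grid dividing module, the sampling range and uniform grid vertex coordinates described above might be set up as follows (a sketch; `spacing` and the returned coordinate arrays are our own interface choices):

```python
import numpy as np


def build_sampling_grid(points, spacing):
    """Sketch of the sampling-grid setup: the x/y extent of the structured
    point cloud (in the camera coordinate system) defines the sampling
    range, which is divided at equal intervals into a uniform grid.
    Returns the x and y coordinates of the grid vertices."""
    xy = points[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)   # min/max x and y coordinates
    nx = int(np.ceil((hi[0] - lo[0]) / spacing)) + 1
    ny = int(np.ceil((hi[1] - lo[1]) / spacing)) + 1
    xs = lo[0] + spacing * np.arange(nx)
    ys = lo[1] + spacing * np.arange(ny)
    return xs, ys
```

The z value at each grid vertex would then be filled in by bilinear interpolation from the four neighboring valid points of the structured cloud, as claims 2 and 3 describe.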
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these all fall within the protection scope of the invention. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A global matching method of multi-view three-dimensional point cloud is characterized by comprising the following steps:
acquiring a multi-viewpoint three-dimensional structured point cloud of a measured object;
in a camera coordinate system, resampling the three-dimensional structured point cloud of each viewpoint through a uniform sampling grid to obtain resampling single viewpoint depth data;
acquiring a first reference point cloud and a first point cloud to be matched of the resampled single viewpoint depth data;
transforming the first point cloud to be matched into the local coordinate system of the first reference point cloud by using the camera coordinate system as a local coordinate system and by using initial transformation;
according to the positions of each point in the first reference point cloud and the first point cloud to be matched in the sampling grid of the first reference point cloud, obtaining the closest point position of each point in the first point cloud to be matched in the first reference point cloud by utilizing bilinear interpolation, wherein the closest point position comprises a z coordinate;
transforming the positions of the closest points of the first reference point cloud and the first point cloud to be matched into a global coordinate system;
obtaining a point pair list and a normal vector list of the closest point positions of the first reference point cloud and the first point cloud to be matched according to the transformation result;
obtaining a first rigid body transformation matrix by using a least square method according to a point pair list and a normal vector list of the closest point positions of the first reference point cloud and the first point cloud to be matched;
wherein a camera coordinate system is used as the local coordinate system, M1 is the transformation from the reference point cloud to the global coordinate system, M1.inverse is the inverse transformation of M1, and M2 is the transformation from the point cloud to be matched to the global coordinate system; the transformation M from the point cloud to be matched to the local coordinate system of the reference point cloud is then: M = M1.inverse × M2.
2. The global matching method for multi-view three-dimensional point clouds according to claim 1, wherein the step of resampling the three-dimensional structured point cloud of each view to obtain resampled single-view depth data through a sampling grid comprises:
dividing the xy plane of the three-dimensional structured point cloud at equal intervals by using a camera coordinate system as a reference coordinate system to form a uniform sampling grid, and taking the maximum and minimum coordinates of the xy plane of the three-dimensional structured point cloud in the x and y directions as sampling ranges;
obtaining a z coordinate of the sampling grid vertex according to the three-dimensional structured point cloud;
and obtaining the resampling single viewpoint depth data according to the sampling result.
3. The global matching method for multi-view three-dimensional point clouds according to claim 2, wherein the step of obtaining the z-coordinate at the vertex of the sampling grid from the three-dimensional structured point cloud is specifically:
determining an effective sampling grid position of the sampling grid according to four adjacent effective points in the three-dimensional structured point cloud;
and calculating the z coordinate at the vertex of the sampling grid by utilizing bilinear interpolation according to the effective sampling grid position.
4. The global matching method for multi-view three-dimensional point cloud according to claim 1, further comprising:
any one single-viewpoint depth data of the resampled single-viewpoint depth data is used as a second point cloud to be matched, and any other one single-viewpoint depth data of the resampled single-viewpoint depth data is used as a second reference point cloud;
transforming the second point clouds to be matched into a local coordinate system of each second reference point cloud by using initial transformation;
according to the positions of each second reference point cloud and each point in the second point cloud to be matched in the sampling grid of the second reference point cloud, obtaining the closest point position of each point in the second point cloud to be matched in each second reference point cloud by utilizing bilinear interpolation, wherein the closest point position comprises a z coordinate;
transforming the closest point position of each second reference point cloud and the second point cloud to be matched into a global coordinate system;
obtaining a point pair list and a normal vector list of the nearest point position of each second reference point cloud and the second point cloud to be matched according to the transformation result;
and obtaining a second rigid body transformation matrix by using a least square method according to the point pair list and the normal vector list.
5. The global matching method for a multi-viewpoint three-dimensional point cloud according to claim 4, wherein after the step of obtaining the second rigid body transformation matrix by the least square method according to the point pair list and the normal vector list, the method further comprises a step of judging whether a preset matching condition is met; if not, the method returns to the step of taking any one piece of single-viewpoint depth data of the resampled single-viewpoint depth data as the second point cloud to be matched and taking any other piece of single-viewpoint depth data of the resampled single-viewpoint depth data as the second reference point cloud, and continues until the preset matching condition is met.
6. The global matching method for multi-view three-dimensional point clouds according to claim 5, wherein the preset matching condition is that a matching threshold condition is satisfied, and the threshold condition is that a distance between the closest points of the second reference point cloud and the second point cloud to be matched satisfies a preset distance.
7. The global matching method for multi-view three-dimensional point cloud according to any one of claims 1 to 6, wherein the initial value of global matching is obtained by a mechanical control device or a camera calibration technique.
8. A global matching system for multi-view three-dimensional point clouds, comprising:
the three-dimensional structured point cloud acquisition module is used for acquiring multi-viewpoint three-dimensional structured point cloud of the measured object;
the resampling single-viewpoint depth data acquisition module is used for resampling the three-dimensional structured point cloud of each viewpoint in a camera coordinate system through a uniform sampling grid to obtain resampling single-viewpoint depth data;
the first point cloud to be matched acquisition module is used for acquiring a first reference point cloud and a first point cloud to be matched of the resampled single viewpoint depth data;
a module for transforming the first point cloud to be matched into the local coordinate system of the first reference point cloud by an initial transformation, using the camera coordinate system as the local coordinate system;
the first point cloud closest point acquisition module is used for obtaining the closest point position of each point in the first point cloud to be matched in the first reference point cloud by utilizing bilinear interpolation according to the first reference point cloud and the position of each point in the first point cloud to be matched in the sampling grid of the first reference point cloud, wherein the closest point position comprises a z coordinate;
the first reference point cloud transformation module is used for transforming the positions of the closest points of the first reference point cloud and the first point cloud to be matched into a global coordinate system;
the first reference point cloud list acquisition module is used for acquiring a point pair list and a normal vector list of the closest point positions of the first reference point cloud and the first point cloud to be matched according to the transformation result;
the first rigid body transformation matrix acquisition module is used for acquiring a first rigid body transformation matrix by using a least square method according to a point pair list and a normal vector list of the closest point positions of the first reference point cloud and the first point cloud to be matched;
wherein a camera coordinate system is used as the local coordinate system, M1 is the transformation from the reference point cloud to the global coordinate system, M1.inverse is the inverse transformation of M1, and M2 is the transformation from the point cloud to be matched to the global coordinate system; the transformation M from the point cloud to be matched to the local coordinate system of the reference point cloud is then: M = M1.inverse × M2.
9. The global matching system for multi-view three-dimensional point clouds of claim 8, further comprising:
the second point cloud to be matched acquisition module is used for taking any one single viewpoint depth data of the resampled single viewpoint depth data as a second point cloud to be matched, and taking any other one single viewpoint depth data of the resampled single viewpoint depth data as a second reference point cloud;
a module for transforming the second point cloud to be matched into the local coordinate system of each second reference point cloud by using the initial transformation;
the second point cloud closest point acquisition module is used for obtaining the closest point position of each point in the second point cloud to be matched in each second reference point cloud by utilizing bilinear interpolation according to the position of each second reference point cloud and each point in the second point cloud to be matched in the sampling grid of the second reference point cloud, and the closest point position comprises a z coordinate;
the second reference point cloud conversion module is used for converting the closest point position of each second reference point cloud and the second point cloud to be matched into a global coordinate system;
the second reference point cloud list acquisition module is used for acquiring a point pair list and a normal vector list of the closest point position of each second reference point cloud and the second point cloud to be matched according to the transformation result;
and the second rigid body transformation matrix acquisition module is used for acquiring a second rigid body transformation matrix by using a least square method according to the point pair list and the normal vector list.
10. The global matching system for multi-view three-dimensional point clouds of claim 8, wherein the resampling single-view depth data acquisition module comprises:
the sampling grid dividing module is used for dividing the xy plane of the three-dimensional structured point cloud at equal intervals by using a camera coordinate system as a reference coordinate system to form a uniform sampling grid, and taking the maximum and minimum coordinates of the xy plane of the three-dimensional structured point cloud in the x and y directions as sampling ranges;
the sampling grid vertex z coordinate acquisition module is used for acquiring a z coordinate of the sampling grid vertex according to the three-dimensional structured point cloud;
and the sampling result acquisition module is used for acquiring the resampling single viewpoint depth data according to the sampling result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710660439.0A CN107507127B (en) | 2017-08-04 | 2017-08-04 | Global matching method and system for multi-viewpoint three-dimensional point cloud |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107507127A CN107507127A (en) | 2017-12-22 |
CN107507127B true CN107507127B (en) | 2021-01-22 |
Family
ID=60690422
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108053481B (en) * | 2017-12-26 | 2021-11-30 | 深圳市易尚展示股份有限公司 | Method and device for generating three-dimensional point cloud normal vector and storage medium |
CN108346165B (en) * | 2018-01-30 | 2020-10-30 | 深圳市易尚展示股份有限公司 | Robot and three-dimensional sensing assembly combined calibration method and device |
CN110555085B (en) * | 2018-03-29 | 2022-01-14 | 中国石油化工股份有限公司 | Three-dimensional model loading method and device |
KR102068993B1 (en) * | 2018-05-24 | 2020-01-22 | 주식회사 이누씨 | Method And Apparatus Creating for Avatar by using Multi-view Image Matching |
WO2020017668A1 (en) * | 2018-07-16 | 2020-01-23 | 주식회사 이누씨 | Method and apparatus for generating avatar by using multi-view image matching |
CN109493375B (en) * | 2018-10-24 | 2021-01-12 | 深圳市易尚展示股份有限公司 | Data matching and merging method and device for three-dimensional point cloud and readable medium |
CN115222787B (en) * | 2022-09-20 | 2023-01-10 | 宜科(天津)电子有限公司 | Real-time point cloud registration method based on hybrid retrieval |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101127123A (en) * | 2007-09-11 | 2008-02-20 | 东南大学 | Sign point hole filling method based on neural network in tri-D scanning point cloud |
CN103093498A (en) * | 2013-01-25 | 2013-05-08 | 西南交通大学 | Three-dimensional human face automatic standardization method |
CN104217458A (en) * | 2014-08-22 | 2014-12-17 | 长沙中科院文化创意与科技产业研究院 | Quick registration method for three-dimensional point clouds |
CN105654483A (en) * | 2015-12-30 | 2016-06-08 | 四川川大智胜软件股份有限公司 | Three-dimensional point cloud full-automatic registration method |
Legal Events
Date | Code | Title | Description
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||