CN110956700A - Density regulation and control method for generating point cloud based on motion recovery structure - Google Patents

Info

Publication number
CN110956700A
Authority
CN
China
Prior art keywords
image
point
point cloud
density
points
Prior art date
Legal status
Granted
Application number
CN201911222767.8A
Other languages
Chinese (zh)
Other versions
CN110956700B (en)
Inventor
李自胜
蒙浩
肖晓萍
胡朝海
王露明
杨侨
Current Assignee
Southwest University of Science and Technology
Original Assignee
Southwest University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Southwest University of Science and Technology filed Critical Southwest University of Science and Technology
Priority to CN201911222767.8A priority Critical patent/CN110956700B/en
Publication of CN110956700A publication Critical patent/CN110956700A/en
Application granted granted Critical
Publication of CN110956700B publication Critical patent/CN110956700B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The invention provides a density regulation method for point clouds generated based on a motion recovery structure, comprising the following steps: generating a point cloud from a sequence of images and comparing the density of the generated point cloud with that of the required target point cloud to determine whether the regulation direction is a reduction or an increase of point cloud density, wherein the image sequence comprises a first image and a second image that together constrain the positions of the target points; extracting feature points from the first image, setting the cell size according to the spatial mapping relationship between the first-image feature points and the point cloud generated from the first image, and meshing the first-image feature-point region with cells of the set size; and regulating the point cloud density to reduce or increase it. Based on the motion recovery structure method, the method regulates the density of the point cloud generated from two images: the point cloud density can be reduced to simplify the point cloud data, or increased to supplement detail information, achieving a good regulation effect.

Description

Density regulation and control method for generating point cloud based on motion recovery structure
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a density regulation method for point clouds generated based on a motion recovery structure.
Background
Point cloud density matching is the basis for fusing point cloud data from different sources, and is important for point cloud registration, hole repair and similar tasks. Existing point cloud density regulation methods regulate density poorly under the influence of noise, and the prior art contains few studies on increasing point cloud density.
Disclosure of Invention
In view of the deficiencies in the prior art, it is an object of the present invention to address one or more of the problems set forth above. For example, one objective of the present invention is to provide a regulation method, based on the motion recovery structure method, that can reduce or increase the density of a point cloud.
In order to achieve the above object, an aspect of the present invention provides a density regulation method for point clouds generated based on a motion recovery structure, which may include the following steps. A point cloud is generated from a sequence of images, and its density is compared with that of the required target point cloud to determine whether the regulation direction is a reduction or an increase of point cloud density; the image sequence comprises a first image and a second image, which together constrain the position of a target point. Feature points are extracted from the first image, the cell size is set according to the spatial mapping relationship between the first-image feature points and the point cloud generated from the first image, and the first-image feature-point region, which comprises a plurality of cells, is meshed with cells of the set size. The point cloud density is then regulated to achieve the reduction or increase. Achieving a point cloud density reduction may comprise: calculating, for each cell, the distance between each first-image feature point in the cell and the cell's centre point, selecting the feature point nearest to the centre, and letting this nearest feature point represent all feature points of the cell, thereby reducing the feature-point density; acquiring, in the second image, the point matched to the nearest feature point of each cell, the nearest feature points of the first image and their matches in the second image forming first matching point pairs; and generating an image point cloud from the first matching point pairs. Achieving a point cloud density increase may comprise: performing curve fitting on the feature points in each cell and interpolating on the fitted curves, thereby increasing the feature-point density; acquiring, in the second image, the points matched to the feature points obtained by interpolation in each cell, these interpolated feature points and their matches in the second image forming second matching point pairs; and generating an image point cloud from the second matching point pairs.
In an exemplary embodiment of the density regulation method for point clouds generated based on a motion recovery structure of the present invention, the achieving of the point cloud density reduction may further include adjusting the set cell size.
In an exemplary embodiment of the method for controlling density of a point cloud generated based on a motion recovery structure, the curve fitting includes a least square method-based curve fitting of feature points in each cell.
In an exemplary embodiment of the density control method for generating a point cloud based on a motion restoration structure according to the present invention, the obtaining a point in the second image that matches a nearest feature point in each cell may include obtaining a point in the second image that matches a nearest feature point of each cell using a KLT feature point tracking algorithm.
In an exemplary embodiment of the density control method for generating a point cloud based on a motion restoration structure, the obtaining a point in the second image that matches a feature point obtained after interpolation processing in each cell may include obtaining a point in the second image that matches a feature point obtained after interpolation processing in each cell by using a KLT feature point tracking algorithm.
In an exemplary embodiment of the density regulation method for point clouds generated based on a motion recovery structure, the first image feature point region may be the region bounded by the maximum and minimum feature-point coordinates in the X and Y directions of the two-dimensional coordinate system.
Another aspect of the present invention provides a three-dimensional reconstruction method, which comprises regulating the point cloud density by the density regulation method described above and then performing three-dimensional reconstruction.
Compared with the prior art, the invention has the following beneficial effects. The regulation method starts from density regulation of the two-dimensional feature points and regulates the density of the point cloud generated from two images based on a motion recovery structure method: the point cloud density can be reduced to simplify the point cloud data, or increased to supplement detail information. The invention controls the density of the generated point cloud by adjusting the number of feature points per cell, the cell size, or the interpolation step length, and obtains a good regulation effect.
Drawings
The above and other objects and features of the present invention will become more apparent from the following description taken in conjunction with the accompanying drawings, in which:
FIG. 1 illustrates the cell division of the first-image feature-point region according to an exemplary embodiment of the invention;
FIG. 2 illustrates the first-image feature-point density reduction according to an exemplary embodiment of the invention;
FIG. 3 illustrates the first-image feature-point density increase according to an exemplary embodiment of the invention;
FIG. 4 shows the original Herz-Jesu image and the density-regulated results according to an exemplary embodiment of the invention;
FIG. 5 shows the original Mechanical part 1 image and the density-regulated results according to an exemplary embodiment of the invention;
FIG. 6 shows the original Mechanical part 2 image and the density-regulated results according to an exemplary embodiment of the invention.
Detailed Description
Hereinafter, a density control method for generating a point cloud based on a motion restoration structure according to the present invention will be described in detail with reference to the accompanying drawings and exemplary embodiments.
Fig. 1 is a schematic diagram illustrating the cell division of the first image feature point region according to an exemplary embodiment of the present invention. Fig. 2 illustrates the first image feature point density reduction: fig. 2(a) shows the first image feature points before regulation, and fig. 2(b) shows the first image after the feature points have been reduced. Fig. 3 illustrates the first image feature point density increase: fig. 3(a) shows the feature points before regulation, and fig. 3(b) shows the first image after the feature points have been increased. Fig. 4 shows the original Herz-Jesu image and the density-regulated results: fig. 4(a) is the original Herz-Jesu image; fig. 4(b) is the reconstruction without density regulation; figs. 4(c) and 4(d) are reconstructions with cell side lengths of 5 and 15, respectively; figs. 4(e) and 4(f) are reconstructions with interpolation step values of 1 and 3, respectively, for increasing the point cloud density. Fig. 5 shows the original Mechanical part 1 image and the density-regulated results: fig. 5(a) is the original image; fig. 5(b) is the reconstruction without density regulation; figs. 5(c) and 5(d) are reconstructions with cell side lengths of 5 and 15, respectively; figs. 5(e) and 5(f) are reconstructions with interpolation step values of 2 and 4, respectively, for increasing the point cloud density. Fig. 6 shows the original Mechanical part 2 image and the density-regulated results: fig. 6(a) is the original image; fig. 6(b) is the reconstruction without density regulation; figs. 6(c) and 6(d) are reconstructions with cell side lengths of 5 and 15, respectively; figs. 6(e) and 6(f) are reconstructions with interpolation step values of 2 and 4, respectively, for increasing the point cloud density.
Specifically, the regulation method of the invention uses two images to constrain the position of a target point. First, a point cloud is generated from the image sequence, and its density is compared with the target point cloud density to determine the regulation direction. Second, image feature points are extracted, the cell size is set according to the spatial mapping relationship between the image feature points and the corresponding generated point cloud, and the feature-point region is meshed uniformly; the feature point nearest to each cell centre is taken as the cell's feature point to reduce density, and the feature points in each cell are interpolated to increase density. Matching points are then obtained with the KLT feature-point tracking algorithm, and finally an image point cloud is generated from the matching point pairs. In the invention, the density of the two-dimensional feature points may be described by the average distance between feature points in the plane, and the point cloud density by the average spacing of the point cloud in space.
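The average-spacing metric used above is straightforward to compute. A minimal numpy sketch (an illustration, not code from the patent; the function name is my own) that works for both the planar feature points and the spatial point cloud:

```python
import numpy as np

def average_nearest_neighbor_distance(points):
    """Mean distance from each point to its nearest neighbor.

    points: (N, D) array of 2-D feature points or 3-D cloud points.
    A smaller value means a denser point set.
    """
    pts = np.asarray(points, dtype=float)
    # Pairwise distances; the diagonal (self-distance) is masked out.
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)
    return dist.min(axis=1).mean()

# Doubling the grid pitch doubles the average spacing.
coarse = np.stack(np.meshgrid(np.arange(0, 10, 2), np.arange(0, 10, 2)), -1).reshape(-1, 2)
fine = np.stack(np.meshgrid(np.arange(0, 10, 1), np.arange(0, 10, 1)), -1).reshape(-1, 2)
print(average_nearest_neighbor_distance(coarse))  # 2.0
print(average_nearest_neighbor_distance(fine))    # 1.0
```

The regulation direction of step S01 can then be chosen by comparing this value for the generated point cloud and the target point cloud.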
One aspect of the invention provides a density control method for generating a point cloud based on a motion recovery structure. In an exemplary embodiment of the method for controlling density of a point cloud generated based on a motion recovery structure of the present invention, the method may include:
S01: generate a point cloud from the sequence images, and compare the density of the generated point cloud with that of the required target point cloud to determine whether the regulation direction is to reduce or to increase the point cloud density.
In the method, the point cloud is generated from the sequence images based on the motion recovery structure method. When recovering three-dimensional information of an object from two-dimensional images, the constraint of a single image can only place the target point somewhere on an outward ray starting at the optical centre; therefore the sequence images of the invention comprise a first image and a second image, and the position of the target point is constrained using both.
S02: extract the first image feature points, set the cell size according to the spatial mapping relationship between the first-image feature points and the point cloud generated from the first image, and mesh the first image feature point region with cells of the set size.
First the side length of the cell is determined; the cells of the present invention may be square. After the side length is determined, the region between the maximum and minimum two-dimensional feature-point coordinates extracted from the first image is meshed uniformly. For example, in the divided grid of fig. 1, the black dots represent first image feature points; points A and B are the points with the maximum and minimum y coordinate, respectively, and points C and D are the points with the minimum and maximum x coordinate, respectively. The region enclosed by points A, B, C and D is taken as the first image feature point region. It should be noted that if the region cannot be divided into an integer number of cells, the strip remaining after the integer division may itself be divided by setting cells again: for example, for a feature point region with a side length of 10 cm and cells with a side length of 3 cm, three full cells cover 9 cm of each side and the remaining 1 cm strip is covered by additional cells, i.e. 4 cells per side.
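The bounding-box meshing described above can be sketched as follows. This is a hedged numpy illustration (the function name and the index-based handling of the leftover strip are my own choices, not the patent's):

```python
import numpy as np
from collections import defaultdict

def grid_cells(points, cell_size):
    """Partition 2-D feature points into square cells of side `cell_size`.

    The region is the bounding box of the points (spanned by the extreme
    x/y coordinates, i.e. points A, B, C, D of Fig. 1).  A partial strip
    left over when the box is not an integer multiple of `cell_size`
    simply becomes a row/column of smaller boundary cells.
    """
    pts = np.asarray(points, dtype=float)
    origin = pts.min(axis=0)                      # bottom-left of the box
    idx = np.floor((pts - origin) / cell_size).astype(int)
    cells = defaultdict(list)                     # cell index -> point indices
    for k, (i, j) in enumerate(idx):
        cells[(i, j)].append(k)
    return cells

pts = [(0, 0), (1, 1), (4, 0.5), (9, 9)]
cells = grid_cells(pts, cell_size=3.0)
print(sorted(cells))   # [(0, 0), (1, 0), (3, 3)]
```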
S03: regulate the point cloud density to achieve the reduction or increase of the point cloud density.
In the above, the adjusting and controlling the point cloud density to achieve the point cloud density reduction may include:
S100: calculate the distance between each first image feature point in a cell and the centre of that cell, and let the feature point closest to the centre (the nearest feature point) represent all feature points in the cell. For example, as shown in fig. 2, the cell in fig. 2(a) contains 8 first image feature points (black dots). Comparing the distance of each feature point to the cell centre, the nearest feature point (point a in the figure) represents all points in the cell, leaving one feature point per cell, as shown in fig. 2(b). In the special case that two or more feature points are equidistant from, and closest to, the cell centre, either all equidistant feature points may be retained or only one of them taken.
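Step S100 can be sketched in a few lines, assuming the same cell indexing as the grid-division step. The function name and the tie-breaking choice (keep the first point seen) are my own:

```python
import numpy as np

def reduce_by_cell(points, cell_size):
    """Keep, per cell, only the feature point closest to the cell centre
    (step S100); returns sorted indices into `points` of retained points."""
    pts = np.asarray(points, dtype=float)
    origin = pts.min(axis=0)
    idx = np.floor((pts - origin) / cell_size).astype(int)
    keep = {}
    for k, cell in enumerate(map(tuple, idx)):
        centre = origin + (np.array(cell) + 0.5) * cell_size
        d = np.linalg.norm(pts[k] - centre)
        if cell not in keep or d < keep[cell][0]:
            keep[cell] = (d, k)   # ties keep the first point seen
    return sorted(k for _, k in keep.values())

pts = [(0.2, 0.2), (1.4, 1.6), (2.9, 0.1), (5.0, 5.0)]
print(reduce_by_cell(pts, cell_size=3.0))  # [1, 3]
```

The first three points share one cell; only the point nearest its centre survives, together with the lone point of the other cell.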
S101: obtain, in the second image, the point matched to the nearest feature point of each cell; the nearest feature point of each cell in the first image and its match in the second image form a first matching point pair. The first matching point pairs consist of a plurality of pairs, each formed by a cell's nearest feature point and its match in the second image. For example, a KLT (Kanade-Lucas-Tomasi) feature point tracking algorithm may be used to acquire the points in the second image that match the feature points retained in S100.
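The core of the KLT tracker named above is an iterative Lucas-Kanade least-squares step. The following is my own minimal numpy illustration of that core, not the patent's implementation: a single global translation is estimated in one least-squares step, with windows, pyramids and iterations stripped away.

```python
import numpy as np

def lk_translation(img1, img2):
    """One least-squares Lucas-Kanade step estimating a global translation
    (dx, dy) between two grayscale images."""
    I1 = np.asarray(img1, dtype=float)
    I2 = np.asarray(img2, dtype=float)
    Iy, Ix = np.gradient(I1)          # image gradients (axis 0 = y, axis 1 = x)
    It = I2 - I1                      # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    # Least-squares solution of  A d = b  (the 2x2 normal equations of LK).
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # (dx, dy)

# Smooth synthetic image shifted by one pixel in x.
x = np.linspace(0, 2 * np.pi, 64)
img = np.sin(x)[None, :] * np.ones((64, 1))
shifted = np.roll(img, 1, axis=1)
dx, dy = lk_translation(img, shifted)
```

The recovered `dx` is close to the true one-pixel shift; a real KLT tracker repeats this step per feature window over an image pyramid.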
S102: generate an image point cloud from the first matching point pairs.
Achieving an increase in point cloud density may include the steps of:
S200: perform curve fitting on the feature points in each cell, and interpolate on the fitted curves to increase the feature-point density. For example, least-squares curve fitting is performed on the two-dimensional feature points in each cell, and interpolation on the fitted curve achieves the density increase. Furthermore, the interpolation step length can be estimated from the cell side length, i.e. the maximum interpolation step can be smaller than the cell side length; it can also be a set or given value. For example, as shown in fig. 3, the cell in fig. 3(a) contains 8 first image feature points (black dots); after interpolation on the fitted curve, 4 first image feature points are added, shown as gray dots in fig. 3(b).
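A sketch of S200 under the assumption that a cell's points can be modelled as a curve y = f(x); the function name and polynomial degree are my own choices, and `np.polyfit` stands in for the least-squares fitting:

```python
import numpy as np

def densify_cell(cell_points, step, degree=2):
    """Fit a least-squares polynomial y = f(x) to the feature points of one
    cell and resample it at interval `step` (step S200)."""
    pts = np.asarray(cell_points, dtype=float)
    # Degree is capped so the fit is well-posed for small cells.
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], deg=min(degree, len(pts) - 1))
    xs = np.arange(pts[:, 0].min(), pts[:, 0].max() + 1e-9, step)
    return np.column_stack([xs, np.polyval(coeffs, xs)])

# Points on the parabola y = x^2 are recovered exactly at the new samples.
cell = [(0, 0), (1, 1), (2, 4), (3, 9)]
dense = densify_cell(cell, step=0.5)
print(len(dense))  # 7 samples over [0, 3]
```

A smaller `step` yields more interpolated feature points, matching the observation below that smaller interpolation steps give denser reconstructed point clouds.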
S201: obtain, in the second image, the points matched to the feature points obtained by interpolation in each cell; the interpolated feature points of each cell and their matches in the second image form second matching point pairs. The second matching point pairs consist of a plurality of pairs, each formed by an interpolated feature point and its match in the second image. For example, the matching points in the second image may be acquired using a KLT (Kanade-Lucas-Tomasi) feature point tracking algorithm.
S202: generate an image point cloud from the second matching point pairs.
In the present embodiment, the first image feature point region is divided by the set cells. In the point cloud density reduction, the cell side length determines how many first-image feature points remain after regulation, so the point cloud density can also be reduced by adjusting the set cell size: the larger the cell side length, the smaller the resulting point cloud density.
In this embodiment, the feature point of the first image may be a strong corner point. The extracting of the first image feature point may be to extract a first image strong corner point. The extracting of the strong corner point of the first image may include:
S300: filter all pixels of the first image with horizontal and vertical difference operators to obtain $I_x$ and $I_y$, the partial derivatives of the grayscale image in the x and y directions, and form from them the 2×2 matrix

$$\begin{bmatrix} I_x^2 & I_xI_y \\ I_xI_y & I_y^2 \end{bmatrix}$$

S301: apply Gaussian smoothing filtering to the obtained 2×2 matrix to obtain the matrix M:

$$M = w(x,y) \otimes \begin{bmatrix} I_x^2 & I_xI_y \\ I_xI_y & I_y^2 \end{bmatrix}$$

S302: compute the eigenvalues $\lambda_1$, $\lambda_2$ of the matrix M, set a threshold $T_C$ for the extraction of strong corner points and a threshold $T_D$ for the distance between adjacent strong corner points, and judge whether a pixel (x, y) is a strong corner point: if $\lambda_1 \ge \lambda_2$ and $\lambda_2 \ge T_C$, the pixel (x, y) is determined to be a strong corner point, where (x, y) are coordinates in the two-dimensional coordinate system and $T_C$ and $T_D$ are given values. Further, the threshold may be taken as $T_C = k\lambda_{2max}$, where $\lambda_{2max}$ is the maximum among the smaller eigenvalues over all image pixels. Further, the same number of corner points is extracted in the first and second images, and the distance between adjacent corner points in the same image must be greater than the given threshold $T_D$.
In particular, extracting strong corner points is essentially detecting them. Corner detection is a local feature point detection: at a corner, the first derivative of the grayscale image is locally maximal, and the image intensity changes in both the horizontal and vertical directions. Let the value of the grayscale image at point (x, y) be I(x, y), create an n×n window O centred on this point, and translate the image window by [u, v] to produce the intensity change E(u, v):

$$E(u,v)=\sum_{(x,y)\in O} w(x,y)\,\bigl[I(x+u,\,y+v)-I(x,y)\bigr]^2 \qquad (2)$$

For a small local displacement [u, v], Taylor-expanding I(x+u, y+v) and neglecting terms of second order and above, formula (2) becomes

$$E(u,v)\approx\sum_{(x,y)\in O} w(x,y)\,\bigl[I_x u + I_y v\bigr]^2 \qquad (3)$$

where $I_x$ and $I_y$ are the partial derivatives of the grayscale image in the x and y directions, and w(x, y) is a Gaussian filter. Writing formula (3) in matrix form:

$$E(u,v)\approx \begin{bmatrix} u & v\end{bmatrix} M \begin{bmatrix} u \\ v\end{bmatrix} \qquad (4)$$

where M is a 2×2 matrix,

$$M=\sum_{(x,y)\in O} w(x,y)\begin{bmatrix} I_x^2 & I_xI_y \\ I_xI_y & I_y^2\end{bmatrix} \qquad (5)$$

E approximates a local autocorrelation function, M describes its shape at the point, and M is used to measure whether the point (x, y) is a corner. Let the two eigenvalues of M be $\lambda_1$ and $\lambda_2$; if the smaller of the two is greater than the given threshold, i.e. $\lambda_1 \ge \lambda_2$ and $\lambda_2 \ge k\lambda_{2max}$, a strong corner point is obtained.
In this embodiment, the cell may be square. Of course, the cell shape of the present invention is not limited to this: it may for example be rectangular or circular, and may be set according to the shape of the first image feature point region. For instance, if the feature point region is set to be rectangular, the cells may also be set rectangular. When the cell is square, its side length may be calculated as follows.
the camera intrinsic parameters describe the internal structure of the camera, which is defined by a 3 × 3 matrix K:
Figure BDA0002301306790000074
wherein: a isuAnd avScaling factors for the image in the x and y directions, respectively, s is the gradient, (u)0,v0) Is the position on the image plane that intersects the optical axis.
According to the pinhole camera model, a coordinate point in the camera coordinate system and the corresponding point in the image coordinate system satisfy the conversion relationship

$$x=f\,\frac{X_C}{Z_C},\qquad y=f\,\frac{Y_C}{Z_C} \qquad (7)$$

i.e. a point $(X_C, Y_C, Z_C)$ in the camera coordinate system is imaged at (x, y) on the imaging plane.
The conversion relationship between the image coordinate system and the pixel coordinate system is

$$u=\frac{x}{dx}+u_0,\qquad v=\frac{y}{dy}+v_0 \qquad (8)$$

where (u, v) is a pixel point in the pixel coordinate system, and dx and dy are the physical sizes of a pixel in the x and y directions, respectively.
Combining formula (7) with formula (8) gives

$$u=\frac{f}{dx}\cdot\frac{X_C}{Z_C}+u_0,\qquad v=\frac{f}{dy}\cdot\frac{Y_C}{Z_C}+v_0 \qquad (9)$$

Here f is a property of the camera itself, an intrinsic parameter. Let $(u_1, v_1)$ and $(u_2, v_2)$ be two pixel points in the pixel coordinate system. For two different points $(X_C, Y_C, Z_C)$ in camera coordinates, $(u_1, v_1)$ and $(u_2, v_2)$ can be obtained according to formula (9), and the distance between $(u_1, v_1)$ and $(u_2, v_2)$ can be used as the side length of the cell.
The invention further provides a three-dimensional reconstruction method based on the motion recovery structure method. The three-dimensional reconstruction method comprises regulating the point cloud density with the density regulation method described above and then performing three-dimensional reconstruction on the point cloud.
In order that the above-described exemplary embodiments of the invention may be better understood, further description thereof with reference to specific examples is provided below.
Example 1
Under Windows 7 on an Intel Core i7 CPU with 4 GB RAM and a 64-bit operating system, MATLAB R2016a is used for density regulation of the Herz-Jesu images and of the Mechanical part 1 and Mechanical part 2 images captured with an IMX376 sensor (built into a mobile phone).
1. Herz-Jesu image
Fig. 4 shows the Herz-Jesu image and its reconstruction results; the resolution of the Herz-Jesu image is 3072×2048. Panel (a) lists one of the original images; panel (b) shows the reconstruction without density regulation; panels (c) and (d) show, for density reduction with the regulation method of the invention, the reconstructions with cell side lengths of 5 and 15 (oc_dis = 5 and 15); and panels (e) and (f) show, for density increase, the reconstructions with interpolation step values of 1 and 3. In the enlarged partial views, comparing panels (e) and (f) with panel (b) shows that the point cloud density is significantly increased. Tables 1 and 2 list the number of reconstructed points and the average point spacing for some of the density settings. From the results in fig. 4 and Table 1: the larger the set oc_dis (cell side length), the smaller the density of the reconstructed point cloud. From the results in fig. 4 and Table 2: the smaller the interpolation step value, the greater the density of the reconstructed point cloud.
TABLE 1 Herz-Jesu image point cloud density reduction

  oc_dis                           -              5              10             15
  Number of point clouds           51687          37227          21617          11393
  Average distance of point cloud  1.9793×10^-4   2.7002×10^-4   5.2057×10^-4   9.4390×10^-4

  Note: - means no density regulation was performed

TABLE 2 Herz-Jesu image point cloud density increase

  Interpolation step value         1              2              3
  Number of point clouds           109313         87510          80957
  Average distance of point cloud  7.4403×10^-5   1.0089×10^-4   1.0298×10^-4
2. Mechanical part 1 image
Zhang's camera calibration method is selected to calibrate the IMX376 sensor, giving the camera intrinsic parameters $a_u = 3491.93$, $s = 0$, $a_v = 3493.764222$, $u_0 = 2251.665491$, $v_0 = 1735.773362$, where $a_u$ and $a_v$ are the scaling factors of the image in the x and y directions, s is the skew, and $(u_0, v_0)$ is the position at which the optical axis intersects the image plane. Fig. 5 shows the reconstruction results for Mechanical part 1; the resolution of the Mechanical part 1 image is 4608×3456. Panel (a) lists one of the original images; panel (b) shows the reconstruction without density regulation; panels (c) and (d) show, for density reduction with the regulation method of the invention, the reconstructions with oc_dis = 5 and 15; and panels (e) and (f) show, for density increase, the reconstructions with interpolation step values of 2 and 4. Comparing panels (e) and (f) with panel (b) shows a significant increase in point cloud density. Tables 3 and 4 list the number of reconstructed points and the average point spacing for some of the density settings. From the results in fig. 5 and Table 3: the larger the set oc_dis, the smaller the density of the reconstructed point cloud. From the results in fig. 5 and Table 4: the smaller the interpolation step value, the greater the density of the reconstructed point cloud.
TABLE 3 Mechanical part 1 image point cloud density reduction

  oc_dis                           -              5              10             15
  Number of point clouds           64859          46835          28121          14559
  Average distance of point cloud  9.8295×10^-4   1.4114×10^-3   2.2402×10^-3   4.7124×10^-3

  Note: - means no density regulation was performed
TABLE 4 Mechanical part 1 image point cloud density increase
[Table 4 data rendered as images in the original: Figure BDA0002301306790000091, Figure BDA0002301306790000101]
3. Mechanical part 2 image
FIG. 6 shows the reconstruction results for Mechanical part 2; the resolution of the Mechanical part 2 images is 4608 × 3456. Image (a) shows one of the original images; image (b) shows the reconstruction result without density regulation; images (c) and (d) show the reconstruction results of the regulation method of the present invention with oc_dis set to 5 and 15 respectively when reducing density; and images (e) and (f) show the reconstruction results with interpolation step values of 2 and 4 respectively when increasing density. Comparing images (e) and (f) with image (b) shows a significant increase in point cloud density. Tables 5 and 6 list the number of reconstructed point clouds and the average point cloud spacing for several of the density settings. The results in FIG. 6 and Table 5 show that the larger the set oc_dis, the lower the density of the reconstructed image point cloud; the results in FIG. 6 and Table 6 show that the smaller the interpolation step value, the higher the density of the reconstructed image point cloud.
TABLE 5 Mechanical part 2 image point cloud density reduction
oc_dis                            –            10           15           20
Number of point clouds            31888        15149        8647         5412
Average distance of point cloud   5.8150×10⁻³  1.0184×10⁻²  1.2836×10⁻²  3.8142×10⁻²
Note: – means no density regulation was performed
TABLE 6 Mechanical part 2 image point cloud density increase
Interpolation step value          2            3            4
Number of point clouds            54185        50149        47935
Average distance of point cloud   2.6411×10⁻³  3.2228×10⁻³  5.6875×10⁻³
In conclusion, the regulation method disclosed by the present invention controls the density of the three-dimensional image point cloud by regulating the density of the two-dimensional feature points, so that the difference between the image point cloud density and the laser point cloud density becomes smaller, facilitating the registration and fusion of the two. Starting from the density regulation of two-dimensional feature points, the method regulates the density of the point cloud generated from two images by the motion recovery structure method: it can reduce the point cloud density, thereby simplifying the point cloud data, or increase it, thereby supplementing detail information. The invention controls the density of the point cloud generated from images by adjusting the number of feature points in each cell, the cell size, or the interpolation step length, and thereby achieves a good regulation effect.
Although the present invention has been described above in connection with exemplary embodiments, it will be apparent to those skilled in the art that various modifications and changes may be made to the exemplary embodiments of the present invention without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. A density control method for generating a point cloud based on a motion recovery structure is characterized by comprising the following steps:
generating a point cloud from an image sequence, and comparing the density of the generated point cloud with that of a required target point cloud to determine whether the regulation direction is point cloud density reduction or point cloud density increase, wherein the image sequence comprises a first image and a second image, and the first image and the second image are used to constrain the position of a target point;
extracting feature points of a first image, setting the size of a cell according to the spatial mapping relation between the first image feature points and a point cloud generated by the first image, and carrying out mesh division on a first image feature point area according to the set cell size, wherein the first image feature point area comprises a plurality of cells;
the point cloud density is manipulated to achieve a decrease or increase in the point cloud density, wherein,
the conditioning of the point cloud density to achieve point cloud density reduction comprises:
calculating, for each cell, the distance between each first image feature point in the cell and the center point of the cell, selecting the nearest feature point, i.e. the feature point closest to the cell center, and using that nearest feature point to represent all feature points of the cell, thereby reducing the feature point density;
acquiring points matched with the nearest characteristic points in each cell in the second image, wherein the nearest characteristic points in each cell of the first image and the points matched with the nearest characteristic points in the second image form a first matching point pair;
generating an image point cloud according to the first matching point pair;
the regulating the point cloud density to achieve the point cloud density increase comprises:
respectively carrying out curve fitting on the characteristic points in each cell, and carrying out interpolation processing on the fitted curves to realize the increase of the density of the characteristic points;
acquiring points matched with the feature points obtained after interpolation processing is carried out on each cell in the second image, wherein the feature points obtained after interpolation processing in each cell and the matching points of the feature points in the second image form second matching point pairs;
and generating an image point cloud according to the second matching point pair.
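The density-reduction branch of claim 1 (grid the feature-point region, keep only the point nearest each cell center) can be sketched as follows. This is a minimal illustration under assumed names and data layout, not the patented implementation; the KLT matching and point cloud generation steps are omitted:

```python
import numpy as np

def reduce_density(points, cell):
    """Keep, in every grid cell, only the feature point closest to the
    cell center -- the density-reduction step described in claim 1.

    `points` is an (n, 2) array of 2-D feature points; `cell` is the
    cell side length (an assumed parameterization for illustration).
    """
    pts = np.asarray(points, dtype=float)
    origin = pts.min(axis=0)
    # integer cell index of each feature point
    idx = np.floor((pts - origin) / cell).astype(int)
    kept = {}
    for p, (i, j) in zip(pts, idx):
        center = origin + (np.array([i, j]) + 0.5) * cell
        d = np.hypot(*(p - center))
        # retain the point with the smallest distance to the cell center
        if (i, j) not in kept or d < kept[(i, j)][0]:
            kept[(i, j)] = (d, p)
    return np.array([p for _, p in kept.values()])
```

Enlarging `cell` merges more feature points into each cell and so removes more of them, consistent with the experimental trend that a larger oc_dis gives a sparser reconstructed point cloud.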
2. The method as claimed in claim 1, wherein reducing the density of the point cloud further comprises adjusting the side length of the cells to achieve the reduction.
3. The method of claim 1, wherein the curve fitting comprises a least squares based curve fitting of the feature points in each cell.
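The least-squares fit followed by resampling, as in claim 3, can be sketched per cell with `numpy.polyfit`. The polynomial degree, the y-as-a-function-of-x parameterization, and the function name are assumptions made for illustration:

```python
import numpy as np

def densify_cell(points, step, deg=2):
    """Fit y = f(x) to one cell's feature points by least squares and
    resample the fitted curve at spacing `step`, increasing the
    feature-point density as in the density-increase branch of claim 1.
    """
    pts = np.asarray(points, dtype=float)
    deg = min(deg, len(pts) - 1)  # avoid an underdetermined fit
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], deg)
    # resample the fitted polynomial at the interpolation step length
    xs = np.arange(pts[:, 0].min(), pts[:, 0].max() + step / 2, step)
    return np.column_stack([xs, np.polyval(coeffs, xs)])
```

A smaller `step` yields more interpolated points, matching the observation above that smaller interpolation step values give denser reconstructed point clouds.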
4. The method of claim 1, wherein the obtaining points in the second image that match nearest neighbor feature points in each cell comprises: the KLT feature point tracking algorithm is used to obtain points in the second image that match the nearest feature points of each cell.
5. The method as claimed in claim 1, wherein the obtaining of the points in the second image that match the feature points obtained by interpolation in each cell comprises: using a KLT feature point tracking algorithm to obtain the points in the second image that match the feature points obtained after interpolation processing in each cell.
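Claims 4 and 5 rely on KLT tracking to find the matching points in the second image. A full pyramidal KLT tracker is beyond a short example, but its core, a single-window Lucas-Kanade least-squares step that estimates a local translation, can be sketched in plain NumPy (the function name and window size are illustrative, not from the patent):

```python
import numpy as np

def lk_translation(img1, img2, center, win=9):
    """One Lucas-Kanade least-squares step: estimate the translation
    (dx, dy) of the window of side `win` around `center` = (x, y) from
    img1 to img2. This is the core of a KLT tracker, without the
    pyramid or iterative refinement of a production implementation.
    """
    x, y = center
    r = win // 2
    p1 = img1[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    p2 = img2[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    Iy, Ix = np.gradient(p1)   # spatial gradients of the first patch
    It = p2 - p1               # temporal difference between the patches
    # optical-flow constraint Ix*dx + Iy*dy = -It, solved in the
    # least-squares sense over all pixels of the window
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(d[0]), float(d[1])
```

Applied at each retained (or interpolated) feature point, such a tracker yields the matching point in the second image, and the pair forms one of the matching point pairs used to generate the image point cloud.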
6. The method of claim 1, wherein the feature points of the first image are strong angular points, and extracting the first image feature points comprises:
filtering all pixels of the first image with horizontal and vertical difference operators to obtain Ix and Iy, and obtaining a 2 × 2 matrix from Ix and Iy, wherein Ix and Iy are the partial derivatives of the grayscale image in the x and y directions respectively, and the 2 × 2 matrix is:
[ Ix²     Ix·Iy ]
[ Ix·Iy   Iy²   ]
the obtained 2 x 2 order matrix is subjected to gaussian smoothing filtering to obtain a matrix M, wherein,
M = G(σ) ⊗ [ Ix²     Ix·Iy ]
           [ Ix·Iy   Iy²   ]
(where G(σ) denotes a Gaussian smoothing kernel and ⊗ denotes convolution)
solving for the eigenvalues λ1 and λ2 of the matrix M, setting a threshold TC for extracting strong angular points and a threshold TD for the distance between adjacent strong angular points, and judging whether a pixel point (x, y) is a strong angular point, wherein if λ1 ≥ λ2 and λ2 ≥ TC, or λ1 ≥ λ2 and λ2 ≥ TD, the pixel point (x, y) is determined to be a strong angular point, where (x, y) are the pixel coordinates in the two-dimensional coordinate system, and the thresholds TC and TD are given values.
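The extraction in claim 6 is essentially a minimum-eigenvalue (Shi–Tomasi-style) corner test. The sketch below follows that recipe in plain NumPy; the Gaussian kernel width and threshold value are illustrative, and the adjacent-corner distance test (threshold TD) is omitted for brevity:

```python
import numpy as np

def strong_corners(img, sigma=1.0, t_c=0.02):
    """Pixels whose smaller structure-tensor eigenvalue lambda2 exceeds
    the threshold t_c (playing the role of T_C in claim 6). The
    adjacent-corner distance threshold T_D is not implemented here.
    """
    I = img.astype(float)
    Iy, Ix = np.gradient(I)  # vertical / horizontal difference gradients

    def smooth(a):
        # separable Gaussian smoothing with a 7-tap kernel
        k = np.exp(-0.5 * (np.arange(-3, 4) / sigma) ** 2)
        k /= k.sum()
        a = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 0, a)
        return np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, a)

    # Gaussian-smoothed entries of the 2x2 matrix M = [[A, C], [C, B]]
    A, B, C = smooth(Ix * Ix), smooth(Iy * Iy), smooth(Ix * Iy)
    # closed-form smaller eigenvalue of a symmetric 2x2 matrix
    tr, det = A + B, A * B - C * C
    lam2 = tr / 2 - np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    return np.argwhere(lam2 >= t_c)  # (row, col) of strong corner pixels
```

On a synthetic image of a bright square, this response fires at the square's corners while staying near zero on flat regions and straight edges, which is exactly the behavior the λ2 threshold is meant to produce.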
7. The density control method for generating a point cloud based on a motion recovery structure of claim 1, wherein the first image feature point region is the region bounded by the maximum and minimum feature point coordinates in the X and Y directions of the two-dimensional coordinate system.
8. A three-dimensional reconstruction method, characterized in that the three-dimensional reconstruction method comprises the density control method of generating a point cloud based on a motion recovery structure according to any one of claims 1 to 7, and the density control method is used for performing three-dimensional reconstruction on the point cloud.
CN201911222767.8A 2019-12-03 2019-12-03 Density regulation and control method for generating point cloud based on motion recovery structure Active CN110956700B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911222767.8A CN110956700B (en) 2019-12-03 2019-12-03 Density regulation and control method for generating point cloud based on motion recovery structure

Publications (2)

Publication Number Publication Date
CN110956700A true CN110956700A (en) 2020-04-03
CN110956700B CN110956700B (en) 2022-03-22

Family

ID=69979582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911222767.8A Active CN110956700B (en) 2019-12-03 2019-12-03 Density regulation and control method for generating point cloud based on motion recovery structure

Country Status (1)

Country Link
CN (1) CN110956700B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103426165A (en) * 2013-06-28 2013-12-04 吴立新 Precise registration method of ground laser-point clouds and unmanned aerial vehicle image reconstruction point clouds
CN105654548A (en) * 2015-12-24 2016-06-08 华中科技大学 Multi-starting-point incremental three-dimensional reconstruction method based on large-scale disordered images
US20170046833A1 (en) * 2015-08-10 2017-02-16 The Board Of Trustees Of The Leland Stanford Junior University 3D Reconstruction and Registration of Endoscopic Data
CN108010116A (en) * 2017-11-30 2018-05-08 西南科技大学 Point cloud feature point detecting method and point cloud feature extracting method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
PENGCHANGZHANG等: "A line scan camera-based structure from motion for high-resolution 3D reconstruction", 《JOURNAL OF CULTURAL HERITAGE》 *
ZENG, Lulu et al.: "Research on a hole-repairing algorithm for 3D point clouds based on structure from motion", 《Acta Optica Sinica》 *
LI, Feng et al.: "3D model reconstruction combining terrestrial laser scanning and UAV images", 《Geomatics & Spatial Information Technology》 *
WANG, Xin et al.: "Design of a binocular vision 3D reconstruction system based on structure from motion", 《Optics and Precision Engineering》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111781113A (en) * 2020-07-08 2020-10-16 湖南九九智能环保股份有限公司 Dust grid positioning method and dust grid monitoring method
CN111781113B (en) * 2020-07-08 2021-03-09 湖南九九智能环保股份有限公司 Dust grid positioning method and dust grid monitoring method
WO2022126427A1 (en) * 2020-12-16 2022-06-23 深圳市大疆创新科技有限公司 Point cloud processing method, point cloud processing apparatus, mobile platform, and computer storage medium

Also Published As

Publication number Publication date
CN110956700B (en) 2022-03-22

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant