Disclosure of Invention
The invention provides a line laser-based three-dimensional full-view measurement method for precision parts, which can be applied to high-precision three-dimensional measurement and overcomes the poor point cloud quality of traditional optical modeling.
The hardware system of the three-dimensional full-view measurement method comprises the following components:
a line laser profile scanner for acquiring the profile of the part;
a computer for precision control, profile acquisition and data processing;
a grating ruler and an encoder for synchronously triggering the acquisition signals;
a multi-axis scanning platform for fixing the scanner probe and performing motion control;
a standard gauge block and a calibration ball for calibration.
The invention designs a line laser-based three-dimensional full-view measurement method for precision parts, characterized by comprising the following steps:
Step 1: start the line laser sensor used to acquire the profile of the precision part and the multi-axis scanning platform used for motion control; acquire the measured object once with the line laser sensor to obtain its cross-sectional profile P; translate-scan the measured object, i.e. acquire the profile multiple times along one axis, to obtain batch profile data B;
Step 2: perform tilt correction on the cross-sectional profile P of step 1 by least squares; let the measured cross-sectional profile be P(x_i, y_i), i = 1, 2, …, n, fit the tilt line y = ax + b by least squares, and obtain the compensated cross-sectional profile according to formula (1):
P′(x_i, y_i − ax_i − b)  formula (1)
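The tilt correction of step 2 can be sketched as follows; this is a minimal illustration (the helper name and the synthetic profile are assumptions, not from the patent), fitting y = ax + b by least squares and subtracting it per formula (1):

```python
import numpy as np

def tilt_correct(profile):
    """Least-squares tilt correction per formula (1):
    fit y = a*x + b, then return P'(x_i, y_i - a*x_i - b)."""
    x, y = profile[:, 0], profile[:, 1]
    a, b = np.polyfit(x, y, 1)            # degree-1 least-squares fit
    return np.column_stack([x, y - a * x - b]), a, b

# a synthetic profile lying on y = 0.02*x + 1.0 is flattened to y = 0
xs = np.linspace(0.0, 16.0, 50)
prof = np.column_stack([xs, 0.02 * xs + 1.0])
corrected, a, b = tilt_correct(prof)
```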
Step 3: calibrate the x, y and z axial directions of the sensor coordinate system with a step gauge block; let the standard profile of the step gauge block be S, and obtain the profile measurement error e_p as the minimum over different compensation scales c according to formula (2);
e_p = min Σ(cP′ − S)  formula (2)
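A sketch of the compensation-scale search of step 3, under the assumption that the minimisation in formula (2) is meant in the least-squares sense, which admits the closed form c = Σ(P′·S)/Σ(P′²); the function name and the sample step heights are illustrative:

```python
import numpy as np

def scale_compensation(p_corr, s_ref):
    """Closed-form least-squares scale: minimise sum((c*P' - S)^2)."""
    c = np.dot(p_corr, s_ref) / np.dot(p_corr, p_corr)
    e_p = np.sum((c * p_corr - s_ref) ** 2)
    return c, e_p

# illustrative cumulative step heights of a 1.08/1.5/2 mm gauge stack
s = np.array([1.08, 2.58, 4.58])
p_meas = s / 0.993          # synthetic profile off by a known scale
c, e = scale_compensation(p_meas, s)
```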
Step 4: the foregoing steps yield calibrated profile data cP′; determine its world coordinate system and units from the sensor coordinate system of step 3 and convert it into three-dimensional point cloud data: the x-axis coordinate of each point is fixed, the z-axis coordinate is determined by the step gauge block size of step 3, and the y-axis coordinate is determined by the acquisition frequency and step length of the line laser sensor of step 1;
Step 5: bilaterally filter the three-dimensional point cloud according to formula (3) to remove point cloud noise, where the computed d is the adjustment distance of the point to be processed along its normal vector; the required weights W_c and W_s are calculated according to formula (4). Let p be a point of P, N(p) the k-neighborhood of point p, |p − p_i| the modulus of the vector from point p_i to point p, n_p the normal vector at point p, and ⟨n_p, p − p_i⟩ the inner product of n_p and the vector from p_i to p:
d = Σ_{p_i∈N(p)} W_c(|p − p_i|) W_s(⟨n_p, p − p_i⟩) ⟨n_p, p − p_i⟩ / Σ_{p_i∈N(p)} W_c(|p − p_i|) W_s(⟨n_p, p − p_i⟩)  formula (3)
W_c(x) = exp(−x²/2σ_c²),  W_s(x) = exp(−x²/2σ_s²)  formula (4)
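The bilateral filtering of step 5 can be illustrated as below, assuming Gaussian weights W_c and W_s (a common choice; the patent's formulas (3) and (4) were supplied as figures and are not reproduced verbatim):

```python
import numpy as np

def bilateral_filter_point(p, neighbors, n, sigma_c=1.0, sigma_s=1.0):
    v = p - neighbors                      # vectors from each p_i to p
    dist = np.linalg.norm(v, axis=1)       # |p - p_i|
    h = v @ n                              # <n_p, p - p_i>
    w_c = np.exp(-dist ** 2 / (2 * sigma_c ** 2))   # spatial weight W_c
    w_s = np.exp(-h ** 2 / (2 * sigma_s ** 2))      # feature weight W_s
    d = np.sum(w_c * w_s * h) / np.sum(w_c * w_s)   # adjustment distance d
    return p - d * n                       # slide p along its normal

# a point hovering 0.5 above a planar k-neighborhood is pulled onto the plane
nbrs = np.array([[x, y, 0.0] for x in (-1.0, 0.0, 1.0) for y in (-1.0, 0.0, 1.0)])
p_new = bilateral_filter_point(np.array([0.0, 0.0, 0.5]), nbrs,
                               np.array([0.0, 0.0, 1.0]))
```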
Step 6: calibrate the rotating shaft of the multi-axis scanning platform with the standard sphere; place a calibration ball of radius R_B on the rotary table of the scanning platform, obtain point cloud data of part of the spherical surface by the translation scanning of step 1 to obtain the spherical surface coordinates, and construct a nonlinear equation system from them and the sphere center (x_n, y_n, z_n) according to formula (5);
(x − x_n)² + (y − y_n)² + (z − z_n)² = R_B²  formula (5)
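Solving the nonlinear system of formula (5) is commonly done by linearising the sphere equation; the sketch below (function name and synthetic data are illustrative, not from the patent) recovers the center and radius by linear least squares:

```python
import numpy as np

def fit_sphere(points):
    """Linearised least-squares sphere fit: expand
    (x-a)^2 + (y-b)^2 + (z-c)^2 = R^2 into
    2ax + 2by + 2cz + k = x^2 + y^2 + z^2, with k = R^2 - a^2 - b^2 - c^2."""
    A = np.column_stack([2.0 * points, np.ones(len(points))])
    f = np.sum(points ** 2, axis=1)
    (a, b, c, k), *_ = np.linalg.lstsq(A, f, rcond=None)
    return np.array([a, b, c]), np.sqrt(k + a * a + b * b + c * c)

# synthetic partial sphere: radius 10 about (1, 2, 3)
rng = np.random.default_rng(0)
u_ang = rng.uniform(0.0, np.pi / 2, 200)
v_ang = rng.uniform(0.0, np.pi, 200)
pts = np.array([1.0, 2.0, 3.0]) + 10.0 * np.column_stack(
    [np.sin(u_ang) * np.cos(v_ang), np.sin(u_ang) * np.sin(v_ang), np.cos(u_ang)])
center, radius = fit_sphere(pts)
```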
Step 7: solve for the sphere center coordinates O_n(x_n, y_n, z_n) of the spherical surface of step 6; drive the rotary table of step 6 to rotate, measure the three-dimensional data of the spherical surface of step 6 at N (N ≥ 4) positions respectively, obtain N groups of sphere center coordinate data using step 6, and construct a linear equation system of N equations from the sphere center coordinates according to formula (6);
Ax_n + By_n + Cz_n + D = 0  formula (6)
Step 8: solve the linear equation system of step 7 and fit the equation P_B of the plane containing the rotation trajectory of the sphere centers O_n of step 6; calculate the normal vector u of P_B according to formula (7); substitute the sphere center coordinates O_n into formula (8) to obtain the intersection point O_n′(x_n′, y_n′, z_n′) of the rotating shaft with the plane containing the sphere centers of step 6;
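Steps 7 and 8 can be sketched as a plane fit followed by a circle fit; the implementation below is one possible reading (SVD plane fit, linearised in-plane circle fit), not the patent's formulas (6)-(8) verbatim, and its names and test data are assumptions:

```python
import numpy as np

def rotation_axis_from_centers(centers):
    """Fit the plane of the sphere-center trajectory (SVD; the normal u is
    the right singular vector of the smallest singular value), then fit the
    circle the centers trace in that plane; the circle center is where the
    rotation axis pierces the plane (O_n')."""
    m = centers.mean(axis=0)
    _, _, vt = np.linalg.svd(centers - m)
    u = vt[-1]                                    # plane normal = axis direction
    e1, e2 = vt[0], vt[1]                         # in-plane basis
    q = np.column_stack([(centers - m) @ e1, (centers - m) @ e2])
    # linearised circle fit: 2*cx*x + 2*cy*y + k = x^2 + y^2
    A = np.column_stack([2.0 * q, np.ones(len(q))])
    (cx, cy, _), *_ = np.linalg.lstsq(A, np.sum(q ** 2, axis=1), rcond=None)
    return u, m + cx * e1 + cy * e2               # u, O_n'

# sphere centers on a radius-5 circle in the plane z = 2, axis = z
th = np.deg2rad(np.arange(0, 360, 60))
ctrs = np.column_stack([5.0 * np.cos(th), 5.0 * np.sin(th), np.full(6, 2.0)])
u, pivot = rotation_axis_from_centers(ctrs)
```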
Step 9: let R_u be the rotation matrix of the point cloud to be converted, T(x_n′, y_n′, z_n′) the translation matrix of the point cloud to be converted, and θ the rotation angle; combining the u and O_n′ calculated in step 8, calculate the coordinates of each point cloud at the different viewing angles in the corresponding world coordinate system using formulas (9) and (10);
Step 10: convert the point clouds of all parts of step 9 into the same coordinate system in the scanning order of the rotating shaft to complete coarse registration of the point clouds, then finely register the coarsely registered point clouds by iterative closest point search; let the two point clouds to be registered be P_i, Q_i, define the objective function ε(a) according to formula (11), and, taking the rotation-translation matrices R_u, T(x_n′, y_n′, z_n′) of step 9 as the starting value, obtain rigid transformation matrices R, T meeting the threshold by a traversal search method, where Φ(p_i) is the corresponding point of point p_i of P_i on the fitted surface;
Step 11: substitute the point cloud data P_i to be transformed of step 10 and the rigid transformation (rotation-translation) matrices R, T of step 10 into formula (12) to obtain the registered point cloud Q_i of step 10;
Q_i = RP_i + T  formula (12)
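Steps 9-11 amount to a rotation about the calibrated axis u through the pivot O_n′ followed by the rigid map Q = RP + T; a minimal sketch using the Rodrigues formula (an assumption, since formulas (9) and (10) were supplied as figures):

```python
import numpy as np

def axis_rotation(u, theta):
    """Rodrigues rotation matrix for angle theta about unit axis u."""
    u = u / np.linalg.norm(u)
    K = np.array([[0.0, -u[2], u[1]],
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def to_world(points, u, pivot, theta):
    """Map a view scanned at turntable angle theta into the world frame:
    q = R(p - O') + O', i.e. Q = R*P + T with T = O' - R*O'."""
    R = axis_rotation(u, theta)
    T = pivot - R @ pivot
    return points @ R.T + T

# quarter turn about the z axis through pivot (1, 0, 0)
q = to_world(np.array([[2.0, 0.0, 0.0]]),
             np.array([0.0, 0.0, 1.0]),
             np.array([1.0, 0.0, 0.0]),
             np.pi / 2)
```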
Step 12: repeat steps 10 and 11 to complete fine registration of the point clouds of all viewing angles to be converted, obtaining the three-dimensional full-view point cloud data of the measured object;
Step 13: down-sample the point cloud by building a voxel grid, simplifying the point cloud data of step 12 according to formula (13), where D is the side length of the voxel grid, α a scale factor, N the number of points of the point cloud of step 12, and (D_x, D_y, D_z) the extents of the point cloud data of step 12 along the x, y and z coordinate axes, obtained from the maximum (x_max, y_max, z_max) and minimum (x_min, y_min, z_min) coordinate values according to formula (14);
D_x = x_max − x_min,  D_y = y_max − y_min,  D_z = z_max − z_min  formula (14)
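The voxel-grid simplification of step 13 can be sketched as follows, assuming the common choice D = α·(D_x·D_y·D_z/N)^(1/3) for formula (13) (the patent's formula was supplied as a figure); the function name and test cloud are illustrative:

```python
import numpy as np

def voxel_downsample(points, alpha=1.0):
    mins = points.min(axis=0)
    extents = points.max(axis=0) - mins          # (Dx, Dy, Dz), formula (14)
    d = alpha * np.cbrt(np.prod(extents) / len(points))  # assumed formula (13)
    idx = np.floor((points - mins) / d).astype(int)      # voxel index per point
    _, inv = np.unique(idx, axis=0, return_inverse=True)
    inv = inv.ravel()                            # guard against (n,1) inverse shapes
    counts = np.bincount(inv)
    # keep one centroid per occupied voxel
    return np.column_stack([np.bincount(inv, weights=points[:, k]) / counts
                            for k in range(3)])

rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 1.0, size=(1000, 3))
reduced = voxel_downsample(cloud, alpha=2.0)
```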
Step 14: convert the simplified point cloud data of step 13 into polygons for surface reconstruction by constructing a neural network: build the network kernel function according to formula (15), determine the radial action range σ and the action centers P_i, let ω_i be the network weights to be trained, and interpolate the simplified point cloud P of step 13 according to formula (16) to obtain the full-view high-precision three-dimensional reconstruction f(P) of the measured object;
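The radial-basis network interpolation of step 14 can be sketched as below, assuming a Gaussian kernel and a direct solve of the interpolation system for the weights ω_i (the patent's formulas (15) and (16) were supplied as figures and are not reproduced verbatim; all names and data here are illustrative):

```python
import numpy as np

def rbf_fit(centers, values, sigma):
    """Solve for weights w_i of f(p) = sum_i w_i * phi(|p - P_i|)
    with Gaussian kernel phi_ij = exp(-|P_i - P_j|^2 / (2 sigma^2))."""
    d2 = np.sum((centers[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    phi = np.exp(-d2 / (2 * sigma ** 2))
    return np.linalg.solve(phi, values)

def rbf_eval(p, centers, w, sigma):
    """Evaluate the interpolant f at a query point p."""
    d2 = np.sum((p - centers) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ w

rng = np.random.default_rng(2)
ctrs = rng.normal(size=(5, 3))       # synthetic action centers P_i
vals = rng.normal(size=5)
w = rbf_fit(ctrs, vals, sigma=1.0)
```

By construction the interpolant reproduces the sampled values at the centers, which mirrors interpolating the simplified point cloud P of step 13.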
This completes the acquisition of the full three-dimensional information of the measured object: f(P) is the three-dimensional full-view information of the object to be measured. A flow chart of the three-dimensional full-view measurement method of the present invention is shown in figure 1.
The beneficial effects of the invention are as follows: the three-dimensional point cloud full-view measurement method introduced by the invention solves the problems of large point cloud noise and low matching precision for small complex workpieces, achieves high-precision three-dimensional full-view measurement of complete point cloud data of small workpieces without spraying developer, and avoids the shortcomings of the traditional structured-light three-dimensional measurement method.
Detailed Description
The laser triangulation method projects a laser beam onto the surface of the object to be measured with a laser and calculates the cross-sectional shape of the object from the change of the laser stripe captured by a camera.
By measurement, when the line laser sensor shoots in the experimental scene, the width of the acquired stripe (x axis) is 16 mm, the batch processing length (y axis) is 100 mm, the height difference range (z axis) is 16 mm, and the maximum acquisition frequency is 500 Hz.
The gauge block ladder is formed by stacking standard gauge blocks of thickness 1.08 mm, 1.5 mm and 2 mm into steps; the scanned point cloud data of the gauge block ladder is shown in figure 2. Edge data is removed to reduce error; the calibrated tilt compensation line is y = 1.167×10⁻⁵x + 1.078, and the compensation scale is c = 0.993.
A ceramic calibration ball of radius R_B = 10 mm is used to calibrate the rotating shaft; it is rotated clockwise by 60° each time, for 6 measurements in total, yielding point cloud data of 6 partial spheres. The coordinates of the 6 sphere centers are calculated, and the normal vector u of the rotating shaft and the equation P_B of the rotation plane are computed. Fig. 3 is a schematic diagram of the rotating shaft calibration method.
According to the rotating-shaft calibration result and the fine registration method, the 6 partial point clouds are coordinate-transformed by converting θ_i = i × 60° (i = 1, 2, …, 6) into rotation-translation matrices.
Surface reconstruction is performed on the point cloud data through the constructed neural network, finally yielding the complete three-dimensional full-view data of the workpiece; an effect diagram of the three-dimensional full-view measurement of the workpiece is shown in fig. 4.
The biggest difference between this method and existing three-dimensional reconstruction methods is the following: existing three-dimensional reconstruction methods suffer from insufficient reconstruction precision on small workpieces and high demands on surface material, leading to point cloud loss, high point cloud noise and low stitching precision, problems that can otherwise be avoided only by spraying developer. In the three-dimensional full-view measurement method designed by the invention, the extracted profile and the fused point cloud contain not only more detailed information but also, through the two calibration methods, the complete global information of the whole object, fundamentally solving the problems of the existing methods. The method designed by the invention can therefore solve the problem of high-precision three-dimensional full-view measurement of small workpieces.
In summary, the three-dimensional full-view measurement method of the present invention has the following advantages:
(1) The profile extraction and correction method is more accurate, so the three-dimensional full-view measurement method designed by the invention conforms better to the profile dimensions of the actual model.
(2) Owing to the dual registration of rotating-shaft calibration and iterative closest point, the three-dimensional reconstruction method designed by the invention achieves high-precision point cloud registration.
(3) Bilateral filtering and point cloud simplification solve the measurement of objects with different surface materials; no coloring materials such as sprayed developer are needed, so the measurement process is environmentally friendly and saves material cost.
The invention and its embodiments have been described above schematically and without limitation, and the figures show only one of its embodiments. Persons skilled in the art, informed by the teachings of the present invention, may adopt similar components or other arrangements of components, and may design similar technical solutions and embodiments, without departing from the spirit and scope of the invention.