CN113888693A - High-precision point cloud data reconstruction method - Google Patents

High-precision point cloud data reconstruction method Download PDF

Info

Publication number
CN113888693A
CN113888693A CN202110944202.1A CN202110944202A CN113888693A CN 113888693 A CN113888693 A CN 113888693A CN 202110944202 A CN202110944202 A CN 202110944202A CN 113888693 A CN113888693 A CN 113888693A
Authority
CN
China
Prior art keywords
point cloud
axis
cloud data
detection module
light detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110944202.1A
Other languages
Chinese (zh)
Inventor
方宇
宁业衍
杨蕴杰
杨皓
陶翰中
张汝枭
李皓宇
张伯强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai University of Engineering Science
Original Assignee
Shanghai University of Engineering Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai University of Engineering Science filed Critical Shanghai University of Engineering Science
Priority to CN202110944202.1A priority Critical patent/CN113888693A/en
Publication of CN113888693A publication Critical patent/CN113888693A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention belongs to the technical field of three-dimensional measurement, and discloses a high-precision point cloud data reconstruction method which comprises a detection platform, wherein a motion mechanism and a support are arranged on the detection platform, a linear structure light detection module for data acquisition is arranged on the support, a laser beam emitted by the linear structure light detection module vertically irradiates to an object to be detected, a sampling period is taken as a unit, according to a similar triangle principle, a Z-axis coordinate of a point to be detected is obtained by combining a measurement range of the linear structure light detection module in a Z-axis direction, then, a measurement range of the linear structure light detection module in an X-axis direction is combined, an equal difference series formula is utilized to obtain an X-axis coordinate of the point to be detected, then, according to physical parameters of the linear structure light detection module, the times of the sampling period and the movement of the motion mechanism, a Y-axis coordinate of the point to be detected is obtained, three-dimensional point cloud data is reconstructed, and finally, the reconstructed three-dimensional point cloud data is reconstructed according to a space included angle between a camera coordinate system and a world coordinate system of the linear structure light detection module and the motion mechanism And performing compensation adjustment according to the data.

Description

High-precision point cloud data reconstruction method
Technical Field
The invention relates to the technical field of three-dimensional measurement, in particular to a high-precision point cloud data reconstruction method.
Background
In recent years, with the development of sensor and computer technologies, the progress in the field of three-dimensional vision has advanced the object quality detection technology in the field of industrial products. The object quality detection is the most important part in the industrial production process, determines the quality of the whole product, and has higher requirements on the object detection technology along with the continuous improvement of the object precision and the processing automation degree.
The acquisition of object surface shape information data is the key of quality detection. The traditional manual detection method mainly adopts a contact type engineering measurement method, comprises a stylus type three-coordinate measuring instrument, a micrometer, a vernier caliper and the like, is suitable for acquiring data information of parts with regular size and few characteristic curved surfaces, and is characterized by accurate measurement repetition precision and high reliability; but the requirements on the use environment and the positioning and attitude determination are higher, the measurement points need to be manually selected in the measurement process, and the measurement efficiency is lower; in addition, the contact measurement requires a certain contact force, which may cause deformation of the surface of the object and affect the precision of the precision measurement of the soft surface. Modern industrial detection methods mainly adopt non-contact engineering measurement methods, including laser scanning technology, image measurement technology, ultrasonic measurement technology, stereoscopic vision measurement technology and projection grating measurement technology.
The non-contact engineering measurement method does not need to contact the surface of a measured object, avoids the damage of the surface of the object, is suitable for the detection of objects with soft surfaces and complex surfaces, can quickly acquire three-dimensional point cloud data information of the surface of the object, is applied to the engineering field, has higher flexibility and practicability, and is commonly used for non-contact engineering measurement due to the simple structure and easy realization of the direct laser triangulation distance measurement method.
The acquisition and fusion of the point cloud data are the core in non-contact engineering measurement. The main influencing factors of the detection performance and the precision of the industrial product object detection system are a hardware module and a measurement principle of the detection system, and the precision and the accuracy of point cloud data acquisition determine the detection precision and relate to the accuracy of object quality judgment.
The existing phase line structure light camera acquisition and fusion has the following problems:
(1) the acquired point cloud data is Z-axis depth information of an object, and the high-precision point cloud data cannot be reconstructed due to the fact that the motion speed of the electric sliding table, the sampling frequency of the line laser scanner and the sampling distance are deviated and cannot be effectively and correspondingly fused with the X-axis data and the Y-axis data.
(2) The traditional linear array structured light is divided into a horizontal mode and a rotating mode, and the modes of object fixing and linear array structured light camera moving are adopted. The horizontal scanning refers to that the line laser scanner moves above an object according to a fixed X-axis or Y-axis track road line to obtain point cloud information of the object; the rotation scanning means that the linear array structured light camera acquires three-dimensional point cloud information of an object according to a certain rotation angle, and because the linear array structured light camera needs to be moved in horizontal scanning and rotation scanning, a distance measurement error is inevitably generated in the process that surface information is fed back to the sensor, and the precision of point cloud data is influenced.
(3) In the process of acquiring the object surface information by the line structured light, system errors inevitably exist due to the imaging principle and the hardware structure of the line structured light, and the acquired position information is distorted, so that the accuracy of point cloud data is influenced.
Disclosure of Invention
The invention provides a high-precision point cloud data reconstruction method, which is characterized in that according to the laser triangulation principle, a linear array structure light detection module is adopted for fixing, and a to-be-detected object moves on a movement mechanism, so that a sensor can more stably receive the surface information of the object; establishing a mathematical model for data fusion of the linear structure optical camera and the motion mechanism, deducing the corresponding relation between the sampling frequency and the sampling interval of the motion mechanism and the linear structure optical camera, corresponding Y-axis information with X-axis and Z-axis information, completing high-precision point cloud data reconstruction, and eliminating data errors caused by the fact that the motion mechanism does not correspond to the sampling frequency and the sampling interval; the method has the advantages that the line structured light local coordinate system and the three-dimensional world coordinate system are established, distortion compensation is carried out on included angle errors existing in the data reconstruction model, robustness of point cloud data acquisition is improved, influences of scanning angles and scanning distances on the point cloud data are reduced, establishment of topological relations of the point cloud data of parts and subsequent three-dimensional reconstruction are facilitated, and requirements of part detection application in a three-dimensional scene are met.
The invention can be realized by the following technical scheme:
a high-precision point cloud data reconstruction method comprises a detection platform, wherein a movement mechanism and a support are arranged on the detection platform, the movement mechanism is used for driving an object to be detected to move along/around an X axis and a Y axis, a line structure light detection module is arranged on the support, a laser beam emitted by the line structure light detection module vertically irradiates towards the object to be detected to complete point cloud data collection of the object to be detected, a sampling period is taken as a unit, a Z axis coordinate of the point to be detected is obtained according to a similar triangle principle by combining a measurement range of the line structure light detection module in the Z axis direction, an X axis coordinate of the point to be detected is obtained by using an arithmetic progression formula by combining a measurement range of the line structure light detection module in the X axis direction, then a Y axis coordinate of the point to be detected is obtained according to physical parameters of the line structure light detection module and the times of the sampling period, and three-dimensional point cloud data is reconstructed, and finally, performing compensation adjustment on the reconstructed three-dimensional point cloud data according to a space included angle between a camera coordinate system of the line structure light detection module and the world geographic coordinates to obtain final reconstructed data.
Further, the limit of the moving distance of the moving mechanism driving the object to be measured to move along the Y axis is recorded as one scanning, the line-structured light detection module performs W times of sampling during one scanning, R times of scanning are required to be performed in total, data acquisition on the object to be measured is completed, and when the W times of sampling is performed in the first scanning, the coordinate information of the ith point to be measured is:
Figure BDA0003216203550000031
in the u-th scanning, the coordinate information of the ith point in the w-th sampling is as follows:
Figure BDA0003216203550000032
wherein D is3Indicating the measuring range of the line-structured light detecting module in the X-axis direction, n2Denotes the number of points to be measured, n, which are uniformly distributed in the measuring range in the X-axis direction1Representing the total number of points to be tested, t1Denotes the sampling period, v1Showing the motion speed of the motion mechanism along the Y axis, delta h showing the measurement range of the line structure light detection module in the Z axis direction, h3And h represents the measuring distance of the point to be measured of the linear structure light detection module in the measuring range of the Z-axis direction corresponding to the photosensitive element.
Further, the motion mechanism drives the object to be measured to perform one scan along the Y axis, then moves the distance D3 along the X axis, performs a second scan along the Y axis, and so on, performs R scans, and completes data acquisition of the object to be measured.
Further, in the present invention,
Figure BDA0003216203550000041
wherein h is1And f represents the focal length of the camera in the line structured light detection module.
Further, the camera coordinate system of the line-recording structure light detection module is O-XYZ, and the world coordinate system is O1-X1Y1Z1The space included angle between the two is thetaxyzProjecting them to XOY, XOZ and YOZ planes in turn, then X1The projection of the axis to the XOY plane forms an included angle thetaz,X1Axis and Z1The axis projection forms an included angle theta to the XOZ planey,Z1The axis projection forms an included angle theta to the YOZ planexThe reconstructed three-dimensional point cloud data is compensated and adjusted by adopting the following formula,
Figure BDA0003216203550000042
further, an object with steps and known size is placed on the moving mechanism, point cloud data acquisition is carried out by using the line structured light detection module, and then the included angle theta is calculated according to the result of point cloud data acquisition and the actual size of the objectx、θy、θz
The beneficial technical effects of the invention are as follows:
(1) according to the laser triangulation principle, a mathematical model for reconstructing data of the linear structure light detection module and the motion mechanism is established, and according to the corresponding relation between the motion mechanism parameters and the sampling frequency and the sampling interval of the linear structure light camera, Y-axis information corresponds to X-axis and Z-axis information, so that high-precision point cloud information reconstruction is completed.
(2) Distortion compensation is carried out on included angle errors existing in the data reconstruction model process, distortion interference is reduced, point cloud precision is improved, robustness of point cloud data acquisition is improved, influences of scanning angles and scanning distances on the point cloud data are eliminated, topological relation establishment and subsequent three-dimensional reconstruction of part point cloud data are facilitated, and requirements of part detection application in a three-dimensional scene are met.
(3) An automatic detection platform is built, high-precision three-dimensional detection of objects assisted by external positioning is not relied on, a motion mechanism is integrally installed above a vibration isolation damping platform, and vibration deviation in the process of driving the objects to scan by the motion mechanism is reduced to the maximum extent. The linear array structure optical camera is used for collecting data, an object to be detected does not need to be moved, the motion mechanism plans a track route of the object to be detected, manual detection is replaced, measuring errors caused by manual operation are avoided, object surface damage is avoided, and the automation degree and the detection precision degree of detection are improved.
(4) By adopting the linear array structured light fixation and the mode that the object moves on the motion mechanism, the sensor can more stably receive the surface information of the object, reduce the distance measurement error and increase the precision of point cloud collection, and compared with the existing scanning mode, the method has more measurement advantages.
Drawings
FIG. 1 is a schematic overall flow diagram of the present invention;
FIG. 2 is a schematic structural diagram of an object automated online inspection platform of the present invention, wherein 1-1 is a profile support, 1-2 is a line structured light camera, 1-3 is a placed object to be inspected, 1-4 is a stepping motor, 1-5 is an electric sliding table, and 1-6 is a vibration isolation platform;
FIG. 3 is a schematic diagram of the principle of triangulation distance measurement of the present invention, in which 3-1 is a line structured light camera, 3-2 is a photosensitive element, and 3-3 is a photosensitive lens;
FIG. 4 is an acquisition schematic diagram of a line structured light camera according to the present invention, wherein 4-1 is the line structured light camera, 4-2 is the photosensitive element, 4-3 is the photosensitive lens, and 4-4 is the surface of the to-be-measured component;
FIG. 5 is a three-dimensional schematic of data fusion according to the present invention;
FIG. 6 is a two-dimensional schematic of data fusion according to the present invention;
FIG. 7 is a schematic diagram of the target path planning of the present invention, wherein 7-1 is the surface of the object to be measured;
FIG. 8 is a schematic diagram of a local coordinate system and a compensated coordinate system according to the present invention;
FIG. 9 is a schematic view of a local coordinate system angle projection according to the present invention.
FIG. 10 is a schematic diagram of the variation of the distorted position information of the object according to the present invention;
FIG. 11 shows the angle θ of the line structured light camera according to the present inventionxA schematic diagram;
FIG. 12 shows the angle θ of the line structured light camera according to the present inventionyA schematic diagram;
FIG. 13 is a diagram of the angle θ of the line structured light camera according to the present inventionzA schematic diagram;
FIG. 14 shows the angle θ of the present inventionxCalculating a schematic diagram;
FIG. 15 shows the angle θ of the present inventionyCalculating a schematic diagram;
FIG. 16 shows the angle θ of the present inventionzCalculating a schematic diagram;
FIG. 17 shows the original scanning result of the dual-layer hole member of the present invention;
FIG. 18 shows the angle θ for a dual layer orifice member of the present inventionxCalculating a schematic diagram;
FIG. 19 shows the angle θ for a dual layer orifice member of the present inventionyAnd thetazCalculating a schematic diagram;
FIG. 20 is a schematic diagram showing a comparison between before and after distortion compensation of a dual layer orifice member according to the present invention;
FIG. 21 is a two-dimensional schematic of data reconstruction for a sample meter, a dual-layer orifice and a grid assembly of the present invention;
FIG. 22 is a schematic representation of the point cloud registration results of the sample meter, the dual layer orifice and the grid assembly of the present invention;
FIG. 23 is a statistical view of deviation values for a pair of meter points according to the present invention;
FIG. 24 is a statistical view of deviation values for a dual-layer hole pattern of the present invention;
FIG. 25 is a statistical view of offset values for gate device points according to the present invention.
Detailed Description
The following detailed description of the preferred embodiments will be made with reference to the accompanying drawings.
As shown in figure 1, the invention provides a high-precision point cloud data reconstruction method based on a fixed line structured light camera, as shown in figure 2, a line structured light detection module 1-2 is fixed above a motion mechanism 1-5, an object to be detected 1-3 is arranged on the surface of the motion mechanism 1-5, a control module drives the motion mechanism 1-5 to move horizontally, the object to be detected 1-3 moves along with the motion of the motion mechanism 1-5, sequentially passes through the scanning range of the line structured light camera 1-2, a mathematical model of data fusion of the line structured light camera 1-2 and the high-precision motion mechanism 1-5 is established according to the laser triangulation principle, the corresponding relation of the selected motion mechanism parameters and the sampling frequency and the sampling interval of the line structured light camera is deduced, and Y-axis information is corresponding to X-axis and Z-axis information, the high-precision point cloud information fusion is completed, data errors caused by the fact that a movement mechanism does not correspond to sampling frequency and sampling intervals are eliminated, the linear structure optical camera 1-2 and the movement mechanism 1-5 are arranged on the surface of the vibration isolation platform 1-6, the linear structure optical camera 1-2 is connected with the section bar support 1-1 in a screw installation mode, the linear structure optical detection module 1-2 is fixed above the movement mechanism 1-5, and the placed part to be detected 1-3 is arranged on the movement mechanism 1-5.
The camera of the line structure light detection module 1-2 emits laser beams perpendicular to the surface of an object to be detected, the laser beams are converged to the surface of the object to be detected through the photosensitive lens 3-3 to form light spots, the light spots containing reflected light are transmitted to the imaging lens through diffuse reflection and then converged on the photosensitive element 3-2, and signals of the line structure light camera are converted into electric signals.
The laser beam emits the change of the reflection angle along with the fluctuation change of the height of the surface of the object to be measured, the reflected beam correspondingly moves through the imaging position of the photosensitive element 3-2 through the photosensitive lens 3-3, when the laser scanner 3-1 is fixed with the photosensitive element 3-2, the object distance, the distance and the focal length parameter are also determined, the imaging position relation of the light spot is deduced by combining the space geometric position, the change condition of the surface of the object to be measured is obtained, and the specific depth information of the Z axis is obtained by the similar triangle principle.
The acquisition of the Z-axis depth information of the object to be measured is based on the principle of triangulation, as shown in FIG. 4, the line structured light camera emits a laser beam perpendicular to the surface of the object to be measured, the laser beam is transmitted to the surface of the object to be measured through the photosensitive lens to form a light spot, the light spot containing reflected light is transmitted to the imaging lens through diffuse reflection and then is gathered on the photosensitive element, the signal of the line structured light camera is converted into an electric signal, and the line structured light camera emits a plane D formed by the laser beam1The plane formed by the photosensitive element is D2,D1And D2The plane intersects with the line segment D3Line segment D3Namely the measurement range of the line structure light detection module in the Z-axis direction.
Line segment A1A2And B1B2Intersecting at the point O, the measurement range, i.e. the range of measurement, of the line structured light camera in the Z-axis direction is:
A1B1=△h (1)
the corresponding distance on the photosensitive element is:
A2B2=h3 (2)
through B1、B2Stippling as A1A2Perpendicular to the line (A), the resulting intersection point being B3、B4Derived from the similar triangle theorem:
Figure BDA0003216203550000081
therein of
Figure BDA0003216203550000082
The compound represented by formula (4) is introduced into formula (3):
Figure BDA0003216203550000083
the relation between the linear structure light camera, the photosensitive element, the photosensitive lens and the linear structure light camera measuring range can be obtained.
The object distance of the camera is set to be U, the image distance is set to be V, the focal length is set to be f, and the imaging principle of the convex lens can be used for obtaining the following results:
Figure BDA0003216203550000084
when the light spot is at A1When located in the reference plane, U is h1,V=h2The belt (6) can be:
Figure BDA0003216203550000091
the formula (7) is introduced into the formula (5) to obtain:
Figure BDA0003216203550000092
when the position of the linear array structure optical camera and the photosensitive elementWhen the position is determined, the parameters of the object distance U, the image distance V and the focal length f are also determined, and h can be obtained by the photosensitive element3The value of Δ h can be obtained from this equation (8), and this is the range of the laser sensor.
The measuring range of the known linear array structured light camera is delta h, and a certain point i on the surface of an object to be measured is assumed to be located at A1B1Middle ziPoint of which is at A2B2The upper corresponding point is zi' Point, then, can be derived from the principle of similar triangles:
Figure BDA0003216203550000093
with B1The plane is a reference plane, which can be made to coincide with the XOY plane, and B2Zi' is the corresponding distance on the photosensitive element, which can be obtained from the parameters of the line structured light camera itself, then B1ZiAnd obtaining Z-axis depth information of a certain point i on the surface of the object to be measured, namely the high-precision Z-axis direction point cloud data needing to be obtained.
The object to be measured is placed on the surface of the moving mechanism, and as shown in fig. 5 and 6, the object to be measured moves along with the movement of the mechanism. Calculating the measurement width range D of the line structured light camera in the X-axis direction according to the hardware equipment parameters3And acquiring the number of points in the width range of the upper computer, and deriving the position information of the X axis of each corresponding point by using an arithmetic progression formula as the width and the number of points are uniformly distributed.
Let line segment D3Has n at the top2Points, wherein any point is xi(1≤i≤n2) Distance between each point being d3Then d is3The calculation formula of (2) is as follows:
Figure BDA0003216203550000094
the x-coordinate of any data point i can be listed by the equation of arithmetic series:
xi=d3(i-1)(1≤i≤n2) (11)
substituting equation (10) into equation (11) results in:
Figure BDA0003216203550000101
suppose that the sampling frequency of the line structured light camera is f1With a sampling period of t1With a triggering distance of DeltaS, the movement mechanism being at time t1The distance traveled inside is S1Speed is set to v1mm/s。
Sampling period t of line structured light camera1Comprises the following steps:
Figure BDA0003216203550000102
at this time, the movement mechanism is at time t1Distance S traveled inside1Comprises the following steps:
S1=v1×t1 (14)
i.e. the distance traveled by the movement mechanism between the current sampling and the next sampling is S1This is the positive direction of the Y axis shown in FIG. 6.
Considering that the scan width of the linear structured light camera does not necessarily cover the whole scan size of the object to be measured due to the limitation of the hardware equipment of the linear structured light camera, the scan path of the object to be measured needs to be planned, as shown in fig. 7, assuming that the length and the width of the object to be measured are S2、S3When S is3Measuring width range D larger than line structured light camera3Determining the distance to be scanned in the Y-axis direction according to the length of the part, and moving the part D in the X-axis direction after each scanning is finished3mm, setting the scanning times of the current line structure light camera as u, and moving the point cloud data file generated by the line structure light camera and the moving mechanism D each time3The scanning path of the line structured light camera is set correspondingly to the times, and the moving path of the moving mechanism is the path pointed by the arrow. Setting the total sampling times as W, taking the first data point of the point cloud data file stored in the first sampling as an initial coordinate origin, and taking the O point as the origin to establish a local partAnd a coordinate system, wherein the local coordinate system is a coordinate system fixedly connected with the line structured light camera, the X axis is vertically upward, the positive direction of the Y axis is the moving direction of the high-precision motion mechanism, and the Z axis is perpendicular to the XOY plane.
Setting the scanning times of the line structured light camera as R, the Y-axis direction as the motion direction of the motion mechanism, and sampling period t according to the motion mechanism1Distance S traveled inside1I.e. the distance traveled by the movement mechanism between the current sample and the next sample is S1Then, the Y-axis coordinate formula of any data point i at the w-th sampling is:
yi=S1×(w-1)(1≤i≤n1) (15)
the coordinate information of the ith point in the w-th sampling of the first scanning is as follows:
Ni=(xi,yi,zi)(1≤i≤n1) (16)
substituting to obtain NiExpression (c):
Figure BDA0003216203550000111
in the u-th scanning, the coordinate information of the ith point in the w-th sampling is as follows:
Figure BDA0003216203550000112
the format of the point cloud data file generated by the line structured light is csv, the number of columns corresponds to the Z-axis depth information of each sampling data point, and the number of rows corresponds to the total sampling times.
And combining the position information of the X axis, the Y axis and the Z axis, namely establishing a mathematical model for the linear structure optical camera and the high-precision motion mechanism mobile data fusion, thereby reconstructing three-dimensional point cloud data and finishing the reconstruction of the high-precision point cloud data.
And verifying the correctness of the fusion method, and selecting the electric sliding table by the motion mechanism to substitute the electric sliding table into the data fusion model for calculation. And obtaining Y-axis position information of a point cloud data corresponding point sampled each time according to the corresponding relation between the pulse number of the stepping motor of the movement mechanism and the sampling frequency and the sampling interval of the linear array structured light camera.
Let the basic parameters of the motion mechanism be: step angle a, subdivision b, number of pulses c1Lead P of the lead screw, pitch O of the ball screw, number m of pitch heads, and linear displacement of one pulse of the motor S4Distance S5Distance S traveled by each pulse6At time t of the movement mechanism1The distance traveled inside is S1I.e. the number of pulses c required for the motor to rotate 360 deg1The calculation formula of (2) is as follows:
Figure BDA0003216203550000113
the calculation formula of the ball screw lead P is: p is O × m (20)
C is required to be received within a distance of P1One pulse, the distance S traveled per pulse period6Comprises the following steps:
Figure BDA0003216203550000121
number of pulses c required for movement of 1mm2Comprises the following steps:
Figure BDA0003216203550000122
the motion control card for controlling the motion mechanism is set to a pulse equivalent programming mode, and 1mm is set to c2pulse with driving speed set to v1mm/s, the number of pulses c sent in one second3:c3=d×c2(23)
Electric slide table at time t1Inner distance S1I.e., the sampling interval, corresponds to the Y-axis movement distance for one sampling.
And in order to reduce distortion interference, the distortion of the data fusion model is calculated and compensated.
Establishing a linear structure optical camera coordinate system O-XYZ and a three-dimensional world coordinate system O1-X1Y1Z1As shown in FIG. 8, the former is regarded as a local coordinate system, and the data reconstruction method is based on the local coordinate systemDerivation of O-XYZ; the latter as a compensated coordinate system, O1-X1Y1Z1The space included angle formed by the spherical surface and the O-XYZ is thetaxyzAnd the Y-axis direction is the moving direction of the motion mechanism, and no deflection angle error exists, namely the Y-axis data of the local coordinate system is the Y-axis data of the compensation coordinate system without compensation.
Translating the compensation coordinate system to the center of a circle O1Coinciding with the center O of the camera coordinate system and forming a spatial angle thetaxyzSequentially projected to XOY, XOZ and YOZ planes as shown in FIG. 9, X1The projection of the axis to the XOY plane forms an included angle thetaz,X1And Z1The axis projection forms an included angle theta to the XOZ planey,Z1The axis projection forms an included angle theta to the YOZ planexAnd the existence of the space included angle causes the deviation of the three-dimensional surface information of the object to be measured, and a distortion compensation model needs to be established to reduce errors.
Let the three-dimensional coordinate information of any point k under the camera coordinate system be (x)k,yk,zk) Due to the angle of space thetaxyzThe point k has an actual three-dimensional coordinate of (x)k',yk',zk') as shown in fig. 10, (a) is a local coordinate system of the camera, and (b) is a compensation coordinate system, wherein a cuboid in the coordinate system represents an object to be measured, and the scanning range of the linear array structured light is d3After distortion compensation, the scanning range in the compensation coordinate system is d3' actual three-dimensional data information of the object is restored through the distortion compensation model.
The Y-axis direction is the moving direction of the moving mechanism, and the Z can be obtained by the depth information acquisition and calculation based on the triangular distance measurementk' and ZkThe corresponding relationship of (1).
The line structured light camera and the section bar bracket have installation deflection when being installed, which belongs to system errors, and as shown in fig. 11, a certain deflection angle between the camera and the horizontal plane causes distortion of position information received by the photosensitive element, thereby affecting object measurement accuracy. Z1The axis is projected to the YOZ plane to form an included angle thetax,XkAnd ZkIs the position information of X axis and Z axis measured in the camera coordinate system,
Figure BDA0003216203550000131
and
Figure BDA0003216203550000132
for compensated position information, based on thetaxThe distortion compensation formula of (a) is:
Figure BDA0003216203550000133
Figure BDA0003216203550000134
the compensation included angle theta can be obtainedxAnd the coordinate value of any point k under the post-compensation coordinate system.
X1And Z1The included angle between the axis projection and the coordinate axis is thetayResulting in object depth information and linear array mechanism optical sampling point spacing d3A deviation occurs, as shown in fig. 12, blue is the Y-axis motion direction,
Figure BDA0003216203550000135
to compensate for thetayThe calculation formula of the depth information in the rear Z-axis direction is as follows:
Figure BDA0003216203550000136
obtaining a compensation thetayThe latter position information.
X1The axes forming an angle theta with the XOY planezAs shown in fig. 13, the angle θzThe existence of the change of the sampling point interval can also cause the change of the sampling point interval, and the compensated sampling point interval is set as
Figure BDA0003216203550000137
The calculation formula is as follows:
Figure BDA0003216203550000138
from equation (18), the data fusion model derives the u-th scan, and the coordinate information of the k-th point at w samples is:
Figure BDA0003216203550000141
namely:
Figure BDA0003216203550000142
after distortion compensation is carried out on the k coordinate information of any point, the calculation formula is as follows:
Figure BDA0003216203550000143
the three-dimensional position information of any point after the distortion compensation is completed can be obtained according to the formula (30).
Before an object to be measured is measured, a space included angle theta of distortion compensation needs to be determinedxyz. Angle of space thetaxyzThe method comprises the steps of obtaining a standard object with a known determined size and one or more steps through measurement and comparison, observing the standard object with the determined size, generating three-dimensional point cloud data by a reconstruction model of original data, generating distortion compared with an original part, and obtaining the three-dimensional point cloud data with the length L2If the data is missing, the spatial included angle needs to be calculated, and the distortion compensation of the point cloud data is completed.
Let the standard object have a height difference L1The different surfaces of the line structured light camera, namely the steps, the photosensitive elements cannot obtain the depth information from the length L due to the height difference2Partial plane data acquisition, data loss, known vertical incidence angle of the selected line structured light camera, and theta calculation by trigonometric relation as shown in FIG. 14xThe specific numerical value of (1).
The standard vertical incidence angle of the linear array structure optical camera selected for the test is x, thetaxAnd theta1The calculation formula of (2) is as follows:
θx=x-θ1 (31)
Figure BDA0003216203550000151
selecting the linear array structured light to scan the object plane data line at any time, and ensuring that the theoretically acquired data is the same plane height, as shown in fig. 15, assuming that the range of the linear structured camera is L4Calculating the depth information difference between the starting point and the ending point of the data line as L3Through L4And L3Calculating the deflection angle thetay
Figure BDA0003216203550000152
When the standard object is scanned in several times, the deflection angle theta is usedzThe deflection phenomenon that the edges of the two times of scanning data are not in the same straight line exists, and the standard measuring range L of one time of scanning is taken5As shown in FIG. 16, the actual length L is calculated from the point cloud data6Substituted into a base on θzCalculating theta by the distortion compensation formulazThe value is obtained.
Distortion compensation is carried out on the point cloud data of the object, the robustness of the data is improved, the influence of a scanning angle and a scanning distance on the point cloud data is eliminated, the integral model information of the object to be detected is accurately reflected, the establishment of the topological relation of the point cloud data of the part and the subsequent three-dimensional reconstruction are facilitated, and the requirement of part detection application in a three-dimensional scene is met.
In order to verify the feasibility of the point cloud data reconstruction method of the invention, we performed the following tests:
1. test platform construction
And (4) setting up a high-precision object detection test platform to complete point cloud data acquisition. The selected hardware equipment comprises a Keynes LJ-V7060 line laser scanner, a Zolix electric sliding table, a stepping motor and a motion control module (a driver and a motion control card). The linear structure light camera is fixed, the object moves on the electric sliding table at a constant speed driven by the stepping motor, the position relation between the linear laser scanner and the part is adjusted by the rotating lead screw, the object is ensured to be in the whole range of the linear laser scanner, the sensor can receive the surface information of the object more stably, the distance measurement error is reduced, the precision of point cloud collection is increased, and the linear structure light camera has more measurement advantages compared with the existing scanning mode.
2. Test of
And collecting depth information of the on-line structured light camera of the object to be measured, substituting the depth information of the Z axis into the data fusion model to correspond to the information of the X axis and the Y axis, and finishing point cloud data reconstruction. After reconstruction is completed, the space included angle of distortion compensation needs to be determined, the space included angle is obtained by measuring and comparing objects with a certain size, a 3D double-layer hole printing piece is selected for testing, as shown in fig. 17, (a) an original piece of the double-layer hole printing piece is printed, (b) three-dimensional point cloud data generated by data fusion is set to be 4 times by a line structure optical camera, the scanning times are represented by different colors of blue, green, yellow and red, and the three-dimensional point cloud data generated by the double-layer hole piece through an original data fusion model is observed to be distorted compared with an original part and has the length L2If the data is missing, the spatial included angle needs to be calculated, and the distortion compensation of the point cloud data is completed.
The bottom base of the double-layer hole piece has a height difference L with the spherical hole surface1,L1The value is 5mm, and as shown in fig. 18, the light-sensing element has a height difference, so that the linear structured light camera cannot obtain depth information from the length L2And acquiring data by partial planes to form data loss.
The standard vertical incidence angle of the linear structure light camera selected in the test is 35 degrees and thetaxAnd theta1The calculation formula of (2) is as follows:
Figure BDA0003216203550000161
any point m on the edge of the vertical projection base1To the upper plane of the hole piece to obtain a point m2,m2Making the perpendicular line intersect with the plane of the hole member at a point m3Calculate m2To m3To obtain the difference L of the upper and lower boundaries of the point cloud data2。L2The value is 3.01mm, and the calculation is carried out by substituting the value into a formulaObtaining:
Figure BDA0003216203550000162
selecting a planar data line of the hole piece scanned by the line structured light at any time to ensure that the theoretically acquired data is the same planar height, as shown in fig. 19(a), assuming that the range of the camera is L4Calculating the depth information difference between the starting point and the ending point of the data line as L3Through L4And L3Calculating the deflection angle thetay
Figure BDA0003216203550000163
The scanning range of the line structured light camera is 16mm, a data line at any moment is selected, and the depth information difference value of the data line is calculated to be 12 multiplied by 10-3mm, calculated by substituting the formula:
Figure BDA0003216203550000171
the size of the double-layer hole piece is 50mm multiplied by 50mm, and the scanning is required to be carried out in four times when the light path range of the line structure is exceeded. When the double-layer hole piece is scanned in multiple times, the deflection angle theta is causedzThere is a deflection phenomenon that the edges of the two scanned data are not in the same straight line, and L is taken as shown in FIG. 19(b)5Calculating L from the point cloud data for a scanning range value of 16mm616.006mm, then:
Figure BDA0003216203550000172
distortion compensation is carried out on the double-layer hole piece data, three-dimensional point cloud data are generated again, as shown in fig. 20, the interference of distortion on the shape is reduced, the whole model information of the object to be detected is accurately reflected, and the precision of data fusion can be improved by verifying a distortion compensation model.
Comparing the sizes before and after the distortion compensation of the double-layer hole member, including the measured values of the diameters of the upper layer hole and the lower layer hole and the length and the width of the hole surface, as shown in the table 1, measuring the deviation between the measured size and the standard size by using Root Mean Square, and calculatingThe formula is as follows:
Figure BDA0003216203550000173
TABLE 1 comparison of the dimensions before and after Compensation of the double-layer hole piece
Figure BDA0003216203550000174
The two-layer hole piece was subjected to 5 dimensional measurement tests, i.e. n-5, di' is the actual measurement value, diThe diameter of the hole is obtained by the space distance between two points intersected by the edge of the circular hole and a straight line passing through the center of the circle.
The mean square error of the diameters of the large and small holes in the double-layer space is larger because the selected line segment passing through the circle center has larger randomness, and the coordinate measurement of the circle center has certain deviation so that the root mean square error is increased; and the length and the width are selected as the end points of the edge of the hole surface, and the relative deviation of the hole surface is small. As can be seen from Table 1, the distortion compensated dual layer orifice has a larger correction than before, and the root mean square error RMS of the orifice face length and widtherrorAnd verifying the effectiveness of the distortion compensation model on reducing the point cloud data acquisition error within 0.02 mm.
The universality of the data fusion and compensation method is verified, the test object selects common instrument parts, double-layer hole parts and grid electrode assemblies in the industrial field, and the method has the characteristics of most parts and has obvious characteristics of planes, curved surfaces, step surfaces, space round holes and the like. As shown in fig. 21, under normal lighting conditions, the line structured light camera was used to complete the test object data acquisition.
3. Analysis of test results
The obtained Point cloud data is different from the Point cloud data view converted by the CAD model, an Iterative Closest Point method (Iterative Closest Point ICP) is adopted to identify and select key Point pairs with obvious characteristics, characteristic descriptors of the key Point pairs are calculated, translation and rotation parameters are found, the directions and relative positions of the different Point cloud data in a global coordinate frame are estimated according to the similarity of the characteristic descriptors, and the superposition of intersecting areas of the data sets is completed through translation and rotation coordinate systems. Fig. 22 is a diagram showing the effect of fine registration of the line structured light point cloud data and the CAD model, where blue represents the line structured light data and cyan represents the original CAD model point cloud data.
Root Mean square (rms) (root Mean square) is an error Mean value of relative distances between the collected point cloud data after registration and all corresponding points of the point cloud of the original CAD model, and can be used as an evaluation index for measuring the accuracy of the fused point cloud model, and the calculation formula is as follows:
Figure BDA0003216203550000181
wherein RMSerrorN represents the number of corresponding points in the point cloud registration as the mean of the root mean square errors of all the point pairs.
RMSiFor the root mean square error of any corresponding point pair i, the calculation formula is as follows:
Figure BDA0003216203550000182
aiming at more accurately measuring the precision of point cloud data generated by fusion deduction, 20 pairs of points are randomly selected from the point cloud data of a test object and the original CAD model data for distance calculation, the distance of the nearest adjacent point of each point of the point cloud data to the CAD model data is calculated to represent a deviation value (mm), the result is shown as a graph 23-25, the deviation values of the point pairs taken by an instrument component, a double-layer hole component and a grid component are respectively represented, a black square represents the point cloud deviation value before compensation, a red circle represents the point pair deviation value after compensation, the latter deviation value has higher distribution concentration degree and small dispersion range compared with the former deviation value, namely, the difference between the expected data value and the real data value of the adjacent point is smaller, and the morphological characteristics of the reconstructed point cloud data correspond to the original model in a matching mode.
The selected three test objects with obvious characteristics are used for checking the correctness of the data fusion method, the scanned point cloud data and the original model data are compared, the RMS errors before compensation are respectively reduced by 0.009mm,0.036mm and 0.024mm after data fusion compensation, a registration graph and a deviation value are compared, no obvious visual error exists between the point cloud model generated by compensation and the original model, the matching degree of the pose of the point cloud model and the pose of the original model is high, and the universality of the data fusion compensation method on different test objects is verified.
Although specific embodiments of the present invention have been described above, it will be appreciated by those skilled in the art that these are merely examples and that many variations or modifications may be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is therefore defined by the appended claims.

Claims (6)

1. The utility model provides a high accuracy point cloud data reconstruction method, includes testing platform be provided with motion and support on the testing platform, motion is used for driving and waits to examine the object and follow/around X axle and Y axle motion, be provided with line structure light detection module on the support, the object is waited to examine to the laser beam vertical shot that line structure light detection module transmitted, accomplishes the point cloud data acquisition to the object that awaits measuring, its characterized in that: the method comprises the steps of taking a sampling period as a unit, obtaining a Z-axis coordinate of a point to be measured in combination with a measurement range of a line structure light detection module in the Z-axis direction according to a similar triangle principle, obtaining an X-axis coordinate of the point to be measured in combination with the measurement range of the line structure light detection module in the X-axis direction by using an arithmetic progression formula, obtaining a Y-axis coordinate of the point to be measured in combination with the times of the sampling period according to physical parameters of the line structure light detection module, reconstructing three-dimensional point cloud data, and finally performing compensation adjustment on the reconstructed three-dimensional point cloud data according to a space included angle between a camera coordinate system of the line structure light detection module and world geographic coordinates to obtain final reconstructed data.
2. The method for reconstructing high-precision point cloud data according to claim 1, wherein: the limit of the moving distance of the moving mechanism driving the object to be measured to move along the Y axis is recorded as one scanning, the line-structured light detection module performs W times of sampling during one scanning period, R times of scanning is required to be performed totally, data acquisition of the object to be measured is completed, and when the W times of sampling is performed in the first scanning, the coordinate information of the ith point to be measured is as follows:
Figure FDA0003216203540000011
in the u-th scanning, the coordinate information of the ith point in the w-th sampling is as follows:
Figure FDA0003216203540000012
wherein D is3Indicating the measuring range of the line-structured light detecting module in the X-axis direction, n2Denotes the number of points to be measured, n, which are uniformly distributed in the measuring range in the X-axis direction1Representing the total number of points to be tested, t1Denotes the sampling period, v1Showing the motion speed of the motion mechanism along the Y axis, delta h showing the measurement range of the line structure light detection module in the Z axis direction, h3And h represents the measuring distance of the point to be measured of the linear structure light detection module in the measuring range of the Z-axis direction corresponding to the photosensitive element.
3. The method for reconstructing high-precision point cloud data according to claim 2, wherein: the motion mechanism drives the object to be measured to perform one-time scanning along the Y axis and then move along the X axis by the motion mechanism3And (4) performing distance, performing second scanning along the Y axis, and so on, performing R times of scanning, and completing data acquisition on the object to be detected.
4. The method for reconstructing high-precision point cloud data according to claim 2, wherein:
Figure FDA0003216203540000021
wherein h is1And f represents the focal length of the camera in the line structured light detection module.
5. The method for reconstructing high-precision point cloud data according to claim 2, wherein: the camera coordinate system of the line-marking structure light detection module is O-XYZ, and the world coordinate system is O1-X1Y1Z1The space included angle between the two is thetaxyzProjecting them to XOY, XOZ and YOZ planes in turn, then X1The projection of the axis to the XOY plane forms an included angle thetaz,X1Axis and Z1The axis projection forms an included angle theta to the XOZ planey,Z1The axis projection forms an included angle theta to the YOZ planexThe reconstructed three-dimensional point cloud data is compensated and adjusted by adopting the following formula,
Figure FDA0003216203540000022
6. the method for reconstructing high-precision point cloud data according to claim 5, wherein: placing an object with steps and known size on a motion mechanism, collecting point cloud data by using a line structured light detection module, and calculating an included angle theta according to the result of point cloud data collection and the actual size of the objectx、θy、θz
CN202110944202.1A 2021-08-17 2021-08-17 High-precision point cloud data reconstruction method Pending CN113888693A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110944202.1A CN113888693A (en) 2021-08-17 2021-08-17 High-precision point cloud data reconstruction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110944202.1A CN113888693A (en) 2021-08-17 2021-08-17 High-precision point cloud data reconstruction method

Publications (1)

Publication Number Publication Date
CN113888693A true CN113888693A (en) 2022-01-04

Family

ID=79010720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110944202.1A Pending CN113888693A (en) 2021-08-17 2021-08-17 High-precision point cloud data reconstruction method

Country Status (1)

Country Link
CN (1) CN113888693A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116659419A (en) * 2023-07-28 2023-08-29 成都市特种设备检验检测研究院(成都市特种设备应急处置中心) Elevator guide rail parameter measuring device and method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116659419A (en) * 2023-07-28 2023-08-29 成都市特种设备检验检测研究院(成都市特种设备应急处置中心) Elevator guide rail parameter measuring device and method
CN116659419B (en) * 2023-07-28 2023-10-20 成都市特种设备检验检测研究院(成都市特种设备应急处置中心) Elevator guide rail parameter measuring device and method

Similar Documents

Publication Publication Date Title
Zexiao et al. Complete 3D measurement in reverse engineering using a multi-probe system
US7372558B2 (en) Method and system for visualizing surface errors
CN109341546B (en) Light beam calibration method of point laser displacement sensor at any installation pose
CN109029293B (en) Method for calibrating position and pose errors of line scanning measuring head in blade surface type detection
CN105157606B (en) Contactless complicated optical surface profile high precision three-dimensional measurement method and measurement apparatus
US7593117B2 (en) Apparatus and methods for measuring workpieces
CN110208771B (en) Point cloud intensity correction method of mobile two-dimensional laser radar
JPH03503680A (en) Optoelectronic angle measurement system
US7905031B1 (en) Process for measuring a part
CN108311545B (en) Y-type rolling mill continuous rolling centering and hole pattern detection system and method
CN102288131A (en) Adaptive stripe measurement device of 360-degree contour error of object and method thereof
CN109724532B (en) Accurate testing device and method for geometric parameters of complex optical curved surface
CN109141273B (en) DMD-based high-speed moving target deformation measurement system and method
Contri et al. Quality of 3D digitised points obtained with non-contact optical sensors
CN115112049A (en) Three-dimensional shape line structured light precision rotation measurement method, system and device
CN111707450A (en) Device and method for detecting position relation between optical lens focal plane and mechanical mounting surface
CN113888693A (en) High-precision point cloud data reconstruction method
WO2007001327A2 (en) Apparatus and methods for scanning conoscopic holography measurements
CN112581524A (en) Point cloud-based SLAM mobile robot airport road detection data acquisition method
CN115218792A (en) Method and device for measuring spindle rotation error based on optical principle
CN115685164A (en) Three-dimensional laser imager working parameter testing system and method
CN110887638B (en) Device and method for drawing image plane of optical system
CN114485462A (en) Vehicle contour detection system and method for rail transit
CN113894399A (en) Non-contact detection system for space state of resistance spot welding electrode
CN107063085A (en) A kind of automobile lamp mask dimension measuring device based on light scanning lens

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination