CN113324478A: Center extraction method of line structured light and three-dimensional measurement method of forge piece
Publication number: CN113324478A
Application: CN202110652167.6A
Authority: CN (China)
Legal status: Pending
Classifications
G — PHYSICS
G01 — MEASURING; TESTING
G01B — MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
G01B11/00 — Measuring arrangements characterised by the use of optical techniques
G01B11/002 — Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
G01B11/24 — Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
G01B11/25 — Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
G01B11/254 — Projection of a pattern, viewing through a pattern, e.g. moiré
Abstract
The invention discloses a center extraction method of line structured light and a three-dimensional measurement method of a forge piece. The center extraction method comprises the following steps: first, filtering and threshold segmentation are performed on an acquired image containing line-structured-light stripes, and the light-stripe features of the line structured light are segmented out; an initial central point p(u, v) of the line-structured-light stripe is then extracted; taking each initial central point as center, the normal direction of each initial central point is determined by a direction template method; and an accurate central point is extracted along the normal direction of each initial central point by the gray-scale gravity-center method. The center extraction method of line structured light has the advantages of high precision and good stability, and the three-dimensional measurement method of the forge piece has the advantage of high detection precision.
Description
Technical Field
The invention relates to the technical field of detection and measurement, and in particular to a center extraction method of line structured light and a three-dimensional measurement method of a forge piece.
Background
With the development of modern manufacturing, a large number of irregular curved surfaces in products need to be measured in three dimensions for detection and verification. The three-dimensional size of a forged piece is an important index of machining quality: accurate measurement of it not only allows the quality of the forged piece to be evaluated, but also provides important feedback for later design improvement and promotes optimization of the machining process. Traditional laser scanners and three-coordinate measuring instruments are expensive, while binocular-camera three-dimensional measurement is affected by the working environment, so its measurement stability is poor.
Line-structured-light vision measurement is a non-contact measurement technique combining laser scanning with vision processing. It offers high measurement precision, strong stability, real-time performance and universality, and has become a research hotspot in three-dimensional measurement at home and abroad. Many scholars have improved three-dimensional measurement precision by refining the line-structured-light stripe-center extraction algorithm to accurately extract the sub-pixel coordinates of the light stripes.
However, after the line structured light is projected onto the surface of the object to be measured, the stripe has a certain width, which causes measurement errors; how to accurately extract the center of the line structured light has therefore become a problem to be solved urgently.
Disclosure of Invention
Aiming at the defects of the prior art, the technical problem to be solved by the invention is: how to provide a high-precision, high-stability line-structured-light center extraction method and a three-dimensional forge-piece measurement method that improves detection precision.
In order to solve the technical problems, the invention adopts the following technical scheme:
a method for extracting the center of line structured light is characterized by comprising the following steps: firstly, filtering and threshold segmentation are carried out on the collected image with the linear light stripes of the linear structure, and the light stripe characteristics of the linear structure light are segmented; carrying out binarization processing on each pixel point in the optical strip characteristics, and preliminarily extracting an initial central point p (u, v); taking the extracted initial central points as centers, and determining the normal direction of each initial central point by adopting a direction template method; and (4) performing midpoint extraction in the normal direction of each initial central point by adopting a gray scale gravity center method, and extracting an accurate central point.
Further, the initial central point p(u, v) is extracted as follows:
Each pixel in the light-stripe features is binarized: the gray value of each pixel is compared with a threshold T extracted from the light-stripe features, and pixels above T are marked 1, the rest 0. Each pixel P1 is then scanned point by point; if P1 is marked 1, its 8 adjacent pixels P2, P3, ..., P9, arranged clockwise around P1, are examined, and it is judged whether the following condition is satisfied:
If so, pixel P1 is deleted; the retained pixels P1 are extracted as the initial central points p(u, v). Here N(P1) is the number of P2, P3, ..., P9 marked 1, and S(P1) is the cumulative number of times the labels in the sequence P2, P3, ..., P9, P2 satisfy the 0-1 ordering.
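The extraction step above resembles a one-pass skeleton-thinning test. Below is a minimal sketch; since the deletion condition itself is not reproduced in this text, the classic thinning inequality 2 ≤ N(P1) ≤ 6 together with S(P1) = 1 is assumed:

```python
import numpy as np

def initial_center_points(binary, n_min=2, n_max=6):
    """One thinning-style pass over a 0/1 light-stripe mask.

    N(P1): number of 8-neighbours of P1 marked 1.
    S(P1): number of 0->1 transitions in the circular sequence
           P2, P3, ..., P9, P2.
    A pixel survives as an initial centre point when it is NOT deleted by
    the condition n_min <= N(P1) <= n_max and S(P1) == 1 (this exact
    inequality is an assumption; the patent text only defines N and S).
    """
    keep = binary.copy()
    h, w = binary.shape
    # clockwise 8-neighbourhood offsets P2..P9, starting above P1
    offs = [(-1, 0), (-1, 1), (0, 1), (1, 1),
            (1, 0), (1, -1), (0, -1), (-1, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if binary[y, x] != 1:
                continue
            nb = [binary[y + dy, x + dx] for dy, dx in offs]
            n = sum(nb)
            s = sum(1 for a, b in zip(nb, nb[1:] + nb[:1]) if (a, b) == (0, 1))
            if n_min <= n <= n_max and s == 1:
                keep[y, x] = 0  # delete P1
    return np.argwhere(keep == 1)  # (row, col) initial centre points
```

Applied to a 3-pixel-wide stripe, this pass strips the stripe's border pixels and keeps the interior as initial centre points.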
Further, the normal direction of the initial central point is determined as follows:
Taking the extracted initial central point as center, the following operation is performed on the gray-level image of the line-structured-light stripe:
where T_k is the template matrix, k = 1, 2, 3, 4; m is the total number of rows and n the total number of columns of the template matrix; u and v are the abscissa and ordinate of the initial central point p in the pixel coordinate system; C(u, v) is the pixel gray value at the initial central point; and T_k(i, j) is the value in row i, column j of the corresponding template matrix, with i and j the row and column indices.
T_1, T_2, T_3 and T_4 respectively represent the template matrices that are vertical, horizontal, inclined 45° to the left, and inclined 45° to the right with respect to the light stripe. For each initial central point, the orientation of the template matrix corresponding to the maximum value among the H_k(u, v) is taken as the normal direction of that initial central point.
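The direction-template step can be sketched as follows. The template size and values are assumptions (the patent text does not list the matrices); following the text, the orientation of the maximum-response template is returned as the normal direction:

```python
import numpy as np

def make_templates(size=5):
    """Four illustrative direction templates: vertical, horizontal,
    45-degree left and 45-degree right. Simple line masks are assumed,
    since the patent does not give the actual matrix values."""
    c = size // 2
    vert = np.zeros((size, size)); vert[:, c] = 1.0   # vertical line
    horz = np.zeros((size, size)); horz[c, :] = 1.0   # horizontal line
    right45 = np.eye(size)                            # top-left to bottom-right
    left45 = np.fliplr(np.eye(size))                  # top-right to bottom-left
    return {"vertical": vert, "horizontal": horz,
            "left45": left45, "right45": right45}

def normal_direction(gray, u, v, templates):
    """Compute H_k(u, v) = sum_ij C(neighbourhood) * T_k(i, j) over a window
    centred on the initial centre point (u = column, v = row) and, following
    the text, take the orientation of the maximum-response template."""
    size = next(iter(templates.values())).shape[0]
    c = size // 2
    patch = gray[v - c:v + c + 1, u - c:u + c + 1].astype(float)
    responses = {k: float((patch * t).sum()) for k, t in templates.items()}
    return max(responses, key=responses.get)
```

For a bright vertical stripe the vertical template gives the largest H_k, so "vertical" is returned for points on that stripe.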
Further, after the normal direction of the initial central point is determined, the accurate central point is extracted as follows:
Each initial central point is searched along its normal direction until the edge of the light stripe is detected; the distance d between each pixel and the initial central point along the normal is calculated; and, taking the gray value of each point along the normal as its weight, the central point is extracted by the gray-scale gravity-center method. The offset distance between the extracted point p' and the pre-extracted point p is:
where L is the pixel width at the initial central point p along the normal direction, I(a, b) is the gray value of pixel (a, b) along the normal, and a and b are respectively the row and column indices of the pixel in the two-dimensional matrix of the pixel coordinate system.
The accurate center coordinate extracted on the normal, p'(u_p, v_p), satisfies:
where θ is the angle between the normal-direction vector and the X coordinate axis.
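A minimal sketch of the gray-scale gravity-center refinement along the normal. The gravity-centre offset is delta = Σ d·I / Σ I, and the refined point is (u + delta·cosθ, v + delta·sinθ); nearest-neighbour sampling and a fixed window half-width are simplifying assumptions:

```python
import numpy as np

def refine_center(gray, u, v, theta, half_width):
    """Gray-scale gravity-centre refinement along the stripe normal.

    Samples the grey level at signed distances d in [-half_width, half_width]
    along the normal (theta = angle to the x axis), shifts the initial centre
    by delta = sum(d * I) / sum(I), and returns sub-pixel (u_p, v_p).
    Nearest-neighbour sampling is used for simplicity (an assumption; the
    patent does not specify the interpolation).
    """
    ds = np.arange(-half_width, half_width + 1)
    us = np.rint(u + ds * np.cos(theta)).astype(int)   # sampled columns
    vs = np.rint(v + ds * np.sin(theta)).astype(int)   # sampled rows
    I = gray[vs, us].astype(float)
    if I.sum() == 0:
        return float(u), float(v)
    delta = float((ds * I).sum() / I.sum())
    return u + delta * np.cos(theta), v + delta * np.sin(theta)
```

With an intensity profile of 100/200/100 centred one pixel below the initial point, the refined centre moves exactly one pixel along the normal.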
A three-dimensional measurement method for a forge piece, characterized in that, during measurement, line structured light is used to scan the surface of the forge piece, and the central coordinate data of each line of structured light are extracted by the center extraction method of line structured light as claimed in any one of claims 1 to 5 to obtain complete three-dimensional point cloud data, thereby completing the measurement.
In conclusion, the line-structured-light center extraction method has the advantages of high precision and good stability, and the three-dimensional measurement method of the forge piece has the advantage of high detection precision.
Drawings
Fig. 1 is a schematic structural diagram of the overall frame layout of the system.
Fig. 2 is a system flow chart of three-dimensional measurement.
FIG. 3 is a pixel coordinate system and an image coordinate system.
Fig. 4 is a camera coordinate system.
FIG. 5 is a diagram of a line-plane measurement model.
FIG. 6 is a schematic diagram of a coordinate system relationship of the scanning system.
Fig. 7 is a schematic diagram of direct three-dimensional measurement.
Fig. 8 is a schematic diagram of oblique three-dimensional measurement.
Fig. 9 is a schematic view of the installation of the camera and the laser projector in this embodiment.
FIG. 10 is a graph of the gray-scale profile of an actual light-bar cross section.
FIG. 11 is a captured picture of a workpiece with light bars.
Fig. 12 is a diagram illustrating a mean filtering process performed on the collected light bar image.
Fig. 13 is a graph showing the effect of using fixed threshold segmentation.
FIG. 14 is a graph of the effect of using adaptive threshold segmentation.
Fig. 15 is a graph of the effect of using OTSU threshold segmentation.
Fig. 16 is a graph showing the effect of removing the spots.
Fig. 17 is a principle of the line structured light stripe center extraction method in the present embodiment.
Fig. 18 is a schematic diagram of 8 neighborhoods of P1.
Figs. 19 to 21 are schematic diagrams of the effect of extracting the line-structured-light center using the improved algorithm of the present invention.
Fig. 22 to 24 are schematic diagrams illustrating the effect of extracting the light center of the line structure by using the conventional method.
FIG. 25 is a flow chart of point cloud processing.
FIG. 26 is a schematic view of a mark.
Fig. 27 is a diagram of a mathematical model for multiview stitching based on mark points.
Detailed Description
The present invention will be described in further detail with reference to examples.
In the specific implementation: for a three-dimensional measurement method of a forge piece based on line structured light, a three-dimensional measurement system is first constructed. Since line structured light can only measure the three-dimensional information of one section of the measured surface at a time, external driving equipment is required to move the measuring device over multiple positions of the forge piece in order to scan it as a whole. A robot offers flexible operation, an increased measuring range and freedom from environmental limitation, so a robot is used as the mobile carrier that drives the line-structured-light vision measuring equipment through the scan. As shown in fig. 1, the system is divided into a hardware module and an algorithm module. The hardware module is composed of a line-structured-light projector, a robot, a camera and a computer; the algorithm module mainly consists of a calibration module and an image processing module.
In the measuring system, the hardware module consists of the scanning measurement equipment and a moving mechanism: the scanning measurement unit, composed of a camera and a laser emitter, is fixed at a certain angle and position on the end effector of the robot. The robot is connected to a computer through the control cabinet via TCP/IP communication, which controls the motion of the mechanical arm, and a teach pendant is used to check the robot's parameter information and motion state.
The calibration module comprises camera parameter calibration, line-structured-light calibration and hand-eye calibration; the various parameters of the scanning system are obtained through calibration, and a model is established. The image processing module comprises image acquisition, preprocessing, stripe-center extraction, coordinate conversion, point cloud processing and point cloud display. As shown in fig. 2, the system flow of three-dimensional measurement is as follows: first, the installed line-structured-light vision measuring system is calibrated; then the mechanical arm drives the measuring device to scan the forge piece while the camera continuously collects images; the central sub-pixel coordinates of the light bars are extracted after image processing; and finally the calibrated parameters are used to convert the two-dimensional light-bar center coordinates into three-dimensional physical coordinates, acquiring the three-dimensional point cloud data of the forge piece and completing the three-dimensional measurement.
Camera imaging model
As shown in fig. 3, two coordinate systems are established on the imaging plane of the camera: the image coordinate system and the pixel coordinate system. The pixel coordinate system holds coordinates (u, v) in the form of a two-dimensional array; each coordinate does not represent a physical size but the position of the element in the array, in units of pixels. The x and y axes of the image coordinate system are parallel to the u and v axes respectively, and describe the physical image from the viewpoint of the pixel, realizing the conversion between two-dimensional pixel coordinates and physical size.
The relationship between an arbitrary image point in the two coordinate systems is given by the following equation.
Here dx and dy are the physical sizes of a pixel along the x axis and y axis, and the origin o_0 of the xo_0y image coordinate system is located at (u_0, v_0) in the uov pixel coordinate system.
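The conversion equation itself appears as an image in the original; given the definitions of dx, dy and (u_0, v_0) above, it presumably corresponds to the standard relation:

```latex
u = \frac{x}{dx} + u_0, \qquad v = \frac{y}{dy} + v_0,
\qquad
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
```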
As shown in FIG. 4, with the optical center position O_c as origin, axes X_c and Y_c are established through O_c parallel to the x and y axes of the image coordinate system, giving the camera coordinate system. The Z_c axis of the camera coordinate system passes through the origin O_1 of the image coordinate system, so the focal length f of the camera is the distance between O_1 and O_c. A point P(X_c, Y_c, Z_c) in the camera coordinate system satisfies the following relationship:
The world coordinate system describes the mapping between the image and the measured object; as the absolute coordinate system of the system, it provides the size mapping between object and image. The camera coordinates can therefore be converted into world coordinates through rotation and translation; this conversion is a rigid-body transformation, with no deformation. Let a point P on the surface of the measured object have coordinates (X_w, Y_w, Z_w) in the world coordinate system and corresponding coordinates (X_c, Y_c, Z_c) in the camera coordinate system. With the rotation matrix R and the translation matrix T, the relationship between the two is as follows:
R and T in the above formula are the external parameters of the camera, independent of the camera itself. R is a 3 × 3 orthogonal rotation matrix representing the orientation of the camera; T is a 3 × 1 matrix representing the translational position of the camera. Then, through a homogeneous coordinate transformation:
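The world-to-camera equations are shown as images in the original; with R and T as defined in the text, they presumably take the standard form:

```latex
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
= R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T,
\qquad
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
= \begin{bmatrix} R & T \\ 0^{\mathsf{T}} & 1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
```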
the line structured light vision measuring technique is that after a laser projector emits laser, a light beam is intersected with the surface of an object to form a bright stripe. The position of the collected light stripe image on the camera imaging plane can also be changed because the light stripes are modulated at different positions due to the difference of the surface topography and the height of the object. The camera collects the stripe image after modulation distortion from a certain angle, and the position between the camera and the laser projector is obtained, so that the threedimensional coordinates of the stripe can be calculated, and the threedimensional information of the surface of the object can be obtained. The line structured light threedimensional measurement mathematical model can be generally divided into an analytic geometric model and a perspective projection model:
and (6) analyzing the geometric model. The method needs to acquire geometric parameters such as an included angle and a position between a camera and a laser projector when the camera and the laser projector are installed, and an analytic geometric model is established according to the geometric parameters. The surface coordinates of the space object can be obtained through the model, and the threedimensional contour of the object is recovered. However, many assumptions exist for establishing the model, such as the required angle between the light plane and the camera, the angle of the optical axis of the camera, and other constraints. The use of this method requires precise adjustment of the position of the camera and the laser projector in order to obtain high precision geometric parameters such as angle and position. Due to the complex process and high requirement, the application range of the analytic geometric model is limited.
Perspective projection model. And solving the linear structure light plane equation or the structural parameters of the sensor. The method is convenient and suitable for many occasions and has wide application. The method of model building can be divided into: the method comprises the steps of firstly, utilizing a linesurface model to form rays by projecting a light plane on a measured object, and determining spatial threedimensional information of light stripes by the laser plane to obtain the morphological characteristics of the measured object. The surface model is that the laser projector emits special light stripe to form the information of the surface of the measured object, the special light stripe is imaged on the camera through collection, and the measurement model between the camera imaging surface and the light plane is established.
The physical relation between the camera and the laser projector needs to be accurately obtained when the geometric model is analyzed, and the constraint equation for establishing the geometric model according to the distance and the included angle is difficult to operate and low in feasibility. Therefore, a line and surface model with better applicability is used to build the line and surface measurement model, as shown in fig. 5.
The light plane π is projected onto the surface of the measured object. The equation of the light plane in the camera coordinate system is:
ax_c + by_c + cz_c + d = 0
If a point P on the surface of the measured object images at pixel coordinates (u, v), the corresponding homogeneous coordinates are (u, v, 1), and the normalized image coordinates of the point are:
where A is the internal parameter (intrinsic) matrix of the camera.
In the camera coordinate system, if the coordinates of P are (x_c, y_c, z_c), the straight line connecting P and the camera optical center O_c is expressed as:
From the light-plane equation in the camera coordinate system and the above formula, the uniquely determined three-dimensional coordinates of point P in the camera coordinate system are obtained:
If the three-dimensional coordinates (x_L, y_L, z_L) of point P in the light-plane coordinate system are required, and the rotation matrix R and translation matrix T have been obtained by camera calibration, the conversion can be expressed as:
By the above formula, the three-dimensional coordinates uniquely corresponding to point P are obtained. Therefore the line-structured-light measurement system based on the perspective projection model only requires the internal and external camera parameters and the line-structured-light parameters to be solved.
Line structured light can only measure the three-dimensional information of one section of the object surface at a time, so the scanning system combines the line-structured-light vision measuring equipment with the robot and scans the whole measured object through the robot's motion. First, the transformation relationships between the coordinate systems must be established to realize the coordinate conversion between the line-structured-light measurement system and the robot, as shown in fig. 6.
The following coordinate systems appear in the figure:
{C}: camera coordinate system;
{L}: line-structured-light measurement coordinate system;
{B}: robot base coordinate system;
{T}: robot end tool coordinate system;
H_LB: transformation matrix between the structured-light measurement coordinate system and the robot base coordinate system;
H_LC: transformation matrix between the structured-light measurement coordinate system and the camera coordinate system;
H_CT: transformation matrix between the camera coordinate system and the robot end tool coordinate system;
H_TB: transformation matrix between the robot end tool coordinate system and the robot base coordinate system.
Each of the above matrices is a 4 × 4 homogeneous matrix composed of a rotation matrix R and a translation matrix T. The transformation matrices are related by:
H_LB = H_TB · H_CT · H_LC
let the base coordinate system { B } of the robot be defined as the global coordinate system. Obtaining coordinates (x) by perspective projection model_{L},y_{L},z_{L}) The coordinate in the global coordinate system can be converted into (x) by the following formula_{B},y_{B},z_{B})。
During the robot's moving measurement, the camera coordinate system, the robot end-tool coordinate system and the line-structured-light measurement coordinate system are fixed relative to one another, so their relative positions do not change; that is, the elements of the matrices H_LC and H_CT are constant. The robot has feedback delay, and the end-effector coordinates cannot be transmitted to the PC in time, so computing H_TB in real time introduces errors, making that approach impractical. Instead, the system makes the robot move at constant speed from A to B without rotating, while the camera mounted on the end effector continuously collects images at a set frame rate. The distance S the robot moves per frame is:
where v is the moving speed and F is the frame rate set by the camera, i.e. S = v / F; S is the displacement of the mechanical arm along the moving direction for a single frame.
The transformation matrix from the end-effector coordinate system at point A to the base coordinate system can be obtained through D-H kinematic modeling and is defined as H_start; the hand-eye relationship matrix is X. The transformation matrix H_LB from the line-structured-light coordinate system to the global coordinate system is then:
H_LB = H_start · X
If the mechanical arm moves along the Y-axis direction and the pose at the ith frame is H_i, the following relationship holds:
If a point (x_Li, y_Li, z_Li) in the line-structured-light measurement coordinate system is obtained in the ith frame, its coordinates can be converted to the point (x_Bi, y_Bi, z_Bi) in the global coordinate system.
Through the above formulas, coordinate data in the line-structured-light measurement coordinate system are converted into the robot base coordinate system, and through the uniform motion of the robot the three-dimensional point cloud data of every frame's section are unified in the same coordinate system, realizing the scan of the measured object.
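Under the stated assumptions (uniform translation of the arm with no rotation), the frame-i conversion can be sketched as follows; the choice of the base Y axis as the moving direction and the function name are assumptions taken from the example in the text:

```python
import numpy as np

def point_in_base(p_L, H_start, X, i, S):
    """Map a point from the line-structured-light frame of frame i into the
    robot base frame, assuming the arm translates uniformly along the base
    Y axis by S per frame with no rotation, as in the uniform-speed scan.

    p_L     : (x_L, y_L, z_L) in the light-plane measurement frame.
    H_start : 4x4 end-effector-to-base transform at the start pose A.
    X       : 4x4 hand-eye matrix, so H_LB = H_start @ X at frame 0.
    """
    H_LB = H_start @ X                              # transform at frame 0
    p = H_LB @ np.append(np.asarray(p_L, float), 1.0)
    p[1] += i * S                                   # frame-i shift along base Y
    return p[:3]                                    # (x_B, y_B, z_B)
```

Stacking the converted points of all frames yields the unified point cloud of the scan.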
The camera focuses the image formed by the light onto the image plane; CMOS and CCD cameras are commonly used. The difference between the two lies mainly in how the chip is read out: CCD cameras are typically frame-exposed, while CMOS cameras are mostly line-exposed, and the image quality of a CCD camera is higher than that of a CMOS camera, with obvious contrast. After weighing economy and cost performance, this embodiment uses a Hikvision CE-series MV-CE013-50GM camera, which supports hard triggering as well as image acquisition by soft triggering, and allows manual or automatic adjustment of gain and exposure.
the lens utilizes optical equipment to present image information on an image sensor of the camera, and the quality of the lens directly influences the quality of image acquisition, thereby influencing the subsequent measurement precision. The following principles need to be noted for type selection:
target surface size of camera sensor: when the lens is selected, the optical size of the lens is larger than the size of the target surface of the camera sensor, otherwise shadow areas appear at four corners of the acquired image, and the image acquisition fails.
Camera interface type: the camera is in line with the interface of the lens.
Size of aperture: and regulating the light flux of the lens in unit time.
Focal length f: the distance from the optical center of the lens to the point where the light beam is focused, selected according to the formula f = D × h / H, where D is the working distance, h is the target surface size of the sensor, and H is the field width. On this basis, the system finally uses the FA251C lens.
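The selection formula can be checked numerically. The working distance, sensor size, and field width below are illustrative values only, not the embodiment's parameters:

```python
def required_focal_length(D_mm, sensor_mm, fov_mm):
    """Pinhole approximation f = D * h / H.

    D_mm      : working distance D
    sensor_mm : sensor target surface size h
    fov_mm    : required field width H
    All values here are illustrative, not the embodiment's parameters.
    """
    return D_mm * sensor_mm / fov_mm

# 500 mm working distance, 4.8 mm sensor, 100 mm field of view:
f = required_focal_length(500.0, 4.8, 100.0)  # close to a 25 mm stock lens
```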
In a three-dimensional measurement system, the light source emitted by the laser projector directly affects the quality of data acquisition and the effect of post-processing, and hence the three-dimensional measurement itself. For optimal measurement, the laser projector is selected according to the following aspects:
(1) Contrast
The projected light source must produce the maximum contrast between the region of interest and the irrelevant regions on the measured object, which facilitates later image processing.
(2) Robustness
Robustness means that good characteristics are maintained even in a complex environment, i.e. the resulting image does not change and the effect is the same as in the experimental environment.
(3) Light source uniformity
Non-uniform illumination causes non-uniform reflection, which affects the stability of the system; a stable, uniform light source stabilizes the captured image.
After weighing the above factors and price, this embodiment uses the YZ650KB100GD16 laser transmitter, which projects a narrow light band of a specific shape and offers high stability, strong immunity to interference, and a suitable price.
The robot serves as the motion mechanism of the system and comprises a mechanical arm, a teach pendant, and a control cabinet. In accordance with laboratory conditions, this embodiment uses an Erbit robot as the mobile platform carrying the line structured light vision measuring device. It is an industrial multipurpose robot characterized by high running speed, high repeat-positioning accuracy, and compact size.
Three-dimensional measurement can be divided into direct-projection and oblique-projection types according to the irradiation mode of the laser projector. Fig. 7 is a schematic diagram of direct-projection three-dimensional measurement: O is a point on the reference plane of the measured object, P is the corresponding image point of O on the camera's imaging plane, a is the distance from the center of the camera lens to point O, and b is the distance from image point P to the lens center.
As can be seen from FIG. 7, the displacement y of the point to be measured can be expressed as:
FIG. 8 is a schematic diagram of oblique-projection three-dimensional measurement, where θ1 is the angle between the laser beam and the normal of the measured surface and θ2 is the angle between the surface normal and the optical axis of the receiving lens; the remaining quantities are the same as in the direct-projection case above. From the schematic diagram, the displacement y of the point to be measured is:
Comparing the two installation modes shows that, because the incident angle of the direct-projection type is smaller than that of the oblique-projection type, direct projection suffers less occlusion in actual use and therefore has a smaller measurement blind area. Direct-projection three-dimensional measurement, with its simple structure and large measurement range, is adopted here. After many adjustments, this embodiment arrives at the most suitable mounting between the camera and the laser projector, as shown in fig. 9.
Line structured light center extraction
When a line laser is used as the measuring device, the laser beam emitted by the projector intersects the surface of the measured object to form a light stripe that carries the surface position and contour information of the object. These stripes have a certain width and are also called structured light stripes. The camera captures the stripe image of the structured light, from which the three-dimensional information of the measured object is obtained.
The line light projected onto the object surface forms a high-brightness light stripe. Under ideal conditions the gray values of the stripe's normal cross-section follow a Gaussian distribution: the maximum gray value lies at the center and decreases gradually and symmetrically toward both sides. However, owing to the environment, the material of the workpiece surface, and other factors, the cross-section gray values of the actually acquired stripe images do not satisfy a Gaussian distribution, as shown in fig. 10. Compared with the ideal symmetric distribution curve, the gray-value curve of the actual stripe is irregular. Nevertheless, the farther from the stripe center, the smaller the gray value, and the closer to the center, the larger it is; the distribution still approximately follows a Gaussian.
In a complex environment, ambient-light interference and random camera noise add noise to the images acquired by the measuring system. The surface roughness and material properties of the measured object may prevent the stripe information from being captured completely, or the acquired image may contain background information that does not belong to the target object. This increases the difficulty of extracting the stripe center, enlarges the extraction error, and ultimately affects the measurement result.
Line structured light stripe image preprocessing
As shown in fig. 11, the quality of the collected workpiece image with the light stripe is degraded by the laser projector and the environment, which greatly affects the subsequent precise extraction of the stripe center. To reduce these effects, the image is preprocessed with a filtering algorithm that preserves the detail characteristics of the image while reducing interference noise. Several common filters are described below.
Median filtering is a nonlinear spatial-domain filter. The gray values of the pixels within an M×N template neighborhood are sorted in ascending or descending order, and the middle value is taken as the new gray value of the template center. The median filter reduces noise well but can also filter out the stripe boundaries, making the extracted stripe-center result inaccurate.
The mean filter is a common linear filter. A template is designed around the target pixel to be processed, consisting of the target pixel and its 8 neighboring pixels; the average of all pixels in the template replaces the original pixel value. The mean-filter algorithm is simple, smooths the image noticeably, and reduces noise clearly without destroying the detail information of the image.
The Gaussian filter is a linear low-pass filter that suppresses Gaussian noise well. A mask scans each target pixel of the image, the neighborhood pixels are weighted and averaged with the mask, and the value of the mask's central pixel is replaced by the weighted average gray value. The Gaussian filter effectively removes spatial oscillation caused by noise, but boundaries may be treated as noise and removed, so image edges are not well protected and become blurred.
Therefore, after analyzing each filter and considering the images actually collected by the system, the mean filter is selected to process the light-stripe image: it markedly reduces the influence of isolated noise while protecting the stripe edges. Fig. 12 shows the collected stripe image after mean filtering.
As can be seen from the figure, mean filtering not only removes the noise outside the stripe area but also smooths the stripe edges, improving the accuracy of the subsequent stripe-center extraction.
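The 3×3 mean filter described above can be sketched with NumPy alone; this is a simplified stand-in for a library box filter, not the embodiment's code:

```python
import numpy as np

def mean_filter3(img):
    """3x3 mean (box) filter with edge replication, as used to smooth
    the light-stripe image while keeping its edges reasonably sharp."""
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    # Sum the 9 shifted copies of the padded image, then divide by 9.
    for di in range(3):
        for dj in range(3):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / 9.0

# An isolated bright noise pixel is strongly attenuated:
noisy = np.zeros((5, 5))
noisy[2, 2] = 90.0
smoothed = mean_filter3(noisy)
```

The isolated spike of 90 is spread into a 3×3 patch of value 10, illustrating how isolated noise is suppressed while the total stripe energy in the neighborhood is preserved.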
Stripe image segmentation
Because the acquired image contains both the stripe and background information, directly extracting the stripe center cannot yield an accurate result. The stripe information therefore needs to be segmented from the image to eliminate the interference of the background with center extraction, reducing the complexity of the task. This also reduces the amount of computation, speeding up the center extraction and meeting the requirement of rapid detection. Several representative threshold segmentation methods follow.
Fixed-threshold segmentation sets a suitable threshold to separate the background from the image target: the image is divided into the pixels at or above the threshold and those below it, with one gray range mapped to 0 and the other to 255. The algorithm is fast and stable in operation and has become the most basic and the most widely applied segmentation technique.
The local adaptive threshold segmentation algorithm determines the setting of the pixel threshold value according to the pixel values around the target pixel, so that the binarization threshold values at different positions are determined according to the distribution of the surrounding pixel values. In areas with higher brightness, the threshold will typically be high; conversely, in darker areas, the threshold will be set small. Areas with different contrast and brightness present different binarization thresholds at local parts of the image.
The maximum between-class variance method, proposed by the Japanese scholar Otsu and also called the Otsu method (OTSU), computes the between-class variance between background and target from the gray histogram of the image and selects the optimal threshold accordingly. The best segmentation is achieved when the following criterion is maximized:
σ² = w1(μ1 − μT)² + w2(μ2 − μT)² = w1·w2·(μ1 − μ2)²
In the above formula, w1 is the proportion of target pixels in the whole image and μ1 their average gray level; w2 is the proportion of background pixels and μ2 their average gray level; μT is the overall average gray level of the image.
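The Otsu criterion above can be sketched directly in NumPy, scanning all 256 thresholds for the one maximizing the between-class variance σ² = w1·w2·(μ1 − μ2)²; a simplified illustration, not the embodiment's code:

```python
import numpy as np

def otsu_threshold(img):
    """Pick the threshold maximizing the between-class variance
    sigma^2 = w1 * w2 * (mu1 - mu2)^2 over the gray-level histogram."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w1, w2 = prob[:t].sum(), prob[t:].sum()
        if w1 == 0 or w2 == 0:
            continue  # one class empty: criterion undefined
        mu1 = (np.arange(t) * prob[:t]).sum() / w1
        mu2 = (np.arange(t, 256) * prob[t:]).sum() / w2
        var = w1 * w2 * (mu1 - mu2) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Two well-separated gray levels: the threshold falls between them.
img = np.array([[20] * 8 + [200] * 8], dtype=np.uint8)
t = otsu_threshold(img)
```

In practice a library call (e.g. OpenCV's THRESH_OTSU flag) performs this same histogram scan.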
After mean filtering, the filtered stripe image is threshold-segmented with the three methods above, as shown in FIGS. 13 to 15. Judging from the segmentation results, the fixed threshold gives the best effect: the stripe information is retained to a great extent, and the subsequently extracted stripe-center coordinates are more complete.
After fixed-threshold segmentation, some small-area background pixel clusters remain outside the stripe region and need to be removed. Regions smaller than a threshold area are deleted by image processing so that only the stripe remains; fig. 16 shows the result after removing these stray points. In this way the redundant information introduced by the complex background is removed and its interference with the image reduced, and the processing speed of the image, and hence the efficiency of stripe-center extraction, is further improved.
Traditional center extraction algorithm
After preprocessing the stripe image collected by the CCD camera, an image containing only the stripe is obtained, and a center extraction algorithm is then applied to it. Accurate extraction of the stripe center is a key link of the whole measuring system and bears directly on its performance.
The light intensity in a cross-section perpendicular to the light plane follows a Gaussian distribution, so many center extraction algorithms take the Gaussian center of the stripe's gray values as the key to extracting the stripe center. Based on the work of many scholars at home and abroad, the existing structured light stripe center extraction algorithms can be roughly divided into three categories: the first, such as the threshold method and the edge method, takes the geometric center of the stripe as its center; the second, such as the extremum method and the gray gravity-center method, obtains the stripe center from the energy center of the stripe; the third, such as the direction template method and the Steger method, obtains the stripe center from the direction of the stripe.
(1) Threshold method
The threshold method relies on the Gaussian distribution of the stripe's gray values: using a set threshold, two point coordinates with the same gray value are obtained along the row direction of the image, and their average position is taken as the stripe center. The method is quick and simple and extracts regular laser stripes well, but it produces serious center-positioning deviation when the stripe is disturbed by the external environment. Moreover, the choice of threshold bears directly on the extraction: a threshold chosen too high or too low yields large errors. The threshold method is therefore rarely used alone; it generally serves as a preliminary extraction of the stripe center on which further optimization is based.
(2) Extremum method
The extremum method also uses the Gaussian shape of the stripe's gray values, defining the local gray maximum on the stripe cross-section as the stripe center. The precondition is an ideal Gaussian distribution in which the gray value is maximal at the stripe center. In actual measurement, however, the gray values on the stripe no longer conform to a Gaussian distribution under the influence of the external environment, so the calculated extreme point deviates considerably from the actual center; noise on the stripe also degrades the precision of the extreme points, so the method's applicability is poor.
(3) Gray scale center of gravity method
The gray gravity-center method regards the energy center of the light stripe as its center. Following the gray distribution of the stripe, the gray center of gravity along a given row or column of the image is extracted and defined as the stripe's center position. Taking the horizontal stripe direction as an example, a threshold is set to segment the image into non-zero intervals, and the gray center of gravity is then calculated with the following formula. Assuming (p, q) is the non-zero interval of the k-th column, the gray center of gravity of that column is
v_k = ( Σ_{i=p..q} i · h_i ) / ( Σ_{i=p..q} h_i )
where h_i is the gray value of row i in column k. The gravity-center method extracts quickly and precisely and suppresses noise well. However, when the stripe direction changes abruptly or the curvature changes greatly, the accuracy of its center extraction drops.
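The per-column gravity-center computation can be sketched as follows; the threshold and the stripe profile values are illustrative:

```python
import numpy as np

def column_centroid(col, thresh=30):
    """Gray-level center of gravity of one image column: the weighted
    mean row index over the non-zero (above-threshold) interval."""
    col = np.asarray(col, dtype=float)
    rows = np.nonzero(col >= thresh)[0]
    if rows.size == 0:
        return None          # no stripe in this column
    w = col[rows]
    return float((rows * w).sum() / w.sum())

# A symmetric stripe profile centered on row 5 (illustrative values):
col = np.array([0, 0, 0, 40, 80, 120, 80, 40, 0, 0])
c = column_centroid(col)
```

Because the weights are the gray values themselves, a symmetric profile yields its geometric middle, while an asymmetric profile shifts the result toward the brighter side, which is exactly the energy-center behavior described above.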
(4) Direction template method
The direction template method is developed from the gray gravity-center method. Within a small pixel range, the direction of the line structured light stripe can be regarded as one of four modes: horizontal, vertical, 45° left-inclined and 45° right-inclined, denoted K1, K2, K3 and K4, with template elements K_i[s][t], s = 0, 1, …, M−1; t = 0, 1, …, N−1; i = 1, 2, 3, 4. The four templates traverse the pixels of each row of the light stripe; the template maximizing the convolution is found, and its center pixel is taken as the stripe center of that row. The four direction templates K1, K2, K3 and K4 are:
the directional template method can well reserve the detail information of the light bars, and has the advantages of broken line repair, high reliability, good robustness and the like. But the center of the extracted light bar is at the pixel level and the accuracy is not high. Because each pixel point on the light bar image needs to be respectively subjected to convolution operation with four templates, the operation amount is large, and the efficiency is low.
(5) Steger method
The Steger algorithm is based on the Hessian matrix: since the direction of the maximum absolute second-order directional derivative coincides with the normal direction of the stripe center, the gray extremum of the stripe can be computed from the Hessian matrix and taken as the stripe center, realizing center positioning. The Hessian matrix can be expressed as
H(x, y) = [ r_xx, r_xy ; r_xy, r_yy ]
where each entry is the convolution of the image with the corresponding second-order partial derivative of the two-dimensional Gaussian function G(x, y). The normal direction at a point (x0, y0) corresponds to the eigenvector of the eigenvalue with the maximum absolute value of the Hessian matrix. Taking (x0, y0) as the base point, the normal direction of the local stripe computed with the Hessian is (n_x, n_y). Expanding the gray distribution function along this normal with a Taylor formula gives the stripe center:
z(x0 + t·n_x, y0 + t·n_y) = z(x0, y0) + N·(r_x, r_y)^T + N·H(x, y)·N^T / 2, with N = t·(n_x, n_y)
The central position of the stripe obtained from the formula is (x0 + t·n_x, y0 + t·n_y).
The fringe center obtained by the Steger method is high in precision, but when the normal direction of the fringe is calculated, at least five times of Gaussian kernel convolution of a large template is needed to be carried out on each pixel point on an image, the calculation amount is large, the calculation efficiency is low, and rapid measurement is difficult to achieve.
Center extraction algorithm based on gray scale gravity center method
In this embodiment, the center extraction algorithm based on the gray gravity-center method is improved; this choice follows from studying the characteristics of the laser stripes and the advantages and disadvantages of the various conventional center extraction algorithms. However, owing to its inherent characteristics, the gray gravity-center method has certain limitations, leading to the following problems:
(1) The intensity of the stripe projected by a common laser is non-uniformly distributed, bright in the middle and dark on both sides. In industrial application, influenced by the surface roughness of the measured object, the collected stripe brightness is dispersed rather than concentrated, and the cross-section intensity no longer conforms to a Gaussian distribution. Because the stripe energy is not concentrated, the fixed stripe width assumed by the traditional gray gravity-center method is no longer appropriate, and its accuracy decreases.
(2) The gray gravity-center method computes the stripe row by row or column by column, considering only the horizontal and vertical directions of the image, so it extracts straight stripes with high precision. Because it ignores the normal direction of the stripe, good center-extraction precision cannot be guaranteed for curved or discontinuous stripes in practical use.
Since the gray gravity-center method cannot accurately extract the stripe center on rough object surfaces such as forgings, this embodiment improves it. First the collected stripe image is preprocessed, mainly by filtering and threshold segmentation; then the stripe skeleton is extracted with a thinning algorithm and used as the initial center points of the stripe; the normal direction of each initial center point is computed with a self-defined direction template; finally, centered on the initial center point and along its normal direction, an adaptive-width weighted gray gravity-center method yields the sub-pixel center coordinates of the stripe, realizing its accurate extraction.
Description of the principles of the Algorithm
The principle of the line structured light stripe center extraction method is shown in fig. 16. The specific steps are as follows:
(1) Image preprocessing. As described above, the collected stripe image is first filtered and threshold-segmented to remove interference noise and irrelevant background information and to segment out the light-stripe features.
(2) Initial center extraction. When the collected stripe is wide, the center line in the middle of the stripe represents the object's shape more intuitively. The extraction strips the binary image down to an image only one pixel wide, called the skeleton. The stripe is binarized with the threshold set above: values greater than the gray value T are marked 1 and values less than T are marked 0. The stripe skeleton is then extracted with the Zhang thinning algorithm.
In the stripe image, whether a point should be deleted is decided point by point from whether its 8 neighboring pixels satisfy the skeleton conditions. Suppose a foreground point P1 has gray value 1; its 8 neighborhood pixels are shown in fig. 18.
P1 is deleted when its 8-neighborhood satisfies the deletion conditions of the two sub-iterations below (the standard Zhang thinning conditions).
The first step judges the deletion conditions as: 2 ≤ N(P1) ≤ 6; S(P1) = 1; P2·P4·P6 = 0; P4·P6·P8 = 0.
The second step judges the deletion conditions as: 2 ≤ N(P1) ≤ 6; S(P1) = 1; P2·P4·P8 = 0; P2·P6·P8 = 0.
In the formulas, N(P1) is the number of non-zero neighbors of P1 (i.e. pixels marked 1), and S(P1) is the cumulative number of 0–1 transitions in the ordered sequence P2, P3, …, P9, P2 of binary gray values. In the example of fig. 18, P1 has 4 non-zero neighbors, P3, P5, P7 and P8, so N(P1) = 4; meanwhile, in the sequence P2, P3, …, P9, P2 the pairs P2P3, P4P5 and P6P7 are each ordered 0–1, accumulating to 3, so S(P1) = 3.
Traversing the whole image with these deletions yields the thinned stripe skeleton, completing the skeleton extraction. Each resulting skeleton point p(u, v) is defined as an initial center point. When extracting a stripe of complex curvature, the gray gravity-center method ignores the normal direction, so curved or complex parts of the stripe cannot be extracted and the extracted center coordinates are incomplete; the normal direction of every skeleton point therefore has to be computed. Existing ways of obtaining the center-line normal are mainly the Hessian matrix and curve fitting, but they suffer from heavy computation and low processing speed. For this reason, the direction template technique is applied to each skeleton point to obtain the normal direction of the initial center point.
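The thinning step can be sketched with the standard two-sub-iteration Zhang-Suen scheme; the embodiment's exact implementation is not given in the text, so this is an illustrative version:

```python
import numpy as np

def zhang_suen_thin(img):
    """Zhang-Suen thinning of a binary (0/1) image down to a 1-pixel skeleton."""
    img = np.asarray(img, dtype=np.uint8).copy()

    def neighbours(y, x):
        # P2..P9, clockwise, starting from the pixel above P1.
        return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    n = neighbours(y, x)
                    B = sum(n)                                   # N(P1)
                    A = sum(n[i] == 0 and n[(i + 1) % 8] == 1
                            for i in range(8))                   # S(P1)
                    if not (2 <= B <= 6 and A == 1):
                        continue
                    P2, P4, P6, P8 = n[0], n[2], n[4], n[6]
                    if step == 0 and P2*P4*P6 == 0 and P4*P6*P8 == 0:
                        to_delete.append((y, x))
                    elif step == 1 and P2*P4*P8 == 0 and P2*P6*P8 == 0:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
                changed = True
    return img

# A 3-pixel-wide horizontal bar thins to a single-pixel line.
bar = np.zeros((7, 9), dtype=np.uint8)
bar[2:5, 1:8] = 1
skel = zhang_suen_thin(bar)
```

Each surviving pixel of `skel` is an initial center point p(u, v) for the subsequent normal-direction and sub-pixel refinement steps.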
Within a pixel region, four patterns, vertical, horizontal, 45° left-inclined and 45° right-inclined, represent the trend of the line structured light stripe. The template size is generally related to the stripe thickness: a template that is too large loses stripe detail, while one that is too small cannot reflect the stripe trend. To describe the trend accurately, four templates T1, T2, T3 and T4 are designed here, corresponding to the four stripe trends.
Templates T1, T2, T3 and T4 represent the vertical, horizontal, 45° left-inclined and 45° right-inclined stripe trends respectively. Centered on the initial center point extracted after thinning, each of the 4 templates is applied to the gray image of the stripe as
H_k = Σ_i Σ_j C(u + i, v + j) · T_k(i, j)
where k = 1, 2, 3, 4; C(u, v) is the pixel gray value at the initial center point, and T_k(i, j) is the value at the corresponding template position. For each center point, 4 different H values are obtained; they reflect the degree of correlation between the image and the template, and the larger H, the higher the correlation. When H_k = max{H1, H2, H3, H4}, the normal direction of the initial center point is closest to the direction of the k-th template.
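The template-correlation step can be sketched with small 3×3 templates; the patent's own templates are larger and are not reproduced in the text, so the templates below are illustrative only:

```python
import numpy as np

# Illustrative 3x3 direction templates (the patent's templates are larger,
# but the correlation principle H_k = sum C * T_k is the same).
T = {
    "vertical":   np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float),
    "horizontal": np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], float),
    "left45":     np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], float),
    "right45":    np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], float),
}

def stripe_direction(img, u, v):
    """Correlate the 3x3 patch at (u, v) with each template; the template
    with the largest response H_k gives the local stripe trend."""
    patch = np.asarray(img, float)[u - 1:u + 2, v - 1:v + 2]
    scores = {name: float((patch * t).sum()) for name, t in T.items()}
    return max(scores, key=scores.get)

# A bright horizontal stripe through row 2:
img = np.zeros((5, 5))
img[2, :] = 200.0
d = stripe_direction(img, 2, 2)
```

Once the best-matching template is known, the normal used for refinement is taken perpendicular to (or, per the text's convention, aligned with) that template's direction.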
Assuming the gray value of a pixel (a, b) lying in the normal direction of the extracted initial center point p(u, v) is I(a, b), the distance from that pixel to p can be calculated by
d = ± sqrt( (a − u)² + (b − v)² )
where the sign of d follows the positive or negative normal direction of p.
The width of the light stripe varies with the surface roughness of the object, so the stripe width along the normal of the initial center point also differs at different positions. Adapting the pixel width of the stripe accordingly improves the center extraction precision. Each initial center point is searched along its normal direction until the stripe edge is detected. The distances between all pixels on the normal and the initial center point are then computed, with the gray value of each point on the normal taken as its weight, and the precise extraction is carried out with the gray gravity-center method. The offset between the precisely extracted point p' and the coarsely extracted point p is
Δd = ( Σ_{(a,b)∈L} d · I(a, b) ) / ( Σ_{(a,b)∈L} I(a, b) )
where L is the pixel width of the stripe along the normal of the initial center point p and varies with p, and I(a, b) is the gray value, i.e. the weight, of pixel (a, b) on the normal. The precise center coordinate extracted on the normal is p'(u_p, v_p):
u_p = u + Δd · cos θ,  v_p = v + Δd · sin θ
where θ is the angle between the normal-direction vector and the X coordinate axis. With these formulas the center coordinates of stripes of different widths are computed adaptively.
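The adaptive-width weighted refinement along the normal can be sketched as follows; the sampling scheme, width limit, and edge threshold are assumptions for illustration, not the embodiment's exact parameters:

```python
import numpy as np

def refine_center(img, p, theta, max_half_width=10, edge_thresh=10):
    """Weighted gray gravity-center refinement of an initial center point p
    along its normal direction theta. The sampling width adapts by excluding
    pixels below edge_thresh (i.e. outside the stripe). Simplified sketch."""
    img = np.asarray(img, float)
    n = np.array([np.cos(theta), np.sin(theta)])   # unit normal vector
    ds, ws = [], []
    for d in range(-max_half_width, max_half_width + 1):
        q = np.rint(np.asarray(p, float) + d * n).astype(int)
        if not (0 <= q[0] < img.shape[0] and 0 <= q[1] < img.shape[1]):
            continue
        w = img[q[0], q[1]]
        if w < edge_thresh:
            continue                               # beyond the stripe edge
        ds.append(d)
        ws.append(w)
    ds, ws = np.array(ds, float), np.array(ws, float)
    offset = (ds * ws).sum() / ws.sum()            # weighted mean offset
    return np.asarray(p, float) + offset * n       # p' = p + offset * n

# Stripe whose true center lies between rows 4 and 5 (asymmetric profile),
# with the normal pointing along the row axis (theta = 0):
img = np.zeros((10, 10))
img[3, 5], img[4, 5], img[5, 5], img[6, 5] = 40, 120, 120, 40
c = refine_center(img, (4, 5), theta=0.0)
```

The integer skeleton point (4, 5) is shifted half a pixel along the normal toward the true energy center, illustrating the sub-pixel accuracy of the weighted gravity-center step.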
Results of the experiment
At present there is no uniform standard for verifying stripe-center extraction accuracy. To check the effect of the algorithm above, the improved algorithm is used to extract the centers of different types of light stripes, as shown in figs. 19 to 21. Various conventional center extraction algorithms are then compared with the improved algorithm to verify its extraction effectiveness. A 1280×960 image of a curved light stripe is selected for center extraction; figs. 22 to 25 show the extraction results of each algorithm.
To test the efficiency of each algorithm, this embodiment measured the running time of extracting the center coordinates of the curved stripe; the table below lists the running time of each algorithm.
As the figures show, the algorithm presented here extracts the centers of different types of stripes accurately and completely. When the different algorithms are applied to the curved stripe (figs. 21 to 24), the extremum method (fig. 22) proves unstable and imprecise, and the center coordinates extracted by the traditional gray gravity-center method (fig. 23) show losses and incomplete information. Both the improved gray gravity-center extraction algorithm (fig. 21) and the Steger method (fig. 24) follow the trend of the stripe accurately, without loss or incompleteness of the center coordinates; but the running time of the improved gray gravity-center method is greatly reduced compared with the Steger method, improving operating efficiency. The accuracy and efficiency of the improved algorithm are thus much higher than those of the traditional center extraction algorithms and satisfy the system's requirements.
Multiview point cloud processing
Because line structured light acquires the point cloud of only one surface of the forging per scan, scanning from multiple viewing angles is needed to obtain the complete three-dimensional point cloud of the forging. The point clouds from different viewing angles are aligned through their common parts, so correctly matching the common part of each cloud yields the complete point cloud data. The point cloud processing is shown in fig. 25.
Point cloud preprocessing
The center extraction algorithm yields accurate two-dimensional sub-pixel coordinates of the stripe center; the model established earlier converts them to three-dimensional coordinates, giving the three-dimensional point cloud of the measured surface. The roughness of the measured surface, the illumination environment, and the measuring equipment all introduce errors during image acquisition, producing "bad points" in the point cloud, so the cloud must be smoothed and denoised before stitching. Because repeated scans yield a very large number of points, which lowers stitching efficiency, the cloud also needs to be thinned. Preprocessing the point cloud before measurement therefore improves both its quality and the precision of the three-dimensional measurement.
Point cloud denoising
Outlier noise does not affect the overall profile model of the measured object, but the false information it introduces causes errors in three-dimensional measurement; denoising the point cloud therefore effectively reduces spurious points and improves the accuracy of reconstruction and measurement. The normal vector and curvature change at each sampling point can be obtained by evaluating local point cloud features, and the noise can be separated on that basis. However, that algorithm has a long running time, low efficiency, and complex operation, so its results are error-prone. A statistical method is therefore chosen to denoise the point cloud: a neighborhood threshold is set first, the distances from each point to its nearby neighbors are averaged, and the resulting mean distances follow a Gaussian distribution. If a point's mean distance to its neighbors exceeds the global average distance, it is considered a noise point and removed.
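The statistical denoising step described above can be sketched as follows. This is a minimal illustration, not the patent's exact procedure: the neighborhood size `k` and the standard-deviation multiplier `std_ratio` are hypothetical tuning parameters (the patent fixes neither), and SciPy's k-d tree is used for the neighbor search.

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_removal(points, k=8, std_ratio=1.0):
    """Remove points whose mean distance to their k nearest neighbors
    exceeds the global mean by more than std_ratio standard deviations."""
    tree = cKDTree(points)
    # query k+1 neighbors: column 0 is the point itself (distance 0)
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)   # per-point mean neighbor distance
    threshold = mean_d.mean() + std_ratio * mean_d.std()
    keep = mean_d <= threshold
    return points[keep], keep
```

A point far from the cluster has a much larger mean neighbor distance than the Gaussian bulk and falls above the threshold, matching the criterion stated in the text.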
Point cloud sparsification
No matter what equipment is used to acquire the data, the resulting point cloud is massive, and excessive redundancy inflates the data volume and degrades the efficiency of later point cloud stitching. Therefore, after the point cloud has been denoised and smoothed, it is thinned in order to reduce the data volume and improve stitching efficiency.
The thinning method based on average point distance computes the average distance between points of the cloud within a given region and takes this average as the standard. In dense regions of the cloud, the distance between points is smaller than the set standard; in sparse regions, it is larger than the set standard distance. Thinning is therefore achieved by comparing point spacings against the average distance to decide whether a data point should be deleted.
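The average-distance thinning described above can be sketched as a greedy pass. This is one possible reading of the method, under stated assumptions: the standard is the cloud's mean nearest-neighbor distance (optionally scaled by a hypothetical `scale` factor), and a point is dropped when an already-kept point lies closer than the standard, so dense regions are pruned while sparse regions are untouched.

```python
import numpy as np
from scipy.spatial import cKDTree

def thin_by_average_distance(points, scale=1.0):
    """Greedy thinning: keep a point only if no previously kept point
    lies within scale * (mean nearest-neighbor distance)."""
    tree = cKDTree(points)
    d, _ = tree.query(points, k=2)        # d[:, 1] = nearest-neighbor distance
    standard = scale * d[:, 1].mean()
    kept = []
    for p in points:
        # dense regions violate the standard and get pruned
        if all(np.linalg.norm(p - q) >= standard for q in kept):
            kept.append(p)
    return np.asarray(kept)
```

The O(n·m) inner loop is fine for a sketch; a production version would grid the space or reuse the k-d tree for the kept set.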
Multiview point cloud stitching
Multi-view point cloud data based on line structured light is typically acquired through one of two relative motions: either the coordinate system is fixed and the measured object is placed in different poses relative to the measuring equipment, or the measured object rests on a worktable and the measuring equipment scans it from multiple viewing angles. In either case, the same point on the measured object must be transformed between different reference coordinate systems so that the coordinate systems are unified.
The currently popular stitching methods mainly comprise mechanical absolute positioning, multi-view marker positioning, and ICP (iterative closest point) stitching.
The turntable method is the most common mechanical absolute positioning method. It relies on the relative positions of the camera coordinate system and the turntable coordinate system being fixed, so point cloud conversion only requires determining the rotation angle of the turntable to obtain a rotation matrix, after which point cloud data from different viewing angles can be stitched. The method is fast, simple, and efficient, but places high demands on hardware such as turntable precision.
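A minimal sketch of turntable stitching follows, assuming an idealized setup the patent does not specify: the rotation axis is the z axis through the origin of the reference frame, and the axis has already been calibrated. Each scan is rotated back by its known turntable angle and the results are merged.

```python
import numpy as np

def rotz(angle_deg):
    """Rotation matrix about the turntable (z) axis."""
    a = np.radians(angle_deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def stitch_turntable(scans, angles_deg):
    """Undo each scan's turntable rotation and merge into one cloud.
    scans: list of (n, 3) arrays captured after rotating the object by
    the corresponding angle in angles_deg."""
    merged = [scan @ rotz(-a).T for scan, a in zip(scans, angles_deg)]
    return np.vstack(merged)
```

In a real system the axis position and direction come from calibration; only the per-view angle is read from the rotation platform, which is why turntable precision matters.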
In industry, point cloud stitching of large objects mostly adopts multi-view marker positioning algorithms. These generally include the fixed-sphere method, the plane method, the point-location method, and so on; the key is to minimize the matching error rate by means of marker points. The markers are generally circular and are attached to the surface of the measured object so that the point clouds collected at different viewing angles can be positioned and stitched.
ICP establishes an error function E(R, t) between two coordinate systems and iteratively reduces the registration error to find the best matching points across the coordinate systems. If the error function E(R, t) is minimized, an optimal transformation matrix is obtained. Although the precision of the ICP algorithm is high, pairing errors occur when the point cloud density is large, making the solved R and t inaccurate. Moreover, if the initial registration value or the corresponding-point matching strategy is chosen poorly, the algorithm may become trapped in a local minimum, fail to converge, and yield an incorrect result.
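The ICP loop described above can be sketched as follows: pair each source point with its nearest neighbor in the target, fit the rigid transform minimizing E(R, t) over those pairs by SVD, apply it, and repeat. This is a generic point-to-point ICP sketch, not the patent's implementation; the iteration count is a hypothetical parameter, and no outlier rejection is included.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst via SVD."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Point-to-point ICP: nearest-neighbor pairing + rigid refit."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)         # current nearest-neighbor pairing
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur
```

The failure modes in the text are visible here: with a poor initial pose the nearest-neighbor pairing is wrong, the fitted R and t are biased, and the loop can settle in a local minimum.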
For large industrial objects, marker points are generally added to the surface so that the point clouds from the individual viewing angles can be stitched. The spatial matching relationship between point clouds at different viewing angles is obtained mainly by identifying the information carried by the markers. To facilitate extraction of the circle center, a concentric-circle marker with a 6 mm inner circle and a 10 mm outer circle is used; its shape is shown in fig. 26.
The stitching principle based on marker points is as follows: markers are attached to the surface of the measured object or to the background, their edges are extracted by image processing, and their circle-center coordinates are obtained by fitting, yielding the three-dimensional coordinates of the markers at each viewing angle. The mutual transformation between the reference-point coordinates at different viewing angles is then obtained by calculation, so that the point clouds of the measured object at several viewing angles can be stitched; that is, the transformation matrix between the reference-point coordinates realizes the transformation between the two coordinate systems, the rotation matrix R and translation vector T are computed, and the point clouds are stitched to obtain complete object model information.
The mathematical model for multi-view stitching based on marker points is shown in fig. 27. A, B, and C are marker points common to different viewing angles. During measurement, the markers are captured at each viewing angle and their circle-center coordinates are obtained by image processing. The conversion between two coordinate systems can be realized as long as at least 3 common reference points exist in the measurement.
Construction of the stitching model based on the least squares method
The marker points are randomly distributed in the overlapping region of the two viewing angles. Suppose the marker point sets are respectively:

M = {m_i | m_i ∈ M, i = 1, 2, 3, ..., k}

N = {n_i | n_i ∈ N, i = 1, 2, 3, ..., k}

A matrix A is obtained by calculating the distance between any two points in the set M, with entries a_ij = ||m_i − m_j||.
A distance matrix B composed of the pairwise distances of the point set N is obtained in the same way. Since the markers are fixed, their mutual positions do not change between viewing angles, so corresponding points must satisfy the following conditions:

(1) if the difference between two distance values does not exceed α, the two distance values are considered equal;

(2) if, for a pair of points, the number of distance values equal within α reaches N − 1 (where N is the number of marker points), the marker points under the two fields of view are considered corresponding matching points.
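The distance-matrix matching conditions above can be sketched as follows. This is a simplified illustration: instead of counting equal distances pairwise, it compares each point's whole sorted distance profile within the tolerance α, which is equivalent when the inter-marker distances are distinct; `alpha` is the tolerance from condition (1).

```python
import numpy as np

def match_markers(M, N, alpha=1e-3):
    """Match markers across two views using the rigid-motion invariance
    of the pairwise-distance matrices A (for M) and B (for N)."""
    A = np.linalg.norm(M[:, None] - M[None, :], axis=2)
    B = np.linalg.norm(N[:, None] - N[None, :], axis=2)
    matches = []
    for i in range(len(M)):
        for j in range(len(N)):
            # a rigid transform preserves the multiset of distances
            if np.allclose(np.sort(A[i]), np.sort(B[j]), atol=alpha):
                matches.append((i, j))
    return matches
```

If two markers happen to share a distance profile, the pairwise-counting form of condition (2) (or a geometric consistency check) is needed to disambiguate.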
Through the above principle, let the matched marker point sets at the two viewing angles be S = {s_i | s_i ∈ S, i = 1, 2, 3, ..., k} and T = {t_i | t_i ∈ T, i = 1, 2, 3, ..., k}. When the objective function

F(R, T) = Σ_{i=1}^{k} ||t_i − (R s_i + T)||²

reaches its minimum value, the transformation matrices R and T between the three-dimensional data at the different viewing angles are obtained and the point cloud matching is completed.
To solve the transformation matrices R and T, a singular value decomposition method based on a three-point matrix is used herein. For the marker point sets S = {s_i | s_i ∈ S, i = 1, 2, 3, ..., k} and T = {t_i | t_i ∈ T, i = 1, 2, 3, ..., k} under the two fields of view, let the coordinates of three points obtained at the first viewing angle be s_1, s_2, s_3, and the coordinates of the same three points measured at the second viewing angle be t_1, t_2, t_3. The mutual transformation can then be expressed through the matrix relationship:

t_i = R s_i + T    (5.2)
R = [v]^{-1}[w]    (5.3)

T = t_1 − s_1[v]^{-1}[w]    (5.4)
Substituting (5.3) and (5.4) into (5.2) yields the coordinate transformation relation:

t_i = [v]^{-1}[w](s_i − s_1) + t_1    (5.5)

where w and v in the formula are unit orthogonal coordinate vectors constructed from the marker points.
Because of errors in the measuring system, the transformation is written as t_i = R s_i + T + δ_i, where δ_i is the measurement error of the system. The objective function based on the least squares method is:

F(R, T) = Σ_{i=1}^{k} ||t_i − (R s_i + T)||²    (5.6)
calculating s 'and t', i ═ 1, 2.
s′_{i}＝s_{i}s (5.9)
Substituting equations (5.9) and (5.10) into (5.6) yields the objective function as:
the rotation matrix R can be converted into the minimum value K of the function of solving the least square method through the objective function, and the translation matrix T can be obtained through TRs conversion and solving. And matching the point clouds under different visual angles into the target point cloud after rigid transformation matrixes R and T under different visual angles are obtained.
The above description is only exemplary of the present invention and should not be taken as limiting, and any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (5)
1. A method for extracting the center of line structured light, characterized by comprising the following steps: first, filtering and threshold segmentation are performed on the collected image containing the line structured light stripes, segmenting out the light stripe features of the line structured light; binarization processing is carried out on each pixel point in the light stripe features, and initial central points p(u, v) are preliminarily extracted; taking each extracted initial central point as the center, the normal direction of each initial central point is determined by a direction template method; and midpoint extraction is performed along the normal direction of each initial central point by the gray-scale gravity center method, extracting the accurate central point.
2. The method for extracting the center of line structured light according to claim 1, wherein the initial center point p (u, v) is extracted by the following steps:
carrying out binarization processing on each pixel point in the light stripe features: the gray value of each pixel point is compared with a threshold value T extracted from the light stripe features, and pixel points greater than the threshold value T are marked as 1, otherwise as 0; each pixel point P_1 is scanned point by point, and if the pixel point P_1 is marked as 1, its 8 adjacent pixel points P_2, P_3, ..., P_9, which are arranged in sequence clockwise around the pixel point P_1, are judged; if the following condition is satisfied:

2 ≤ N(P_1) ≤ 6 and S(P_1) = 1

the pixel point P_1 is deleted; the retained pixel points P_1 are extracted as the initial central points p(u, v), wherein N(P_1) is the number of P_2, P_3, ..., P_9 marked as 1, and S(P_1) represents the cumulative number of times the marks in the sequence P_2, P_3, ..., P_9, P_2 satisfy the 0-1 ordering.
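The two neighborhood quantities in claim 2 can be computed as follows; this sketch assumes the 8 neighbors P_2..P_9 are given as a clockwise list of 0/1 marks, exactly as the claim defines them (the thinning loop around this test is omitted).

```python
def neighbor_stats(nbrs):
    """For the clockwise neighbor marks P2..P9 of a pixel P1, return
    N(P1) = number of neighbors marked 1, and
    S(P1) = number of 0->1 transitions in the cyclic sequence P2..P9,P2."""
    n = sum(nbrs)
    s = sum(1 for a, b in zip(nbrs, nbrs[1:] + nbrs[:1]) if (a, b) == (0, 1))
    return n, s
```

A single run of 1s among the neighbors gives S(P1) = 1, which is what keeps the extracted skeleton one pixel wide and connected.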
3. The method for extracting the center of line structured light according to claim 1, wherein the step of determining the normal direction of the initial center point is as follows:
taking each extracted initial central point as the center, the following operation is performed on the gray-scale image of the line structured light stripe:

H_k(u, v) = Σ_{i=1}^{m} Σ_{j=1}^{n} T_k(i, j) · C(u + i, v + j),  k = 1, 2, 3, 4

in the formula, T_k is the k-th direction template matrix, k = 1, 2, 3, 4; m is the total number of rows of the template matrix and n is the total number of columns; u and v are respectively the abscissa and ordinate of the initial central point p in the pixel coordinate system; C(u, v) represents the pixel gray value at the initial central point; T_k(i, j) represents the value at row i and column j of the corresponding template matrix, i and j being the row and column numbers, respectively; wherein,

T_1, T_2, T_3 and T_4 respectively represent the template matrices that are vertical, horizontal, inclined 45° to the left, and inclined 45° to the right with respect to the light stripe; for each initial central point, the trend of the template matrix corresponding to the maximum value of H_k(u, v) is taken as the normal direction of the initial central point.
4. The method for extracting the center of line structured light according to claim 1, wherein after the normal direction of the initial center point is determined, the precise center point is extracted by the following steps:
searching from each initial central point along the normal direction until the edge of the light stripe is detected; the distance d(a, b) between each pixel point and the initial central point in the normal direction of the light stripe is calculated, the gray value of each point in the normal direction is taken as a weight, and the central point is extracted by the gray-scale gravity center method, wherein the offset distance d between the extracted point p′ and the pre-extracted point p is:

d = Σ_{(a,b)∈L} I(a, b) · d(a, b) / Σ_{(a,b)∈L} I(a, b)

wherein L is the pixel width of the initial central point p in the normal direction, I(a, b) is the gray value of the pixel point (a, b) in the normal direction, and a and b are respectively the row number and column number of the pixel point in the two-dimensional matrix of the pixel coordinate system;

the accurate center coordinate extracted on the normal is p′(u_p, v_p), which satisfies:

u_p = u + d·cos θ,  v_p = v + d·sin θ

in the formula, θ is the angle between the normal direction vector and the X coordinate axis.
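The refinement of claim 4 can be sketched as follows: sample pixels along the normal through the initial center, use their gray values as weights, compute the signed weighted-mean offset d, and shift the center by d along the normal. The sampling `half_width` is a hypothetical parameter standing in for the detected stripe width L, and nearest-pixel sampling replaces interpolation for brevity.

```python
import numpy as np

def refine_center(gray, u, v, theta, half_width=3):
    """Gray-scale gravity-center refinement along the normal through
    (u, v); theta is the normal's angle to the row (X) axis."""
    offsets = np.arange(-half_width, half_width + 1, dtype=float)
    # nearest pixels on the normal line (interior points assumed)
    rows = np.round(u + offsets * np.cos(theta)).astype(int)
    cols = np.round(v + offsets * np.sin(theta)).astype(int)
    weights = gray[rows, cols].astype(float)
    d = (weights * offsets).sum() / weights.sum()   # signed mean offset
    return u + d * np.cos(theta), v + d * np.sin(theta)
```

Because the weights are gray values, a bright ridge off to one side of the initial center pulls d toward it, giving the sub-pixel center.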
5. A three-dimensional measurement method for a forge piece, characterized in that during measurement, line structured light is used to scan the surface of the forge piece, the center extraction method of line structured light according to any one of claims 1 to 4 is used to extract the center coordinate data of each line of structured light, complete three-dimensional point cloud data is obtained, and the measurement is completed.
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

CN202110652167.6A CN113324478A (en)  20210611  20210611  Center extraction method of line structured light and threedimensional measurement method of forge piece 
Publications (1)
Publication Number  Publication Date 

CN113324478A true CN113324478A (en)  20210831 
Family
ID=77420515
Legal Events
Date  Code  Title  Description

PB01  Publication
SE01  Entry into force of request for substantive examination
WD01  Invention patent application deemed withdrawn after publication
Application publication date: 20210831