CN113324478A - Center extraction method of line structured light and three-dimensional measurement method of forge piece - Google Patents


Info

Publication number
CN113324478A
CN113324478A
Authority
CN
China
Prior art keywords
point
center
light
pixel
central point
Prior art date
Legal status
Pending
Application number
CN202110652167.6A
Other languages
Chinese (zh)
Inventor
余永维
杜柳青
孙兆军
王康
Current Assignee
Chongqing University of Technology
Original Assignee
Chongqing University of Technology
Priority date
Filing date
Publication date
Application filed by Chongqing University of Technology
Priority to CN202110652167.6A
Publication of CN113324478A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/002 - Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G01B 11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B 11/254 - Projection of a pattern, viewing through a pattern, e.g. moiré

Abstract

The invention discloses a center extraction method for line structured light and a three-dimensional measurement method for a forge piece. In the center extraction method, the acquired image containing line structured light stripes is first filtered and threshold-segmented to segment out the light stripe features of the line structured light. An initial center point p(x, y) of the line structured light stripe is then extracted; with each initial center point p(x, y) as the center, the normal direction of each initial center point is determined by a direction template method, and the precise center point is extracted along that normal by the gray barycenter method. The center extraction method for line structured light has the advantages of high precision and good stability, and the three-dimensional measurement method of the forge piece has the advantage of high detection precision.

Description

Center extraction method of line structured light and three-dimensional measurement method of forge piece
Technical Field
The invention relates to the technical field of detection and measurement, in particular to a center extraction method of line structured light and a three-dimensional measurement method of a forge piece.
Background
With the development of modern manufacturing, a large number of irregular curved surfaces in products need three-dimensional measurement for detection and verification. The three-dimensional size of a forge piece is an important index of machining quality: accurate measurement of the three-dimensional size not only evaluates whether the quality of the forge piece is qualified, but also provides important feedback for later design improvement and promotes optimization of the machining process. Traditional laser scanners and three-coordinate measuring instruments are expensive, and binocular-camera three-dimensional measurement is affected by the working environment, so its measurement stability is poor.
The line structured light vision measurement is non-contact measurement combining laser scanning and vision processing technology, has the advantages of high measurement precision, strong stability, real-time performance and universality, and becomes a research hotspot in solving the three-dimensional measurement problem at home and abroad. Many scholars accurately extract the sub-pixel coordinates of the light stripes by improving the line structure light stripe center extraction algorithm, and then improve the three-dimensional measurement precision.
However, after the line structured light is projected onto the surface of the object to be measured, a certain width exists, which causes measurement errors, and how to accurately extract the center of the line structured light becomes a problem to be solved urgently.
Disclosure of Invention
Aiming at the defects of the prior art, the technical problems to be solved by the invention are as follows: how to provide a high-precision and good-stability line structured light extraction method and a three-dimensional forge piece measurement method for improving detection precision.
In order to solve the technical problems, the invention adopts the following technical scheme:
a method for extracting the center of line structured light is characterized by comprising the following steps: firstly, filtering and threshold segmentation are carried out on the collected image with the linear light stripes of the linear structure, and the light stripe characteristics of the linear structure light are segmented; carrying out binarization processing on each pixel point in the optical strip characteristics, and preliminarily extracting an initial central point p (u, v); taking the extracted initial central points as centers, and determining the normal direction of each initial central point by adopting a direction template method; and (4) performing midpoint extraction in the normal direction of each initial central point by adopting a gray scale gravity center method, and extracting an accurate central point.
Further, the step of extracting the initial central point p (u, v) is as follows:
Each pixel in the light stripe features is binarized: its gray value is compared with a threshold T extracted from the light stripe features, and pixels greater than T are marked 1, the rest 0. Each pixel P1 is then scanned point by point. If P1 is marked 1, its 8 neighbouring pixels P2, P3, ..., P9, arranged clockwise around P1, are examined. If P1 satisfies both of the following conditions:

2 ≤ N(P1) ≤ 6

S(P1) = 1

the pixel P1 is deleted; the pixels P1 that remain are extracted as the initial center points p(u, v). Here N(P1) is the number of pixels among P2, P3, ..., P9 marked 1, and S(P1) is the cumulative number of 0-to-1 transitions in the ordered sequence P2, P3, ..., P9, P2.
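The binarization and neighbourhood test above can be sketched as follows. This is a minimal single-pass sketch (the full procedure may iterate), and the 5×5 test image is an illustrative assumption:

```python
import numpy as np

def neighbours(img, r, c):
    """Clockwise 8-neighbourhood P2..P9 of pixel P1 at (r, c),
    starting from the pixel directly above P1."""
    return [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
            img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]

def initial_centers(gray, T):
    """Binarize against threshold T, then keep a pixel P1 only if it does
    NOT satisfy the deletion conditions 2 <= N(P1) <= 6 and S(P1) == 1."""
    binary = (gray > T).astype(np.uint8)
    keep = []
    for r in range(1, binary.shape[0] - 1):
        for c in range(1, binary.shape[1] - 1):
            if not binary[r, c]:
                continue
            nb = neighbours(binary, r, c)
            n = sum(nb)                                    # N(P1)
            s = sum((nb[i] == 0 and nb[(i + 1) % 8] == 1)  # S(P1): 0->1 transitions
                    for i in range(8))
            if not (2 <= n <= 6 and s == 1):
                keep.append((r, c))  # survives deletion -> initial center candidate
    return keep
```

For a thin horizontal stripe the interior pixels have two lit neighbours but two 0-to-1 transitions, so they survive as center candidates.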
Further, the step of determining the normal direction of the initial center point is as follows:
taking the extracted initial central point as a center, and performing the following operation on the gray level image of the line structured light strip:
H_k(u, v) = Σ_i Σ_j T_k(i, j) · C(u - (m + 1)/2 + i, v - (n + 1)/2 + j),  k = 1, 2, 3, 4

In the formula, T_k is the k-th direction template, with k = 1, 2, 3, 4; m is the total number of rows and n the total number of columns of the template matrix; u and v are respectively the abscissa and ordinate of the initial center point p in the pixel coordinate system; C(u, v) is the pixel gray value at the initial center point, and T_k(i, j) is the value in row i, column j of the corresponding template matrix, i and j being the row and column indices, wherein:
[The four template matrices T1 to T4 are given as figures in the original document.]
T1, T2, T3 and T4 respectively denote the template matrices oriented vertical, horizontal, inclined 45° to the left and inclined 45° to the right with respect to the light stripe. For each initial center point, the orientation of the template matrix corresponding to the maximum value of H_k(u, v) is taken as the normal direction of that initial center point.
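A sketch of the template-response computation follows. The patent's actual template matrices are given only as figures, so the 3×3 templates below are hypothetical stand-ins for the four orientations:

```python
import numpy as np

# Hypothetical 3x3 direction templates standing in for the patent's T1..T4
# (vertical, horizontal, 45-deg left, 45-deg right).
TEMPLATES = {
    "vertical":   np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float),
    "horizontal": np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], float),
    "diag_left":  np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], float),
    "diag_right": np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], float),
}

def best_direction(gray, u, v):
    """Correlate each template with the neighbourhood of the initial center
    (u, v); the template with the largest response H_k gives the orientation
    used as the normal direction in the text."""
    patch = gray[u - 1:u + 2, v - 1:v + 2].astype(float)
    scores = {name: float((patch * t).sum()) for name, t in TEMPLATES.items()}
    return max(scores, key=scores.get)
```

A vertical bright stripe maximizes the response of the vertical template, as expected.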
Further, after the normal direction of the initial central point is determined, the following steps are adopted to extract an accurate central point:
Starting from each initial center point, search along its normal direction until the edge of the light stripe is detected. For each pixel along the normal of the initial center point of the light stripe, compute its distance d to the initial center point; taking the gray value of each point on the normal as a weight, extract the center point by the gray barycenter method. The offset distance between the extracted point p' and the pre-extracted point p is:

d̄ = Σ_{(a,b)∈L} d(a, b) · I(a, b) / Σ_{(a,b)∈L} I(a, b)

where L is the pixel width through the initial center point p along the normal direction, I(a, b) is the gray value of pixel (a, b) on the normal, and a and b are respectively the row and column indices of the pixel in the two-dimensional matrix of the pixel coordinate system.

The precise center coordinate extracted on the normal, p'(u_p, v_p), satisfies:

u_p = u + d̄ · cos θ
v_p = v + d̄ · sin θ

where θ is the angle between the normal direction vector and the X coordinate axis.
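The gray barycenter refinement along the normal can be sketched as below. Sampling by nearest-pixel rounding and the fixed half window are illustrative assumptions:

```python
import numpy as np

def subpixel_center(gray, p, normal, half_len):
    """Refine the initial center p = (u, v) by the gray barycenter method:
    sample pixels along the unit normal (cos(theta), sin(theta)), weight
    signed distances d by gray values I, and shift p by the mean offset."""
    u, v = p
    nx, ny = normal                       # unit normal direction vector
    ds, ws = [], []
    for d in range(-half_len, half_len + 1):
        a = int(round(u + d * nx))        # nearest pixel row on the normal
        b = int(round(v + d * ny))        # nearest pixel column on the normal
        ds.append(d)
        ws.append(float(gray[a, b]))      # I(a, b) as the weight
    ds, ws = np.array(ds, float), np.array(ws)
    dbar = (ds * ws).sum() / ws.sum()     # weighted mean offset along normal
    return u + dbar * nx, v + dbar * ny   # p' = p + dbar * (cos, sin)
```

With an asymmetric intensity profile the extracted center shifts toward the brighter side, which is the intended correction.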
A three-dimensional measurement method for a forged piece is characterized in that during measurement, linear structured light is adopted to scan the surface of the forged piece, and the central coordinate data of each linear structured light is extracted by the central extraction method of the linear structured light as claimed in any one of claims 1-5 to obtain complete three-dimensional point cloud data, so that the measurement is completed.
In conclusion, the line structured light center extraction method has the advantages of high precision, good stability and the like, and the three-dimensional measurement method of the forge piece has the advantages of high detection precision and the like.
Drawings
Fig. 1 is a schematic structural diagram of the overall frame layout of the system.
Fig. 2 is a system flow chart of three-dimensional measurement.
FIG. 3 is a pixel coordinate system and an image coordinate system.
Fig. 4 is a camera coordinate system.
FIG. 5 is a diagram of a line-plane measurement model.
FIG. 6 is a schematic diagram of a coordinate system relationship of the scanning system.
Fig. 7 is a schematic diagram of direct three-dimensional measurement.
Fig. 8 is a schematic diagram of oblique three-dimensional measurement.
Fig. 9 is a schematic view of the installation of the camera and the laser projector in this embodiment.
FIG. 10 is a graph of the gray scale profile of an actual light bar cross section.
FIG. 11 is a captured picture of a workpiece with light bars.
Fig. 12 is a diagram illustrating a mean filtering process performed on the collected light bar image.
Fig. 13 is a graph showing the effect of using fixed threshold segmentation.
FIG. 14 is a graph of the effect of using adaptive threshold segmentation.
Fig. 15 is a graph of the effect of using OTSU threshold segmentation.
Fig. 16 is a graph showing the effect of removing the spots.
Fig. 17 is a principle of the line structured light stripe center extraction method in the present embodiment.
Fig. 18 is a schematic diagram of 8 neighborhoods of P1.
Fig. 19-21 are schematic diagrams of the effect of extracting the light center of the line structure by using the improved algorithm of the present invention.
Fig. 22 to 24 are schematic diagrams illustrating the effect of extracting the light center of the line structure by using the conventional method.
FIG. 25 is a flow chart of point cloud processing.
FIG. 26 is a schematic view of a mark.
Fig. 27 is a diagram of a mathematical model for multi-view stitching based on mark points.
Detailed Description
The present invention will be described in further detail with reference to examples.
In the specific implementation: a three-dimensional measurement method for a forge piece based on line structure light is characterized in that a three-dimensional measurement system is firstly constructed, and since the line structure light can only measure three-dimensional information of one section of the surface of a measured object at each time, in order to realize integral scanning of the forge piece, external driving equipment is required to drive a measurement device to scan a plurality of positions of the forge piece. The robot has the advantages of flexible operation, increased measuring range and no environmental limitation, and the robot is used as a mobile carrier to drive the line structure optical vision measuring equipment to realize the scanning of the system. As shown in fig. 1, the system is mainly divided into a hardware module and an algorithm module. The hardware module is composed of a line structured light projector, a robot, a camera and a computer, and the algorithm module is mainly composed of a calibration module and an image processing module.
In the measurement system, the hardware module consists of the scanning measurement equipment and the motion mechanism; the scanning measurement system, composed of a camera and a laser emitter, is fixed at a set angle and position on the end effector of the robot. The robot is connected to the computer through TCP/IP communication with its control cabinet, which controls the motion of the mechanical arm, and a teach pendant is used to check the robot's parameter information and motion state.
The calibration module in the algorithm module comprises camera parameter calibration, line structured light calibration and hand-eye relationship calibration, and various parameters in the scanning system are obtained through the calibration, and a model is established. The image processing module comprises image acquisition, preprocessing, fringe center extraction, coordinate conversion, point cloud processing and point cloud display. As shown in fig. 2, the system flow of three-dimensional measurement includes the following specific operation steps: firstly, calibrating an installed line structure light vision measuring system, then driving a line structure light vision measuring device by a mechanical arm to scan a forge piece, continuously collecting images through a camera, extracting central sub-pixel coordinates of light bars after image processing, and finally completing conversion from two-dimensional central point coordinates of the light bars to three-dimensional physical coordinates by using calibrated parameters, so as to realize acquisition of three-dimensional point cloud data of the forge piece and further complete three-dimensional measurement.
Camera imaging model
As shown in fig. 3, a coordinate system is established on the imaging plane of the camera, namely an image coordinate system and a pixel coordinate system. The pixel coordinate system holds the coordinates (u, v) in the form of a two-dimensional array, each coordinate not representing a real size but representing the position of the coordinate in the array in units of pixels. The x axis and the y axis in the image coordinate system are respectively parallel to the u axis and the v axis, and the physical image is described through the angle of the pixel, so that the conversion of the two-dimensional pixel coordinate and the physical size is realized.
In the two coordinate systems, an arbitrary point in the image is converted between them by the following equations:

u = x / dx + u0
v = y / dy + v0

In the formulas, dx and dy are the physical size of a pixel along the x-axis and y-axis, and the origin of the xO1y image coordinate system is located at (u0, v0) in the uOv pixel coordinate system.
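As a sketch of the conversion between physical image coordinates and pixel coordinates described here (the specific dx, dy, u0, v0 values below are illustrative):

```python
def image_to_pixel(x, y, dx, dy, u0, v0):
    """Map image-plane physical coordinates (x, y) to pixel coordinates,
    per u = x/dx + u0 and v = y/dy + v0."""
    return x / dx + u0, y / dy + v0

def pixel_to_image(u, v, dx, dy, u0, v0):
    """Inverse mapping: pixel coordinates back to physical image coordinates."""
    return (u - u0) * dx, (v - v0) * dy
```

The two functions are exact inverses, realizing the conversion between two-dimensional pixel coordinates and physical size.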
As shown in fig. 4, the camera coordinate system is established with the optical center Oc as the origin and with axes Xc and Yc through the origin parallel to the x and y axes of the image coordinate system. The Zc axis of the camera coordinate system passes through the origin O1 of the image coordinate system, so the focal length f of the camera can be expressed as the distance between O1 and Oc. A point P(Xc, Yc, Zc) in the camera coordinate system satisfies the relationship:

x = f · Xc / Zc,  y = f · Yc / Zc
the world coordinate system is used for describing the mapping relation between the image and the measured object, and is used as the absolute coordinate system of the system, and the coordinate system is the size mapping between the object and the image. Therefore, the camera coordinate can be converted into the world coordinate through rotation and translation, the conversion between the camera coordinate and the world coordinate belongs to rigid body conversion, and no deformation phenomenon occurs. Let a point P on the surface of the measured object have a world coordinate system with a coordinate of (X)w,Yw,Zw) The corresponding coordinate in the camera coordinate system is (X)c,Yc,Zc). The relationship between the rotation matrix R and the translation matrix T can be obtained by converting the two matrices as follows:
Figure BDA0003111373870000062
r and T in the above formula are external parameters of the camera, independent of the camera system itself. R is an orthogonal rotation matrix of 3 × 3, representing the direction of the camera; t is a 3 × 1 matrix representing the translational position of the camera. Then through a homogeneous coordinate transformation:
[Xc; Yc; Zc; 1] = [R, T; 0, 1] · [Xw; Yw; Zw; 1]
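The rigid transform plus perspective projection described above can be sketched as follows; fx and fy (focal length in pixel units, i.e. f/dx and f/dy) and the numeric values are illustrative assumptions:

```python
import numpy as np

def project(Pw, R, t, fx, fy, u0, v0):
    """Pinhole projection: world point -> camera frame (Pc = R @ Pw + t)
    -> normalized image plane -> pixel coordinates."""
    Pc = R @ np.asarray(Pw, float) + t
    Xc, Yc, Zc = Pc
    u = fx * Xc / Zc + u0   # x = f*Xc/Zc scaled by 1/dx, shifted by u0
    v = fy * Yc / Zc + v0   # y = f*Yc/Zc scaled by 1/dy, shifted by v0
    return u, v
```

With R = I and t = 0 the world and camera frames coincide, which makes the projection easy to verify by hand.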
the line structured light vision measuring technique is that after a laser projector emits laser, a light beam is intersected with the surface of an object to form a bright stripe. The position of the collected light stripe image on the camera imaging plane can also be changed because the light stripes are modulated at different positions due to the difference of the surface topography and the height of the object. The camera collects the stripe image after modulation distortion from a certain angle, and the position between the camera and the laser projector is obtained, so that the three-dimensional coordinates of the stripe can be calculated, and the three-dimensional information of the surface of the object can be obtained. The line structured light three-dimensional measurement mathematical model can be generally divided into an analytic geometric model and a perspective projection model:
and (6) analyzing the geometric model. The method needs to acquire geometric parameters such as an included angle and a position between a camera and a laser projector when the camera and the laser projector are installed, and an analytic geometric model is established according to the geometric parameters. The surface coordinates of the space object can be obtained through the model, and the three-dimensional contour of the object is recovered. However, many assumptions exist for establishing the model, such as the required angle between the light plane and the camera, the angle of the optical axis of the camera, and other constraints. The use of this method requires precise adjustment of the position of the camera and the laser projector in order to obtain high precision geometric parameters such as angle and position. Due to the complex process and high requirement, the application range of the analytic geometric model is limited.
Perspective projection model. This approach solves for the line structured light plane equation or the structural parameters of the sensor. It is convenient, suits many occasions, and is widely applied. Model building can be divided into two forms. In the line-plane model, the light plane projected onto the measured object forms rays, and the laser plane determines the spatial three-dimensional information of the light stripes, yielding the morphological features of the measured object. In the surface model, the laser projector emits a special light stripe that carries the information of the measured object's surface; the stripe is imaged on the camera through acquisition, and a measurement model between the camera imaging plane and the light plane is established.
The physical relation between the camera and the laser projector needs to be accurately obtained when the geometric model is analyzed, and the constraint equation for establishing the geometric model according to the distance and the included angle is difficult to operate and low in feasibility. Therefore, a line and surface model with better applicability is used to build the line and surface measurement model, as shown in fig. 5.
The light plane π is projected onto the surface of the measured object. The equation of the light plane in the camera coordinate system is expressed by:

a·xc + b·yc + c·zc + d = 0
Let a point P on the surface of the measured object have pixel coordinates (u, v) after camera acquisition, with corresponding homogeneous coordinates (u, v, 1). The normalized image coordinates of the point are:

[xn, yn, 1]^T = A^(-1) · [u, v, 1]^T
wherein A is the internal reference matrix of the camera.
In the camera coordinate system, let the coordinates of P be (xc, yc, zc). The straight line connecting point P and the camera optical center Oc is represented as:

xc / xn = yc / yn = zc / 1
From the light plane equation in the camera coordinate system and the formula above, the three-dimensional coordinates of point P in the camera coordinate system are uniquely determined:

zc = -d / (a·xn + b·yn + c),  xc = xn·zc,  yc = yn·zc
If the three-dimensional coordinates (xL, yL, zL) of point P in the light plane coordinate system are required, and the rotation matrix R and the translation matrix T are obtained by camera calibration, the conversion relationship can be expressed as:

[xL; yL; zL; 1] = [R, T; 0, 1] · [xc; yc; zc; 1]
by the above formula, a three-dimensional coordinate uniquely corresponding to the point P can be obtained. Therefore, only the internal and external parameters of the camera and the linear structure light parameters are required to be solved through the linear structure light measurement system based on the perspective projection model.
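The ray-plane intersection that recovers the 3D point from a pixel and the calibrated light plane can be sketched as follows; the intrinsic matrix A and plane coefficients below are illustrative values:

```python
import numpy as np

def light_plane_point(u, v, A, plane):
    """Intersect the viewing ray of pixel (u, v) with the light plane
    a*x + b*y + c*z + d = 0 expressed in the camera frame.
    A is the 3x3 intrinsic matrix; returns (xc, yc, zc)."""
    xn, yn, _ = np.linalg.inv(A) @ np.array([u, v, 1.0])  # normalized coords
    a, b, c, d = plane
    zc = -d / (a * xn + b * yn + c)   # the ray (xn*zc, yn*zc, zc) on the plane
    return xn * zc, yn * zc, zc
```

For the plane z = 2 every back-projected ray simply scales its normalized coordinates by 2, which gives an easy sanity check.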
The linear structured light can only measure the three-dimensional information of one section of the surface of the object at each time, so the scanning system combines the linear structured light vision measuring equipment with the robot and realizes the integral scanning of the measured object through the movement of the robot. First, a transformation relationship between coordinate systems needs to be established, so as to realize the coordinate transformation between the linear structure measurement system and the robot, as shown in fig. 6.
The following coordinate systems are included in the figure:
{C}: camera coordinate system;
{L}: line structured light measurement coordinate system;
{B}: base coordinate system of the robot;
{T}: end tool coordinate system of the robot;
H_LB: transformation matrix between the structured light measurement coordinate system and the robot base coordinate system;
H_LC: transformation matrix between the structured light measurement coordinate system and the camera coordinate system;
H_CT: transformation matrix between the camera coordinate system and the robot end tool coordinate system;
H_TB: transformation matrix between the robot end tool coordinate system and the robot base coordinate system.
each of the above matrices is a 4 × 4 homogeneous matrix composed of a rotation matrix R and a translation matrix T. The relationship between the transformation matrices can be expressed as:
HLB=HTBHCTHLC
let the base coordinate system { B } of the robot be defined as the global coordinate system. Obtaining coordinates (x) by perspective projection modelL,yL,zL) The coordinate in the global coordinate system can be converted into (x) by the following formulaB,yB,zB)。
Figure BDA0003111373870000081
During the robot's moving measurement, the camera coordinate system, the robot tool end coordinate system and the line structured light measurement coordinate system are fixed relative to one another, so the relative positions do not change; that is, the elements of both matrices H_LC and H_CT are constant. The robot has feedback delay, and the coordinates of the end effector cannot be transmitted to the PC in time, so computing the H_TB matrix in real time introduces errors, making that approach impractical. Instead, the system sets the robot to move at a constant speed from A to B without rotating, while the camera mounted on the end effector continuously acquires images at a set frame rate. The distance S the robot moves per frame is:

S = v / F

where v is the moving speed and F is the frame rate set by the camera. S represents the displacement of the mechanical arm along the moving direction in a single frame.
The transformation matrix from the coordinate system of the end effector at point A to the base coordinate system can be obtained through D-H kinematic modeling and is defined as H_start; the hand-eye relationship matrix is X. The transformation matrix H_LB from the line structured light coordinate system to the global coordinate system is then:

H_LB = H_start · X

If the mechanical arm moves along the Y-axis direction and the position of the i-th frame is H_i, then H_i equals H_start with its translation component along the Y axis increased by (i - 1) · S, the rotation part remaining unchanged.
if the point (x) in the linear structured light measurement system coordinate system is obtained in the ith frameL-i,yL-i,zL-i) The coordinates may be converted to coordinates in a global coordinate systemPoint (x)B-i,yB-i,zB-i)。
Figure BDA0003111373870000091
The coordinate data under the linear structured light measurement system coordinate system can be converted into the coordinate data under the robot base coordinate system through the formula, and the three-dimensional point cloud data of each frame of section can be unified under the same coordinate system through the uniform motion of the robot, so that the scanning of the measured object is realized.
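The per-frame conversion into the robot base frame can be sketched as below. The composition H_i · X and the pure Y-translation update of H_start are assumptions drawn from the description of uniform, rotation-free motion:

```python
import numpy as np

def frame_displacement(speed, frame_rate):
    """Per-frame travel S = v / F of the uniformly moving end effector."""
    return speed / frame_rate

def to_base_frame(p_L, H_start, X, frame_idx, S):
    """Transform a structured-light point of frame i into the robot base
    frame, assuming pure translation along the base Y axis between frames."""
    H_i = H_start.copy().astype(float)
    H_i[1, 3] += (frame_idx - 1) * S   # Y translation grows by (i-1)*S
    p = np.append(np.asarray(p_L, float), 1.0)  # homogeneous coordinates
    return (H_i @ X @ p)[:3]
```

With identity H_start and X, the third frame simply shifts the point by 2·S along Y, matching the constant-speed model.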
The camera focuses the image formed by the light onto the image plane; commonly used types are CMOS and CCD cameras. The difference between the two lies mainly in how the chip is read: CCD cameras are typically frame-exposed, while CMOS cameras are mostly line-exposed. The image quality of a CCD camera is higher than that of a CMOS camera, and the contrast is obvious. After considering economy and cost-performance, this embodiment chooses the Hikvision CE series MV-CE013-50GM camera, which supports hard triggering, can also acquire images through soft triggering, and allows manual or automatic adjustment of gain and exposure.
the lens utilizes optical equipment to present image information on an image sensor of the camera, and the quality of the lens directly influences the quality of image acquisition, thereby influencing the subsequent measurement precision. The following principles need to be noted for type selection:
target surface size of camera sensor: when the lens is selected, the optical size of the lens is larger than the size of the target surface of the camera sensor, otherwise shadow areas appear at four corners of the acquired image, and the image acquisition fails.
Camera interface type: the camera is in line with the interface of the lens.
Size of aperture: and regulating the light flux of the lens in unit time.
Focal length f: the distance from the optical center of the lens to the focusing point of the light beam is selected according to the following formula:
f = D · H / h

where D represents the working distance, H is the target surface size, and h is the field width. In conclusion, the system finally uses the FA251C type lens.
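The focal-length selection rule can be sketched as follows; the numeric values in the check are illustrative, and the formula f = D · H / h is taken from the selection criteria described above:

```python
def lens_focal_length(working_distance, target_size, field_width):
    """Lens selection rule f = D * H / h: D working distance,
    H camera sensor target-surface size, h field width."""
    return working_distance * target_size / field_width
```

For example, a 500 mm working distance, a 4.8 mm sensor target surface and a 100 mm field width call for roughly a 24 mm focal length.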
In a three-dimensional measurement system, the light source emitted by the laser projector will directly affect the quality of data acquisition and the effect of post-processing, thereby affecting three-dimensional measurement. For optimal measurement, the laser projector is selected according to the following aspects:
Contrast
Through the projection of the light source, the characteristics of the interested region and the irrelevant region which are irradiated on the measured object need to generate the maximum contrast, thereby being convenient for the later image processing.
Robustness
Robustness means that in a complex environment, better characteristics can still be maintained, namely, the result image is not changed, and the effect is the same as that presented in an experimental environment.
Light source uniformity
Non-uniform illumination causes non-uniform reflection, which affects the stability of the system. Stabilizing a uniform light source stabilizes the captured image results.
After considering the above factors and price, this embodiment decides to use YZ650KB100-GD16, and the laser transmitter projects a narrow light band with a specific shape, and has high stability, strong interference immunity, and a suitable price.
The robot is used as a motion mechanism of the system and comprises a mechanical arm, a demonstrator and a control cabinet. The present embodiment uses a erbit robot as a mobile platform for carrying a line structured light vision measuring device, according to laboratory conditions. The robot is an industrial multipurpose robot and has the characteristics of high running speed, high repeated positioning precision, miniaturization and the like.
The three-dimensional measurement can be divided into direct projection type and oblique projection type according to different irradiation modes of the laser projector. As shown in fig. 7, which is a direct three-dimensional measurement schematic diagram, O is a point on a reference plane of a measured object, P is a corresponding point of O in an imaging plane of a camera, a distance from a center of a camera lens to the point O is a, and a distance from an image point P to the center of the camera lens is b.
As can be seen from FIG. 7, the displacement y of the point to be measured can be expressed as:
[The expression for y is given in the original as a figure, in terms of a, b and the imaging geometry of Fig. 7.]
FIG. 8 is a schematic view of oblique three-dimensional measurement, where θ1 is the angle between the laser beam and the normal of the measured surface and θ2 is the angle between that normal and the optical axis of the receiving lens; the remaining quantities are the same as in the direct-projection case above. From the schematic, the displacement y of the point to be measured can be obtained as:

[The expression for y is given in the original as a figure, in terms of a, b, θ1 and θ2.]
Comparing the two installation modes shows that, because the incident angle of the direct-projection type is smaller than that of the oblique-projection type, direct-projection three-dimensional measurement suffers less occlusion in actual use and therefore has a smaller measurement blind area. Direct-projection three-dimensional measurement, with its simple structure and large measurement range, is therefore adopted. After many adjustments, this embodiment arrives at the most suitable mounting between the camera and the laser projector, as shown in fig. 9.
Line structured light center extraction
When a line laser is used as the measuring equipment, the laser beam emitted by the laser projector intersects the surface of the measured object to form a light bar that carries the surface position and contour information of the object. These light bars have a certain width and are also called structured light bars. The camera captures the structured-light bar image, from which the three-dimensional information of the measured object is obtained.
The line laser projected onto the surface of an object forms a high-brightness light bar. Under ideal conditions the gray values of the normal cross-section of the light bar follow a Gaussian distribution: the maximum gray value lies at the center and the values decrease gradually and symmetrically toward both sides. In practice, however, because of the environment, the material of the workpiece surface, and other factors, the gray values of the cross-sections of the actually acquired light bar images do not satisfy the Gaussian distribution, as shown in fig. 10. Unlike the ideal, symmetric cross-section curve, the gray value distribution curve of the actual light bar is irregular. Even so, the farther from the center of the light bar, the smaller the gray value, and the closer to the center, the larger the gray value, so the distribution remains approximately Gaussian.
In a complex environment, ambient-light interference and the camera's random noise add noise to the images acquired by the measuring system; moreover, the surface roughness and material properties of the measured object may prevent the light bar information from being captured completely, or the acquired image may contain background information that does not belong to the target object. All of this increases the difficulty of extracting the stripe center, introduces larger errors into the extracted result, and ultimately affects the measurement result.
Line structured light stripe image preprocessing
As shown in fig. 11, the quality of the collected image of the workpiece with the light bar is degraded by the laser projector and the environment, which greatly affects the subsequent precise extraction of the light bar center. To reduce these effects, the image is preprocessed with a filtering algorithm that preserves the detail characteristics of the image while reducing interference noise. Several common filters follow.
Median filtering is a nonlinear spatial-domain filter. The gray values of the pixels in an M × N template neighborhood are sorted in ascending or descending order, and the middle value is taken as the new gray value of the template center. The median filter reduces noise well but can also filter out the boundaries of the light bars, making the extracted stripe-center result inaccurate.
The mean filter is a common linear filter. A template is placed over each target pixel of the image, consisting of that pixel and its 8 neighboring pixels; the average of all pixels in the template replaces the original pixel value. The mean filter algorithm is simple, smooths the image noticeably, and clearly reduces noise without affecting the detail information of the image.
The Gaussian filter is a linear low-pass filter that suppresses Gaussian noise well. A mask scans each target pixel of the image, performs a weighted average over the neighborhood pixels, and replaces the value of the mask's central pixel with the weighted-average gray value. The Gaussian filter effectively removes spatial oscillation caused by noise, but it also treats boundaries as noise to be removed, so image edges are not well protected and become blurred.
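The three filters described above can be compared on a synthetic stripe image with SciPy's standard filter routines (an illustrative sketch only; the embodiment does not prescribe an implementation, and the image here is synthetic):

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter, gaussian_filter

# Synthetic light-stripe image: a bright horizontal band plus isolated spikes.
rng = np.random.default_rng(0)
img = np.zeros((21, 21), dtype=float)
img[9:12, :] = 200.0                             # the light stripe
noise_pos = rng.integers(0, 21, size=(10, 2))
img[noise_pos[:, 0], noise_pos[:, 1]] += 255.0   # isolated noise spikes

med  = median_filter(img, size=3)     # nonlinear: ranks the 3x3 neighborhood
mean = uniform_filter(img, size=3)    # linear: average of the 3x3 neighborhood
gaus = gaussian_filter(img, sigma=1)  # linear low-pass with Gaussian weights
```

All three suppress the spikes; the trade-off discussed in the text (edge preservation vs. smoothing) is what drives the choice of the mean filter below.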
Therefore, after analyzing each filter and considering the images actually collected by the system, the mean filter is selected to process the light bar image; it clearly reduces the influence of isolated noise while protecting the edges of the light bar image. Fig. 12 shows the collected light bar image after mean filtering.
As can be seen from the figure, after mean filtering, not only is the noise outside the stripe area removed, but the edges of the light stripes are also smoothed, improving the accuracy of light-stripe center extraction.
Stripe image segmentation
Because the acquired image contains both the stripes and background information, extracting the stripe center directly cannot give an accurate result. The light bar information must therefore be segmented from the image to eliminate the interference of background information with center extraction and reduce the complexity of light-bar center extraction. This also reduces the amount of computation, speeds up center extraction, and thus meets the requirement of rapid detection. Several representative threshold segmentation methods follow.
In fixed-threshold segmentation, a suitable threshold is set to separate the background from the image target: the image is divided into the pixel group at or above the threshold and the group below it, with gray levels below the threshold set to 0 and those above set to 255. The algorithm is fast and stable in operation and is both the most basic and the most widely applied segmentation technique.
The local adaptive threshold segmentation algorithm sets the threshold of each pixel according to the pixel values around it, so the binarization thresholds at different positions are determined by the distribution of surrounding pixel values. In brighter areas the threshold is usually high; conversely, in darker areas it is set low. Areas of different contrast and brightness thus receive different local binarization thresholds.
The maximum between-class variance method, proposed by the Japanese scholar Otsu and also called the OTSU method, computes the between-class variance between the background and the target from the gray-level histogram of the image and selects the optimal threshold for segmentation. The best segmentation is achieved when the criterion below is maximized.
σ² = w₁(μ₁ − μ_T)² + w₂(μ₂ − μ_T)² = w₁w₂(μ₁ − μ₂)²
In the above formula: w₁ is the proportion of target pixels in the whole image and μ₁ is their average gray level; w₂ is the proportion of background pixels and μ₂ is their average gray level; μ_T is the overall average gray level of the image.
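The OTSU criterion can be sketched by exhaustively searching for the threshold that maximizes w₁w₂(μ₁ − μ₂)² (a minimal Python/NumPy sketch; the bimodal test image is an assumption, not data from the embodiment):

```python
import numpy as np

def otsu_threshold(img):
    """Pick the threshold maximizing the between-class variance
    sigma^2 = w1*w2*(mu1 - mu2)^2 over all 256 gray levels."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w1, w2 = p[:t].sum(), p[t:].sum()
        if w1 == 0 or w2 == 0:
            continue
        mu1 = (np.arange(t) * p[:t]).sum() / w1
        mu2 = (np.arange(t, 256) * p[t:]).sum() / w2
        var = w1 * w2 * (mu1 - mu2) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal test image: dark background around 30, bright stripe around 220.
rng = np.random.default_rng(1)
img = np.clip(rng.normal(30, 5, (64, 64)), 0, 255).astype(np.uint8)
img[28:36, :] = np.clip(rng.normal(220, 5, (8, 64)), 0, 255).astype(np.uint8)
t = otsu_threshold(img)
binary = np.where(img >= t, 255, 0).astype(np.uint8)
```

On a clearly bimodal histogram like this one, the selected threshold lands between the two modes and separates the stripe from the background.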
After mean filtering, the filtered light bar image is segmented with each of the three methods, as shown in FIGS. 13 to 15. Judging from the segmentation results, fixed-threshold segmentation gives the best effect on the light stripe image: the light stripe information is largely retained, and the subsequently extracted stripe-center coordinates are more complete.
After fixed-threshold segmentation, some small-area background pixel clusters remain outside the light bar region and must be removed. Regions smaller than a threshold area are deleted by image processing, leaving only the light bar; fig. 16 shows the effect after these stray points are removed. This removes the redundant information introduced by the complex background and reduces interference with the image; it also further speeds up image processing and thus improves the efficiency of light-bar center extraction.
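The stray-point removal step — deleting connected regions below a threshold area — can be sketched with connected-component labeling (illustrative; `scipy.ndimage.label` and the 20-pixel area threshold stand in for whatever routine and value the embodiment used):

```python
import numpy as np
from scipy.ndimage import label

def remove_small_regions(binary, min_area):
    """Keep only connected foreground regions with at least min_area pixels."""
    labeled, n = label(binary > 0)
    if n == 0:
        return np.zeros_like(binary)
    areas = np.bincount(labeled.ravel())   # areas[0] is the background
    keep = areas >= min_area
    keep[0] = False                        # never keep the background
    return np.where(keep[labeled], 255, 0).astype(binary.dtype)

# A long stripe plus two small speckles that survived thresholding.
img = np.zeros((20, 40), dtype=np.uint8)
img[8:11, 2:38] = 255        # the light bar, area 3*36 = 108 pixels
img[2, 5] = 255              # isolated 1-pixel speckle
img[16, 30:32] = 255         # 2-pixel speckle
clean = remove_small_regions(img, min_area=20)
```

Only the stripe survives the area test; both speckles are erased.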
Traditional center extraction algorithm
After the light bar image collected by the CCD camera is preprocessed, an image containing only the light bar is obtained, and a center extraction algorithm is then applied to it. Accurate extraction of the stripe center is a key link in the whole measuring system and bears directly on its performance.
The light intensity in a cross-section perpendicular to the light plane has a Gaussian distribution, so many center extraction algorithms treat the Gaussian center of the stripe gray values as the key to extracting the stripe center. Based on the research of many scholars at home and abroad, existing structured-light stripe center extraction algorithms fall roughly into three categories: the first, such as the threshold method and the edge method, regards the geometric center of the light bar as its center; the second, such as the extremum method and the gray centroid method, obtains the center from the energy center of the light bar; the third, such as the direction template method and the Steger method, obtains the center from the direction of the stripe.
(1) Threshold method
The threshold method relies on the Gaussian distribution of the gray values of the structured light stripe: with a set threshold, two point coordinates with the same gray value are found along the row direction of the image, and their average position is taken as the center of the stripe. The threshold method is quick and simple and extracts regular laser stripes well, but when the stripes are disturbed by the external environment it produces serious center-positioning deviation. Moreover, proper selection of the threshold bears directly on extraction of the stripe center: a threshold chosen too high or too low yields a result with large error. The threshold method is therefore rarely used on its own; it generally serves as a preliminary extraction of the stripe center on which further optimization is performed.
(2) Extreme method
The extremum method also uses the Gaussian distribution of the stripe gray values, defining the local gray maximum on the stripe cross-section as the stripe center. Its precondition is an ideal Gaussian gray distribution, so that the gray value reaches its maximum at the stripe center. In actual measurement, however, under the influence of the external environment the gray values on the stripe no longer follow a Gaussian distribution, so the calculated extreme point deviates considerably from the actual stripe center; in addition, noise on the stripe also degrades the precision of the extreme points, so the method's applicability is poor.
(3) Gray scale center of gravity method
The gray centroid method regards the energy center of a light bar as its center. According to the gray distribution of the light bar, the gray centroid is extracted along a given row or column of the image, and the resulting centroid is defined as the central position of the light bar. Taking horizontal stripes as an example, a threshold is set to divide the image into non-zero intervals, and the gray centroid is then computed with the following formula. Assuming (p, q) is the non-zero interval of the k-th column, the gray centroid of that column is:
v_k = ( Σ_{i=p}^{q} i·h_i ) / ( Σ_{i=p}^{q} h_i )
In the above formula, h_i is the gray value of row i in column k. The centroid method has high extraction speed, high precision, and good noise suppression. However, when the direction of the light bar changes abruptly or the curvature changes greatly, the accuracy of the gray centroid method in extracting the light bar center drops.
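The column-wise centroid formula above can be sketched directly (a Python/NumPy sketch; the Gaussian test stripe is an assumption used only to check that the centroid recovers a known sub-pixel center):

```python
import numpy as np

def column_centroids(img, thresh):
    """For each column k, take the rows whose gray value exceeds thresh
    (the non-zero interval) and return the gray-weighted mean row index:
    the gray centroid of that column's stripe section."""
    centers = np.full(img.shape[1], np.nan)
    for k in range(img.shape[1]):
        rows = np.nonzero(img[:, k] > thresh)[0]
        if rows.size == 0:
            continue                     # no stripe in this column
        h = img[rows, k].astype(float)
        centers[k] = (rows * h).sum() / h.sum()
    return centers

# Stripe whose cross-section is a Gaussian centered between rows 10 and 11.
rows = np.arange(21)[:, None].astype(float)
img = 200.0 * np.exp(-((rows - 10.5) ** 2) / 4.0) * np.ones((1, 30))
centers = column_centroids(img, thresh=10.0)
```

Because the profile is symmetric about row 10.5, every column's centroid lands on that sub-pixel position exactly.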
(4) Direction template method
The direction template method was developed from the gray centroid method. Over a small range of light-bar pixels, the direction of the line-structured light bar can be taken to follow one of four modes — transverse, longitudinal, 45° left-inclined and 45° right-inclined — denoted K₁, K₂, K₃ and K₄, with template elements K_i[s][t] ≥ 0, s = 0, 1, …, M−1; t = 0, 1, …, N−1; i = 1, 2, 3, 4. The four templates traverse the pixels of each column of the light stripe, the template that maximizes the convolution is selected, and the pixel at the center of that template is regarded as the center of the light stripe in that column. The four direction templates K₁, K₂, K₃ and K₄ are:
[The four template matrices K₁–K₄ are given as images in the source.]
The direction template method preserves the detail information of the light bars well and offers broken-line repair, high reliability, and good robustness. But the extracted light bar center is only at pixel level, so the accuracy is not high; and because every pixel of the light bar image must be convolved with each of the four templates, the computational load is large and the efficiency is low.
(5) Steger method
The Steger algorithm is based on the Hessian matrix. Because the direction of the largest absolute second-order partial derivative coincides with the normal direction of the stripe center, the gray extremum of the stripe can be computed from the Hessian matrix and taken as the stripe center, realizing center positioning. The Hessian matrix can be expressed as:
H(x, y) = [ r_xx  r_xy ; r_xy  r_yy ]

where r_xx = ∂²g(x, y)/∂x² ⊗ I(x, y), r_xy = ∂²g(x, y)/∂x∂y ⊗ I(x, y), r_yy = ∂²g(x, y)/∂y² ⊗ I(x, y), and I(x, y) is the image.
in the above formula: g (x, y) is a two-dimensional Gaussian function. Point (x)0,y0) The normal direction of (b) corresponds to the eigenvector corresponding to the eigenvalue of the maximum absolute value in the Hessian matrix. Suppose with (x)0,y0) Using Hessian to calculate the normal direction of the local stripe where the point is located as the base point as (n)x,ny). And calculating the center of the light bar by using a Taylor formula for the normal direction of the point gray level distribution function, and then:
z(x₀ + t·n_x, y₀ + t·n_y) = z(x₀, y₀) + t·N·(r_x, r_y)ᵀ + t²·N·H(x, y)·Nᵀ / 2,  where N = (n_x, n_y)
Setting the derivative of this expansion with respect to t to zero,

∂z(x₀ + t·n_x, y₀ + t·n_y) / ∂t = 0,

gives:

t = −(n_x·r_x + n_y·r_y) / (n_x²·r_xx + 2·n_x·n_y·r_xy + n_y²·r_yy)
The central position of the stripe obtained from the formula is (x₀ + t·n_x, y₀ + t·n_y).
The stripe center obtained by the Steger method has high precision, but computing the normal direction of the stripe requires at least five large-template Gaussian kernel convolutions for every pixel of the image; the computation is heavy and inefficient, making rapid measurement difficult.
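The Steger computation described above — Gaussian-derivative convolutions, the Hessian eigenvector as the normal, and the sub-pixel offset t — can be sketched as follows (illustrative only; the choice of σ and the synthetic test stripe are assumptions, and in image coordinates here "x" is the row axis):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def steger_subpixel(img, r, c, sigma=2.0):
    """One Steger step at pixel (r, c): build the Hessian from Gaussian
    derivative convolutions, take the eigenvector of the largest-magnitude
    eigenvalue as the normal (nx, ny), and solve for the offset t."""
    rx  = gaussian_filter(img, sigma, order=(1, 0))
    ry  = gaussian_filter(img, sigma, order=(0, 1))
    rxx = gaussian_filter(img, sigma, order=(2, 0))
    rxy = gaussian_filter(img, sigma, order=(1, 1))
    ryy = gaussian_filter(img, sigma, order=(0, 2))
    H = np.array([[rxx[r, c], rxy[r, c]], [rxy[r, c], ryy[r, c]]])
    vals, vecs = np.linalg.eigh(H)
    n = vecs[:, np.argmax(np.abs(vals))]          # normal direction
    num = n[0] * rx[r, c] + n[1] * ry[r, c]
    den = (n[0] ** 2 * rxx[r, c] + 2 * n[0] * n[1] * rxy[r, c]
           + n[1] ** 2 * ryy[r, c])
    t = -num / den
    return np.array([r + t * n[0], c + t * n[1]])

# Horizontal stripe with a Gaussian profile centered between rows 10 and 11.
rows = np.arange(21)[:, None].astype(float)
img = np.exp(-((rows - 10.5) ** 2) / 8.0) * np.ones((1, 21))
center = steger_subpixel(img, 10, 10)
```

Starting from the integer pixel (10, 10), the solved offset moves the center onto the true sub-pixel position near row 10.5, but note the five per-pixel convolutions that make the full-image method slow.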
Center extraction algorithm based on gray scale gravity center method
In this embodiment, a center extraction algorithm based on the gray centroid method is improved; the choice follows from studying the characteristics of the collected laser stripes and the advantages and disadvantages of the conventional light-bar center extraction algorithms. Because of its characteristics, however, the gray centroid method has certain limitations, which lead to the following problems:
(1) The intensity of the light bar projected by a common laser is non-uniformly distributed: bright in the middle and dark on both sides. In industrial use, influenced by the surface roughness of the measured object, the collected light bar brightness is dispersed rather than concentrated, and the cross-section intensity no longer follows a Gaussian distribution. With the light bar's energy distribution no longer concentrated, a fixed light bar width is no longer appropriate for the traditional gray centroid method, and its accuracy drops.
(2) The gray centroid method computes the light stripes row by row or column by column and considers only the transverse and longitudinal directions of the image, so it extracts straight light stripes with high precision. Because it ignores the normal direction of the light bar, however, good center-extraction precision cannot be guaranteed for curved or discontinuous light bars in practical use.
Because the gray centroid method cannot accurately extract the light bar center on rough object surfaces such as forgings, this embodiment improves it. First, the collected light bar image is preprocessed, mainly by filtering and threshold segmentation. Then a thinning algorithm extracts the skeleton of the light bar as its initial center points, and a custom direction template computes the normal direction of each initial center point. Finally, centered on each initial center point and working along its normal direction, an adaptive-width weighted gray centroid method obtains the sub-pixel center coordinates of the light bar, achieving accurate extraction of the light bar center.
Description of the principles of the Algorithm
The principle of the line-structured light pattern center extraction method is shown in fig. 16. The method comprises the following specific steps:
(1) Image preprocessing. As described above, the collected stripe image is first filtered and threshold-segmented to remove interference noise and irrelevant background information and to segment out the light stripe features.
(2) Center extraction. When the light stripe of the collected object is wide, the central line in the middle of the stripe represents the shape of the object more intuitively. The extraction process strips the binary image down to an image only one pixel wide, called the skeleton. The light bar is segmented with the previously set threshold: values greater than the gray value T are marked 1 and values less than T are marked 0. The stripe skeleton is then extracted with the Zhang thinning algorithm.
In the stripe image, whether a point should be deleted is decided point by point from whether its 8 neighboring pixels satisfy the skeleton conditions. Suppose a foreground point P₁ has gray value 1; its 8-neighborhood pixels are shown in fig. 18.
Whether the 8-neighborhood of P₁ satisfies the deletion conditions of the following formulas determines whether the current point P₁ is deleted.
The deletion conditions of the first step are:

2 ≤ N(P₁) ≤ 6,  S(P₁) = 1,  P₂·P₄·P₆ = 0,  P₄·P₆·P₈ = 0
The deletion conditions of the second step are:

2 ≤ N(P₁) ≤ 6,  S(P₁) = 1,  P₂·P₄·P₈ = 0,  P₂·P₆·P₈ = 0
In the formulas, N(P₁) is the number of non-zero neighbors of P₁ (i.e., the number of pixels marked 1), and S(P₁) is the number of 0→1 transitions in the ordered sequence P₂, P₃, …, P₉, P₂ of binary gray values. In the example of fig. 18, the 8-neighborhood pixels P₂, P₃, …, P₉ of point P₁ take the binary values 0, 1, 0, 1, 0, 1, 1, 0 in order. P₁ has 4 non-zero neighbors, namely P₃, P₅, P₇ and P₈, so N(P₁) = 4; meanwhile, in the sequence P₂, P₃, …, P₉, P₂, the 0→1 transitions occur at P₂–P₃, P₄–P₅ and P₆–P₇, 3 in total, so S(P₁) = 3.
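The two quantities N(P₁) and S(P₁) can be sketched directly, reproducing the worked example from the text (a minimal Python sketch of the condition checks, not a full thinning pass):

```python
import numpy as np

def neighbors8(img, r, c):
    """P2..P9 clockwise, starting from the pixel directly above P1."""
    return [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
            img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]

def N(p):   # number of non-zero neighbors of P1
    return int(sum(p))

def S(p):   # number of 0->1 transitions in the cyclic sequence P2..P9,P2
    return int(sum((p[i] == 0) and (p[(i + 1) % 8] == 1) for i in range(8)))

# The worked example from the text: P2..P9 = 0,1,0,1,0,1,1,0.
img = np.zeros((3, 3), dtype=int)
img[1, 1] = 1
p2_to_p9 = [0, 1, 0, 1, 0, 1, 1, 0]
coords = [(0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0), (0, 0)]
for v, (r, c) in zip(p2_to_p9, coords):
    img[r, c] = v
p = neighbors8(img, 1, 1)
```

Here N(p) = 4 and S(p) = 3, so the condition S(P₁) = 1 fails and this P₁ would not be deleted in either step.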
Traversing the whole image and applying these deletions yields the thinned stripe skeleton, completing skeleton extraction. The resulting skeleton point p(u, v) is then taken as the initial center point. When a light bar of complex curvature is extracted, the gray centroid method ignores the normal direction, so the curved or complex parts of the light bar cannot be extracted and the extracted center coordinates are incomplete; the normal direction at every skeleton point must therefore be computed. Existing methods for obtaining the normal of the center line are mainly the Hessian matrix and curve fitting, but they suffer from heavy computation and low processing speed. To address this, the direction template technique is applied to each skeleton point to obtain the normal direction of the initial center point.
In the pixel region, the trend of a line-structured light bar is represented by 4 patterns: vertical, horizontal, 45° left-inclined and 45° right-inclined. The template size is generally related to the thickness of the light stripe: too large a template loses stripe details, while too small a template cannot reflect the stripe trend. To describe the trend of the light bar accurately, 4 templates T₁, T₂, T₃ and T₄ are designed here, one for each trend.
[The four template matrices T₁–T₄ are given as images in the source.]
Templates T₁, T₂, T₃ and T₄ represent the vertical, horizontal, 45° left-inclined and 45° right-inclined trends of the light stripe respectively. Centered on the initial center point extracted by thinning, each of the 4 templates is applied to the gray image of the light stripe as follows:
H_k = Σ_i Σ_j c(u + i, v + j) · T_k(i, j),  k = 1, 2, 3, 4
where k = 1, 2, 3, 4; c(u, v) is the pixel gray value at the initial center point and T_k(i, j) is the value at the corresponding template position. For each center point, 4 different H values are obtained; they reflect the degree of correlation between the image and the template, and the larger the H value, the higher the correlation. When H = max{H₁, H₂, H₃, H₄}, the normal direction of the initial center point is closest to the direction of the k-th template.
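The template-correlation step can be sketched as follows. The patent's own templates are given only as images, so the 3×3 matrices below are placeholders with the same four orientations, not the embodiment's actual values:

```python
import numpy as np

# Illustrative 3x3 direction templates (placeholders, not the patent's).
T = {
    1: np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float),   # vertical
    2: np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], float),   # horizontal
    3: np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], float),   # 45 deg left
    4: np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], float),   # 45 deg right
}

def best_template(img, u, v):
    """H_k = sum_ij c(u+i, v+j) * T_k(i, j); the largest H_k indicates
    the stripe trend at the initial center point (u, v)."""
    patch = img[u - 1:u + 2, v - 1:v + 2].astype(float)
    H = {k: float((patch * t).sum()) for k, t in T.items()}
    return max(H, key=H.get), H

# A horizontal stripe: template 2 should correlate best.
img = np.zeros((9, 9))
img[4, :] = 200.0
k, H = best_template(img, 4, 4)
```

For the horizontal stripe, H₂ sums three stripe pixels while the other templates catch only one, so k = 2 and the normal is taken perpendicular to that trend.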
Assume the gray value of a pixel (a, b) lying along the normal direction at the extracted initial center point p(u, v) is I(a, b). The distance from that point to p is:
d = ±√((a − u)² + (b − v)²)
In the formula, the sign of d follows the positive or negative direction of the normal of p.
The width of the light bar varies with the surface roughness of the object, so the width along the normal of the initial center point differs from position to position. Adapting the pixel width of the light bar therefore improves center-extraction precision. Each initial center point is searched along its normal direction until a light bar edge is detected. The distances between all pixels on the normal and the initial center point are then computed, with the gray value of each point on the normal serving as its weight. The gray centroid method then performs the precise extraction; the offset between the extracted point p′ and the coarsely extracted point p is:
d̄ = ( Σ_{(a,b)∈L} d·I(a, b) ) / ( Σ_{(a,b)∈L} I(a, b) )
where L is the pixel width of the light bar along the normal of the initial center point p and varies with p, and I(a, b) is the gray value, i.e. the weight, of pixel (a, b) on the normal.
The precise center coordinate extracted on the normal is p′(u_p, v_p):
u_p = u + d̄·cos θ,  v_p = v + d̄·sin θ,  where d̄ is the offset obtained above
In the formula, θ is the angle between the normal-direction vector and the X coordinate axis. With these formulas, the center coordinates of light bars of different widths are computed adaptively.
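The adaptive-width weighted centroid refinement can be sketched as follows (illustrative Python; the edge threshold, the synthetic stripe, and the fixed normal are assumptions — in the embodiment the normal comes from the direction templates):

```python
import numpy as np

def refine_center(img, p, normal, edge_thresh=10.0):
    """Weighted gray-centroid refinement along the normal of the initial
    center p: walk both ways until the gray value drops below edge_thresh
    (the adaptive width L), weight each pixel by its gray value, and shift
    p by the resulting signed mean distance."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    ds, ws = [], []
    for sign in (+1, -1):
        t = 0 if sign > 0 else 1          # count p itself only once
        while True:
            q = np.round(p + sign * t * n).astype(int)
            if not (0 <= q[0] < img.shape[0] and 0 <= q[1] < img.shape[1]):
                break
            w = float(img[q[0], q[1]])
            if w < edge_thresh:           # reached the stripe edge
                break
            ds.append(sign * t)
            ws.append(w)
            t += 1
    ds, ws = np.array(ds, float), np.array(ws, float)
    d_bar = (ds * ws).sum() / ws.sum()    # the offset of p' from p
    return p + d_bar * n

# Stripe cross-section along rows: true center row 7, skeleton point row 6.
rows = np.arange(15)[:, None].astype(float)
img = 200.0 * np.exp(-((rows - 7.0) ** 2) / 3.0) * np.ones((1, 15))
p0 = np.array([6.0, 7.0])                 # initial center, one row off
p1 = refine_center(img, p0, normal=(1.0, 0.0))
```

The weighted offset pulls the skeleton point from row 6 onto the true center at row 7; the walk length, and hence the width L, adapts automatically to the stripe.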
Results of the experiment
At present there is no uniform standard for verifying the accuracy of stripe-center extraction, so to check the effect of the above algorithm, the improved algorithm is used to extract the centers of different types of light stripes, as shown in figs. 19 to 21. Various conventional center extraction algorithms are then compared with the improved algorithm to verify its extraction effect. A 1280 × 960 image of a curved light bar is selected for center extraction; figs. 22 to 25 show the extraction results of each algorithm.
To test the efficiency of each algorithm, this embodiment measured the running time of extracting the center coordinates of the curved light bar with a program; the following table shows the running time of each algorithm.
[Table: running time of each algorithm, given as an image in the source.]
As can be seen from the figures, the algorithm here extracts the centers of different types of light bars accurately and completely. Comparing the algorithms on the curved light bar (figs. 21 to 24): the extremum method (fig. 22) is unstable and of low precision; the center coordinates extracted by the traditional gray centroid method (fig. 23) show loss and are incomplete; both the improved centroid-based center extraction algorithm (fig. 21) and the Steger method (fig. 24) follow the trend of the light bar accurately, with no loss or incompleteness of the center coordinates. The running time of the improved gray centroid method, however, is much lower than that of the Steger method, so operating efficiency is improved. The accuracy and efficiency of the improved algorithm are thus much better than those of the traditional center extraction algorithms and satisfy the system's requirements.
Multi-view point cloud processing
Because line structured light acquires point cloud data from only one surface of the forging per scan, multi-view scanning measurement is needed to obtain complete three-dimensional point cloud data of the forging. Point clouds from different viewing angles are aligned through their common parts, so correctly matching the common part of each point cloud yields the complete point cloud data. The point cloud processing is shown in fig. 25.
Point cloud preprocessing
The center extraction algorithm yields accurate two-dimensional sub-pixel coordinates of the light bar center; the model established earlier converts them to three-dimensional coordinates, giving the three-dimensional point cloud of the measured surface. The roughness of the measured surface, the illumination environment, and the measuring equipment all introduce errors during image acquisition, and these errors produce 'bad points' in the point cloud, so the point cloud must be smoothed and denoised before stitching. Because multiple scans produce a very large number of points, which lowers stitching efficiency, the point cloud must also be thinned. Preprocessing the point cloud before measurement therefore improves both point cloud quality and three-dimensional measurement precision.
Point cloud denoising
Outlier noise does not affect the overall profile model of the measured object, but the false information it carries causes errors in three-dimensional measurement; denoising the point cloud effectively reduces false points and improves the accuracy of reconstruction and measurement results. The normal vector and curvature variation of each sampling point can be obtained by evaluating local point cloud features and used to denoise and separate the cloud, but that algorithm runs long, is inefficient and complex to operate, and its results are error-prone. A statistical method is therefore chosen: a threshold is set, the distances from each point within range to its nearby neighbors are averaged, and the resulting averages follow a Gaussian distribution. If a point's distance from the sampling points exceeds the average distance, it is considered a noise point and removed.
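The statistical denoising step can be sketched as follows (illustrative Python/SciPy; the neighbor count k, the standard-deviation cutoff, and the synthetic cloud are assumptions, since the patent gives no parameter values):

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_removal(points, k=8, std_ratio=2.0):
    """Remove points whose mean distance to their k nearest neighbors
    exceeds the global mean by more than std_ratio standard deviations
    (the mean distances are taken to be roughly Gaussian)."""
    tree = cKDTree(points)
    d, _ = tree.query(points, k=k + 1)   # first column is the point itself
    mean_d = d[:, 1:].mean(axis=1)
    mu, sigma = mean_d.mean(), mean_d.std()
    keep = mean_d <= mu + std_ratio * sigma
    return points[keep], keep

# A dense patch plus one far-away outlier ("bad point").
rng = np.random.default_rng(2)
cloud = rng.uniform(0, 10, size=(200, 3))
cloud = np.vstack([cloud, [100.0, 100.0, 100.0]])   # obvious bad point
clean, keep = statistical_outlier_removal(cloud)
```

The far point's mean neighbor distance is tens of times the cloud average, so it alone is rejected while the dense patch survives intact.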
Point cloud sparsification
Whatever equipment is used to acquire the data, the resulting point cloud is massive, and excessive data redundancy inflates the data volume and hurts the efficiency of later point cloud stitching. Therefore, after the point cloud is denoised and smoothed, it is thinned to reduce the data volume and improve stitching efficiency.
The average-distance-based thinning method computes the average distance between points in a given region and uses it as the standard. In dense point cloud areas the spacing between points is below the standard; in sparse areas it is above the standard distance. Thinning can therefore decide, from the average spacing of the point cloud, whether a data point should be deleted.
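The average-distance criterion can be sketched with a greedy pass that prunes dense regions and leaves sparse ones alone (an illustrative Python/SciPy sketch; the greedy strategy and the 2-D test cloud are assumptions, as the patent does not detail the deletion rule):

```python
import numpy as np
from scipy.spatial import cKDTree

def thin_by_average_distance(points, min_dist):
    """Greedy thinning: keep a point only if it is at least min_dist away
    from every point already kept, so dense regions lose points and sparse
    regions are left untouched."""
    kept = [points[0]]
    for pt in points[1:]:
        if cKDTree(np.array(kept)).query(pt)[0] >= min_dist:
            kept.append(pt)
    return np.array(kept)

# Dense patch (0.1 spacing) beside a sparse patch (1.0 spacing), 2-D for brevity.
gx, gy = np.meshgrid(np.arange(0, 1, 0.1), np.arange(0, 1, 0.1))
dense = np.stack([gx.ravel(), gy.ravel()], axis=1)
gx, gy = np.meshgrid(np.arange(5, 10, 1.0), np.arange(5, 10, 1.0))
sparse = np.stack([gx.ravel(), gy.ravel()], axis=1)
cloud = np.vstack([dense, sparse])

# The standard: the cloud's own average nearest-neighbor distance.
nn = cKDTree(cloud).query(cloud, k=2)[0][:, 1]
standard = float(nn.mean())
thinned = thin_by_average_distance(cloud, standard)
```

After thinning, every surviving pair of points is at least the standard distance apart: the dense patch is pruned, while all points of the sparse patch remain.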
Multi-view point cloud stitching
Multi-view point cloud data based on line structured light is typically acquired through one of two relative motions: either the coordinate system stays fixed and the measured object is placed in different poses relative to the measuring equipment, or the measured object rests on the worktable and the measuring equipment scans it from multiple views. Either way, the same point on the measured object must undergo coordinate conversion between different reference coordinate systems so that the coordinate systems are unified.
The currently popular stitching methods mainly include mechanical absolute-positioning stitching, multi-view marker-point stitching, and ICP (Iterative Closest Point) stitching.
The turntable method is the most common mechanical absolute-positioning approach. Because the relative position of the camera coordinate system and the turntable coordinate system is fixed, the point-cloud transformation only requires the rotation angle of the turntable to determine the rotation matrix, after which point clouds from different viewing angles can be stitched. The method is fast, simple, and efficient, but places high demands on hardware such as turntable precision.
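The turntable relation can be sketched as follows. This is an illustrative sketch that assumes the turntable axis coincides with the z axis of the point-cloud frame and that the rotation angle of each scan is known.

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix for a turntable turn of theta radians about z."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def stitch_turntable(scans_with_angles):
    """Rotate every scan back to the 0-degree reference frame and merge."""
    merged = [np.asarray(pts) @ rot_z(-a).T for pts, a in scans_with_angles]
    return np.vstack(merged)

# A point seen at 0 degrees, and the same point after a 90-degree turn
p0 = np.array([[1.0, 0.0, 0.0]])
p90 = np.array([[0.0, 1.0, 0.0]])          # rotated by +90 degrees about z
merged = stitch_turntable([(p0, 0.0), (p90, np.pi / 2)])
```

After undoing the known turntable angles, the two observations of the same physical point coincide in the reference frame.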
In industry, multi-view marker-point stitching algorithms are mostly used for point clouds of large objects. These methods generally include the fixed-sphere method, the plane method, and the point-location method; their key idea is to minimize the matching error rate using marker points. The markers are generally circular and are attached to the surface of the measured object; point clouds collected at different viewing angles can then be located and stitched.
ICP establishes an error function E(R, t) between two coordinate systems and iteratively reduces the registration error to find the best matching points across the coordinate systems; if E(R, t) can be minimized, the optimal transformation matrix is obtained. Although the ICP algorithm is accurate, pairing errors occur when the point-cloud density is high, making the solved R and t inaccurate. Moreover, if the initial registration value or the corresponding-point matching strategy is chosen poorly, the algorithm may fall into a local minimum, fail to converge, and produce an incorrect result.
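A minimal point-to-point ICP sketch follows. It is illustrative, not the patent's implementation; it shows the iterative minimization of E(R, t) and why a good initial pose matters — the example converges because the initial misalignment is small compared with the point spacing.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares R, t aligning paired points src -> dst (SVD-based)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Point-to-point ICP: pair each source point with its nearest target
    point, solve for R, t, apply, and repeat."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        R, t = best_rigid_transform(cur, dst[d.argmin(axis=1)])
        cur = cur @ R.T + t
    d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
    return cur, d.min(axis=1).mean()       # residual registration error

rng = np.random.RandomState(1)
dst = np.asarray(rng.rand(30, 3))
theta = 0.05                               # small initial misalignment
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.01, -0.02, 0.015])
aligned, err = icp(src, dst)
```

With a large initial rotation the nearest-neighbor pairing would mostly be wrong and the same loop could settle in a local minimum, which is the failure mode described above.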
For large industrial objects, marker points are generally attached to the object's surface to stitch the point clouds acquired at each viewing angle. The spatial matching relation between point clouds at different viewing angles is obtained by identifying the information carried by the markers. To ease extraction of the circle center, a concentric-circle marker with a 6 mm inner circle and a 10 mm outer circle is used; its shape is shown in fig. 26.
The marker-based stitching principle is as follows: markers are attached to the surface of the measured object or placed in the background; their edges are extracted by image processing and their circle-center coordinates are obtained by fitting, yielding the three-dimensional coordinates of the markers at each viewing angle. The mutual transformation between the reference-point coordinates at different viewing angles is then computed, i.e., the rotation matrix R and translation vector T are solved from the transformation between the reference coordinates, and the multi-view point clouds of the measured object are stitched into complete object-model information.
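The circle-center fitting step can be illustrated with an algebraic least-squares (Kasa) fit. This is a sketch, not necessarily the fitting method used in the patent.

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic least-squares circle fit (Kasa method): solve
    x^2 + y^2 + D*x + E*y + F = 0 for D, E, F, then recover center/radius."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs**2 + ys**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2, -E / 2
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# Edge points of a marker circle of radius 5 centered at (10, -3)
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
cx, cy, r = fit_circle(10 + 5 * np.cos(t), -3 + 5 * np.sin(t))
```

On exact edge points the fit recovers the center and radius to numerical precision; on real edge data it gives the least-squares estimate used for marker localization.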
The mathematical model of marker-based multi-view stitching is shown in fig. 27, where A, B, and C are markers common to the different viewing angles. During measurement, the markers are captured at each viewing angle and their circle-center coordinates are obtained by image processing. As long as three or more common reference points exist, the transformation between the two coordinate systems can be determined.
Construction of the stitching model based on the least-squares method
The markers are randomly distributed in the overlap region of the two viewing angles; let their point sets be, respectively:
M = {m_i | m_i ∈ M, i = 1, 2, 3, ..., k}
N = {n_i | n_i ∈ N, i = 1, 2, 3, ..., k}
obtaining a matrix A by calculating the distance between any two points in the M point set:
A = (a_ij), i, j = 1, 2, 3, ..., k, where a_ij = ‖m_i − m_j‖ is the distance between m_i and m_j
A distance matrix B composed of the pairwise distances in point set N is obtained in the same way. Since the markers are fixed, their relative positions do not change between viewing angles, so corresponding markers must satisfy the following conditions:
(1) if the difference between the two distance values does not exceed α, the two distance values are considered equal.
(2) If, for a candidate pair of points, the number of distance values judged equal (within α) reaches n − 1, where n is the number of markers, the two points are considered corresponding matches between the two fields of view.
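The distance-invariance criterion of conditions (1) and (2) can be sketched as follows. This is illustrative; it compares sorted pairwise-distance vectors, which are unchanged by rigid motion, with a tolerance alpha.

```python
import numpy as np

def match_markers(M, N, alpha=1e-3):
    """Match marker points across two views by comparing pairwise-distance
    'fingerprints': point i in M matches point j in N when their sorted
    distance vectors agree within alpha."""
    dM = np.sort(np.linalg.norm(M[:, None] - M[None, :], axis=2), axis=1)
    dN = np.sort(np.linalg.norm(N[:, None] - N[None, :], axis=2), axis=1)
    pairs = []
    for i in range(len(M)):
        for j in range(len(N)):
            if np.all(np.abs(dM[i] - dN[j]) <= alpha):
                pairs.append((i, j))
    return pairs

# The same three markers, with the second view rotated 90 degrees about z
M = np.array([[0.0, 0, 0], [4.0, 0, 0], [0.0, 3, 0]])
Rz = np.array([[0.0, -1, 0], [1.0, 0, 0], [0, 0, 1.0]])
N = (M @ Rz.T)[[2, 0, 1]]                  # reorder to make matching non-trivial
pairs = match_markers(M, N)
```

Because the 3-4-5 triangle gives every marker a distinct distance fingerprint, each point finds exactly one partner despite the rotation and reordering.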
By the above principle, let the matched marker point sets at the two viewing angles be S = {s_i | s_i ∈ S, i = 1, 2, 3, ..., n} and T = {t_i | t_i ∈ T, i = 1, 2, 3, ..., n}. The transformation matrices R and T between the three-dimensional data at the different viewing angles are obtained by minimizing the following objective function, which completes the point-cloud matching:
f(R, T) = Σ_{i=1}^{n} ‖t_i − (s_i R + T)‖² → min
To solve for the transformation matrices R and T, a singular value decomposition method based on three-point matrices is used here. For the marker point sets S = {s_i | s_i ∈ S, i = 1, 2, 3, ..., n} and T = {t_i | t_i ∈ T, i = 1, 2, 3, ..., n}, suppose the coordinates of three points obtained at the first viewing angle are s_1, s_2, s_3, and the coordinates of the same three points measured at the second viewing angle are t_1, t_2, t_3; their mutual transformation can then be expressed through a matrix transformation.
t_i = s_i R + T,  i = 1, 2, 3   (5.2)
where [v] = [v_1; v_2; v_3] and [w] = [w_1; w_2; w_3] are the unit orthogonal coordinate bases constructed from the three marker points in the two fields of view (v_1 = (s_2 − s_1)/‖s_2 − s_1‖, v_3 = v_1 × (s_3 − s_1)/‖v_1 × (s_3 − s_1)‖, v_2 = v_3 × v_1, and w_1, w_2, w_3 analogously from t_1, t_2, t_3), and R and T are the rotation and translation matrices:
R = [v]^(−1)[w]   (5.3)
T = t_1 − s_1[v]^(−1)[w]   (5.4)
Substituting (5.3) and (5.4) into (5.2) gives the coordinate transformation relation:

t_i = (s_i − s_1)[v]^(−1)[w] + t_1   (5.5)

where [w] and [v] are the unit orthogonal coordinate bases constructed from the marker points.
Because the measurement system contains error, an error term is added to the model:

t_i = s_i R + T + δ_i

where δ_i is the measurement error of the system. The objective function based on the least-squares method is as follows:
f(R, T) = Σ_{i=1}^{n} ‖t_i − (s_i R + T)‖²   (5.6)

s̄ = (1/n) Σ_{i=1}^{n} s_i   (5.7)

t̄ = (1/n) Σ_{i=1}^{n} t_i   (5.8)
The centered coordinates s′_i and t′_i, i = 1, 2, ..., n, are calculated as:
s′_i = s_i − s̄   (5.9)
t′_i = t_i − t̄   (5.10)
Substituting equations (5.9) and (5.10) into (5.6) yields the objective function:

f(R) = Σ_{i=1}^{n} ‖t′_i − s′_i R‖²   (5.11)
Minimizing this objective function reduces the problem to solving for the rotation matrix R that attains the least-squares minimum; the translation matrix T is then obtained from T = t̄ − s̄R. Once the rigid transformation matrices R and T between the viewing angles have been obtained, the point clouds at the different viewing angles are matched into the target point cloud.
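The closed-form solution of this least-squares objective can be sketched as follows. It is an illustrative sketch written in the common column-vector convention t_i ≈ R·s_i + T (the transpose of the row form used in the derivation): center both point sets, take the SVD of their cross-covariance to get R, then recover T from the centroids.

```python
import numpy as np

def solve_rt(S, T_pts):
    """Closed-form least-squares solution of min sum ||t_i - (R s_i + T)||^2
    via SVD of the cross-covariance of the centered point sets."""
    s_bar, t_bar = S.mean(axis=0), T_pts.mean(axis=0)
    H = (S - s_bar).T @ (T_pts - t_bar)    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, t_bar - R @ s_bar            # T = t_bar - R s_bar

# Markers in view 1, and the same markers in view 2 after a known motion
S = np.array([[0.0, 0, 0], [1.0, 0, 0], [0.0, 1, 0], [0.0, 0, 1]])
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0.0, 0.0, 1.0]])
T_true = np.array([0.5, -0.2, 1.0])
T_pts = S @ R_true.T + T_true
R_est, T_est = solve_rt(S, T_pts)
```

On noise-free correspondences the known motion is recovered exactly; with noisy markers the same formula gives the least-squares optimum of (5.6).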
The above description is only exemplary of the present invention and should not be taken as limiting, and any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1. A method for extracting the center of line structured light, characterized by comprising the following steps: first, filtering and threshold segmentation are performed on the collected image containing the line-structured-light stripe, and the light-stripe features of the line structured light are segmented out; each pixel in the light-stripe features is binarized, and an initial central point p(u, v) is preliminarily extracted; with each extracted initial central point as the center, the normal direction of each initial central point is determined by a direction-template method; and a midpoint is extracted along the normal direction of each initial central point by the gray-level center-of-gravity method, giving the precise central point.
2. The method for extracting the center of line structured light according to claim 1, wherein the initial center point p (u, v) is extracted by the following steps:
each pixel in the light-stripe features is binarized: the gray value of each pixel is compared with a threshold T extracted from the light-stripe features, and pixels greater than the threshold T are marked 1 while the rest are marked 0; each pixel P_1 is scanned point by point, and if P_1 is marked 1, its 8 neighboring pixels P_2, P_3, ..., P_9, arranged in sequence clockwise around P_1, are examined; if the following formulas are satisfied:
2 ≤ N(P_1) ≤ 6
S(P_1) = 1
the pixel P_1 is deleted, and the pixels P_1 retained after this thinning are extracted as the initial central points p(u, v), where N(P_1) is the number of pixels among P_2, P_3, ..., P_9 marked 1, and S(P_1) is the cumulative number of 0→1 transitions in the ordered sequence P_2, P_3, ..., P_9, P_2.
3. The method for extracting the center of line structured light according to claim 1, wherein the step of determining the normal direction of the initial center point is as follows:
taking the extracted initial central point as a center, and performing the following operation on the gray level image of the line structured light strip:
H_k(u, v) = Σ_{i=1}^{m} Σ_{j=1}^{n} T_k(i, j) · C(u + i − (m+1)/2, v + j − (n+1)/2),  k = 1, 2, 3, 4
where T_k (k = 1, 2, 3, 4) are the template matrices, m is the total number of rows and n the total number of columns of the template matrix, u and v are the abscissa and ordinate of the initial central point p in the pixel coordinate system, C(u, v) is the pixel gray value at the initial central point, and T_k(i, j) is the value at row i, column j of the corresponding template matrix, where:
(the four numeric template matrices T_1, T_2, T_3, and T_4 are given as figures in the original document)
T_1, T_2, T_3, and T_4 respectively represent the template matrices oriented vertical to, horizontal to, inclined 45° left of, and inclined 45° right of the light stripe; for each initial central point, the orientation of the template matrix corresponding to the maximum of H_k(u, v) is taken as the normal direction of that initial central point.
4. The method for extracting the center of line structured light according to claim 1, wherein after the normal direction of the initial center point is determined, the precise center point is extracted by the following steps:
each initial central point is searched along its normal direction until the edge of the light stripe is detected; the distance d between each pixel and the initial central point along the normal direction is calculated; with the gray value of each point on the normal as the weight, the central point is extracted by the gray-level center-of-gravity method, and the offset distance between the extracted point p′ and the pre-extracted point p is:
Δd = ( Σ_{(a,b)∈L} d(a, b) · I(a, b) ) / ( Σ_{(a,b)∈L} I(a, b) )
where L is the pixel width along the normal direction at the initial central point p, I(a, b) is the gray value of pixel (a, b) on the normal, and a and b are respectively the row and column indices of the pixel in the two-dimensional matrix of the pixel coordinate system;

the precise center coordinate extracted on the normal is p′(u_p, v_p), which satisfies:
u_p = u + Δd·cos θ,  v_p = v + Δd·sin θ
in the formula, θ is an angle between a vector in the normal direction and the X coordinate axis.
5. A three-dimensional measurement method for a forging, characterized in that, during measurement, line structured light is used to scan the surface of the forging; the center extraction method of line structured light as claimed in any one of claims 1 to 4 is used to extract the center coordinate data of each line of structured light, complete three-dimensional point cloud data are obtained, and the measurement is completed.
CN202110652167.6A 2021-06-11 2021-06-11 Center extraction method of line structured light and three-dimensional measurement method of forge piece Pending CN113324478A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110652167.6A CN113324478A (en) 2021-06-11 2021-06-11 Center extraction method of line structured light and three-dimensional measurement method of forge piece

Publications (1)

Publication Number Publication Date
CN113324478A true CN113324478A (en) 2021-08-31

Family

ID=77420515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110652167.6A Pending CN113324478A (en) 2021-06-11 2021-06-11 Center extraction method of line structured light and three-dimensional measurement method of forge piece

Country Status (1)

Country Link
CN (1) CN113324478A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4135959A1 (en) * 1991-10-31 1993-05-06 Leica Heerbrugg Ag, Heerbrugg, Ch METHOD FOR MEASURING THE SLOPES OF INTERFACE IN AN OPTICAL SYSTEM
CN101285732A (en) * 2008-05-28 2008-10-15 中国科学院光电技术研究所 Heavy caliber paraboloidal mirror checking system
CN104897174A (en) * 2015-06-19 2015-09-09 大连理工大学 Image light stripe noise suppression method based on confidence evaluation
CN104915636A (en) * 2015-04-15 2015-09-16 北京工业大学 Remote sensing image road identification method based on multistage frame significant characteristics
CN109344702A (en) * 2018-08-23 2019-02-15 北京华捷艾米科技有限公司 Pedestrian detection method and device based on depth image and color image
CN110615016A (en) * 2019-10-11 2019-12-27 中国铁道科学研究院集团有限公司 Calibration method and verification method of steel rail profile and abrasion detection system
CN112241964A (en) * 2020-09-22 2021-01-19 天津大学 Light strip center extraction method for line structured light non-contact measurement


Non-Patent Citations (1)

Title
ZENG Chao: "Center extraction algorithm of line structured light stripe", Journal of Image and Graphics (中国图象图形学报) *

Cited By (9)

Publication number Priority date Publication date Assignee Title
CN113470056A (en) * 2021-09-06 2021-10-01 成都新西旺自动化科技有限公司 Sub-pixel edge point detection method based on Gaussian model convolution
CN113470056B (en) * 2021-09-06 2021-11-16 成都新西旺自动化科技有限公司 Sub-pixel edge point detection method based on Gaussian model convolution
CN114648526A (en) * 2022-05-18 2022-06-21 武汉精立电子技术有限公司 Image dead pixel processing method, storage medium, electronic device and system
WO2024055788A1 (en) * 2022-09-15 2024-03-21 珠海一微半导体股份有限公司 Laser positioning method based on image informaton, and robot
CN115953459A (en) * 2023-03-10 2023-04-11 齐鲁工业大学(山东省科学院) Method for extracting laser stripe center line under complex illumination condition
CN116681892A (en) * 2023-06-02 2023-09-01 山东省人工智能研究院 Image precise segmentation method based on multi-center polar mask model improvement
CN116681892B (en) * 2023-06-02 2024-01-26 山东省人工智能研究院 Image precise segmentation method based on multi-center polar mask model improvement
CN116817796A (en) * 2023-08-23 2023-09-29 武汉工程大学 Method and device for measuring precision parameters of curved surface workpiece based on double telecentric lenses
CN116817796B (en) * 2023-08-23 2023-11-24 武汉工程大学 Method and device for measuring precision parameters of curved surface workpiece based on double telecentric lenses

Similar Documents

Publication Publication Date Title
CN113324478A (en) Center extraction method of line structured light and three-dimensional measurement method of forge piece
CN110497187B (en) Sun flower pattern assembly system based on visual guidance
CN107578464B (en) Conveyor belt workpiece three-dimensional contour measuring method based on line laser scanning
Xu et al. Line structured light calibration method and centerline extraction: A review
US9704232B2 (en) Stereo vision measurement system and method
CN110230998B (en) Rapid and precise three-dimensional measurement method and device based on line laser and binocular camera
CN115170669B (en) Identification and positioning method and system based on edge feature point set registration and storage medium
CN110763204B (en) Planar coding target and pose measurement method thereof
CN114494045A (en) Large-scale straight gear geometric parameter measuring system and method based on machine vision
CN114252449B (en) Aluminum alloy weld joint surface quality detection system and method based on line structured light
JP2021168143A (en) System and method for efficiently scoring probe in image by vision system
WO2022126870A1 (en) Three-dimensional imaging method and method based on light field camera and three-dimensional imaging measuring production line
CN111402411A (en) Scattered object identification and grabbing method based on line structured light
CN111402330A (en) Laser line key point extraction method based on plane target
CN115096206A (en) Part size high-precision measurement method based on machine vision
CN114612412A (en) Processing method of three-dimensional point cloud data, application of processing method, electronic device and storage medium
CN113674218A (en) Weld characteristic point extraction method and device, electronic equipment and storage medium
CN116625258A (en) Chain spacing measuring system and chain spacing measuring method
CN115953550A (en) Point cloud outlier rejection system and method for line structured light scanning
CN114963981B (en) Cylindrical part butt joint non-contact measurement method based on monocular vision
CN113970560B (en) Defect three-dimensional detection method based on multi-sensor fusion
Liu et al. Outdoor camera calibration method for a GPS & camera based surveillance system
CN114926531A (en) Binocular vision based method and system for autonomously positioning welding line of workpiece under large visual field
CN114998448A (en) Method for calibrating multi-constraint binocular fisheye camera and positioning space point
CN114485433A (en) Three-dimensional measurement system, method and device based on pseudo-random speckles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210831