CN114387228A - A method of automatic workpiece positioning and alignment based on line structured light - Google Patents

A method of automatic workpiece positioning and alignment based on line structured light

Info

Publication number
CN114387228A
Authority
CN
China
Prior art keywords
image
coordinates
workpiece
camera
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111595872.3A
Other languages
Chinese (zh)
Inventor
梅雪松
运侠伦
吴卓成
李晓
赵亮
梅岭南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Chaotong Intelligent Manufacturing Technology Research Institute Co ltd
Original Assignee
Wuxi Chaotong Intelligent Manufacturing Technology Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Chaotong Intelligent Manufacturing Technology Research Institute Co ltd filed Critical Wuxi Chaotong Intelligent Manufacturing Technology Research Institute Co ltd
Priority to CN202111595872.3A
Publication of CN114387228A
Pending legal-status Critical Current

Classifications

    • All classifications fall under G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/0004 Industrial image inspection (G06T 7/00 Image analysis > G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06T 5/20 Image enhancement or restoration using local operators (G06T 5/00 Image enhancement or restoration)
    • G06T 7/11 Region-based segmentation (G06T 7/00 Image analysis > G06T 7/10 Segmentation; Edge detection)
    • G06T 7/13 Edge detection (G06T 7/00 Image analysis > G06T 7/10 Segmentation; Edge detection)
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models (G06T 7/00 Image analysis > G06T 7/70 > G06T 7/73)
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration (G06T 7/00 Image analysis)
    • G06T 2207/30164 Workpiece; Machine component (G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/30 Subject of image; Context of image processing > G06T 2207/30108 Industrial image inspection)
    • G06T 2207/30244 Camera pose (G06T 2207/00 > G06T 2207/30 Subject of image; Context of image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to the field of electronic detection methods, and in particular to a method for automatic workpiece positioning and alignment based on line structured light. The method comprises the following steps: S1. fixing the workpiece; S2. projecting line structured light; S3. acquiring the light stripe image; S4. image processing; S5. obtaining the three-dimensional coordinates of points on the light stripe; S6. calculating the deviation from the ideal position. The invention offers high machining efficiency, low labor cost, flexible operation and high machining accuracy, can effectively shorten the machining cycle, is suitable for single-piece, small-batch production, and better meets current market demand.


Description

A method of automatic workpiece positioning and alignment based on line structured light

Technical Field

The invention relates to the field of electronic detection methods, and in particular to a method for the automatic positioning and alignment of workpieces based on line structured light.

Background Art

In the field of machining, workpiece positioning is an important step of the machining process. Clamping and machining follow positioning, so positioning affects both the efficiency of the whole process and the final machining accuracy. In mass production, dedicated high-precision fixtures can be used, but in single-piece and small-batch machining, alignment still relies mainly on manual work by the operator. In that case machining accuracy and efficiency drop sharply, labor costs rise, and the utilization of the machining center decreases, so the traditional approach cannot meet current market demands well. This patent therefore proposes an automatic positioning and alignment method. Unlike traditional workpiece positioning, the method does not require the initial posture of the workpiece to be aligned precisely; instead of passive positioning, it actively locates the workpiece and then adjusts its pose. The automatic positioning and alignment method offers high machining efficiency, low labor cost, flexible operation and high machining accuracy, can effectively shorten the machining cycle, and is suitable for single-piece, small-batch production.

Summary of the Invention

The purpose of the present invention is to overcome the defects of the prior art and to provide a method for automatic positioning and alignment of workpieces based on line structured light, which can be used for automatic positioning and pose adjustment of workpieces on a horizontal machining center.

The technical solution that achieves the object of the present invention is as follows:

A method for automatic workpiece positioning and alignment based on line structured light, comprising the following steps:

S1. Fix the workpiece: mount the workpiece on the rotary table of a horizontal machining center with a fixture or fixing elements;

S2. Project line structured light: use a laser to project line structured light onto the workpiece surface, so that a bright light stripe appears on the surface;

S3. Acquire the light stripe image: capture the light stripe with a CCD camera to obtain a two-dimensional image of the stripe;

S4. Image processing: apply image filtering, image segmentation and contour extraction to the acquired image;

S5. Obtain the three-dimensional coordinates of points on the light stripe: convert the two-dimensional image coordinates into three-dimensional actual coordinates using the CCD camera calibration;

S6. Calculate the deviation from the ideal position: the three-dimensional actual coordinates describe the position of the workpiece on the machining center precisely, and comparison with the correct position yields the position deviation, including a rotation and a translation;

S7. Adjust the workpiece pose: move the workpiece to the correct position with the rotary table.

Further, in S3 the CCD cameras are calibrated to obtain the conversion relationship between three-dimensional and two-dimensional coordinates. The CCD cameras comprise a left camera and a right camera; besides the calibration of the intrinsic and extrinsic parameters of each camera, the positional relationship between the two cameras also needs to be calibrated. First the calibration target is placed within the field of view of both cameras. Let the translations between the left and right camera coordinate systems and the world coordinate system be T1 and T2, and the rotation matrices be R1 and R2. The coordinates of a calibration point on the target are Pl = (Xcl, Ycl, Zcl)^T in the left camera coordinate system, Pr = (Xcr, Ycr, Zcr)^T in the right camera coordinate system, and P = (Xw, Yw, Zw)^T in the world coordinate system. Then:

\[ P_l = R_1 P + T_1, \qquad P_r = R_2 P + T_2 \]

from which one obtains

\[ P_r = R P_l + T \]

where
\[ R = R_2 R_1^{-1}, \qquad T = T_2 - R_2 R_1^{-1} T_1 \]

The above expressions give the conversion relationship between the left and right camera coordinate systems.

The optical axes of the two cameras are parallel and perpendicular to the image planes, and the distance between the two cameras is d. Cl and Cr denote the left and right camera coordinate systems. Let a three-dimensional point P be (Xcl, Ycl, Zcl) in the left camera coordinate system and (Xcr, Ycr, Zcr) in the right camera coordinate system. The relationship between the pixel coordinate system and the image coordinate system is:

\[ u_l = \frac{x_l}{dx} + u_0, \quad v_l = \frac{y_l}{dy} + v_0, \qquad u_r = \frac{x_r}{dx} + u_0, \quad v_r = \frac{y_r}{dy} + v_0 \]

where u0 and v0 are camera intrinsic parameters, (ul, vl) and (ur, vr) are the coordinates of P in the left and right pixel coordinate systems, and (xl, yl) and (xr, yr) are the coordinates of P in the image coordinate system.

Combining this with the relationship between the image coordinate system and the camera coordinate system gives:

\[ X_{cl} = \frac{d\,(u_l - u_0)}{u_l - u_r}, \qquad Y_{cl} = \frac{\alpha\, d\,(v_l - v_0)}{\beta\,(u_l - u_r)}, \qquad Z_{cl} = \frac{\alpha\, d}{u_l - u_r} \]

where f is the focal length of the camera and α = f/dx, β = f/dy are camera intrinsic parameters. With these expressions the coordinate conversion between the image coordinate system and the world coordinate system can be carried out.

Further, in S4, after the light stripe is acquired, the stripe center is extracted with the gray-scale centroid method, whose formula is:

\[ X_c = \frac{\sum_{i=1}^{N} X_i\, I(X_i, Y_i)}{\sum_{i=1}^{N} I(X_i, Y_i)}, \qquad Y_c = \frac{\sum_{i=1}^{N} Y_i\, I(X_i, Y_i)}{\sum_{i=1}^{N} I(X_i, Y_i)} \]

where (Xi, Yi) are the coordinates of a pixel in the light stripe cross-section, I(Xi, Yi) is the gray value of that pixel (i = 1, 2, …, N), N is the number of pixels in the cross-section, and (Xc, Yc) are the desired stripe center coordinates. The coordinates of the stripe center point can thus be obtained.

Further, in S6 the measured points are projected onto the plane Y = 0. Let the coordinates of two arbitrary points Ai and Bj of this projection be Ai(XAi, ZAi) and Bj(XBj, ZBj). Through Ai draw a segment AiCi,j parallel to the Xw axis and through Bj a segment BjCi,j parallel to the Zw axis; AiCi,j and BjCi,j intersect at Ci,j, so Ci,j has coordinates (XBj, ZAi), and αi,j denotes the angle between AiCi,j and AiBj. Then:

\[ \tan \alpha_{i,j} = \frac{Z_{Bj} - Z_{Ai}}{X_{Bj} - X_{Ai}} \]

from which:

\[ \alpha_{i,j} = \arctan \frac{Z_{Bj} - Z_{Ai}}{X_{Bj} - X_{Ai}} \]

The final required rotation angle is:

\[ \alpha = \frac{2}{N(N-1)} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \alpha_{i,j} \]

where N is the number of coordinate points obtained;

Let a measured coordinate point be (xi, yi) and the corresponding point of the correct position be (Xi, Yi); the translation component of the position deviation is then:

\[ \Delta x = \frac{1}{N} \sum_{i=1}^{N} (X_i - x_i), \qquad \Delta y = \frac{1}{N} \sum_{i=1}^{N} (Y_i - y_i) \]

With the above technical solution, the present invention has the following positive effects:

(1) The invention offers high machining efficiency, low labor cost, flexible operation and high machining accuracy, can effectively shorten the machining cycle, is suitable for single-piece, small-batch production, and better meets current market demand.

Brief Description of the Drawings

To make the content of the present invention easier to understand, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings, in which

Figure 1 is the work flow chart;

Figure 2 is a diagram of the positioning system;

Figure 3 is a model diagram of the camera system;

Figure 4 is a schematic diagram of the rotation angle.

Detailed Description of Embodiments

A method for automatic workpiece positioning and alignment based on line structured light comprises the following steps:

S1. Fix the workpiece: mount the workpiece on the rotary table of a horizontal machining center with a fixture or fixing elements;

S2. Project line structured light: use a laser to project line structured light onto the workpiece surface, so that a bright light stripe appears on the surface;

S3. Acquire the light stripe image: capture the light stripe with a CCD camera to obtain a two-dimensional image of the stripe;

S4. Image processing: apply image filtering, image segmentation, contour extraction and related processing to the acquired image;

S5. Obtain the three-dimensional coordinates of points on the light stripe: convert the two-dimensional image coordinates into three-dimensional actual coordinates using the CCD camera calibration;

S6. Calculate the deviation from the ideal position: the three-dimensional actual coordinates describe the position of the workpiece on the machining center precisely, and comparison with the correct position yields the position deviation, including a rotation and a translation;

S7. Adjust the workpiece pose: move the workpiece to the correct position with the rotary table.

Three-dimensional information of the workpiece surface is acquired with binocular line-structured-light measurement; the measurement system consists of a laser, the CCD cameras and a computer system. The CCD cameras comprise a left camera and a right camera.

The software part consists of structured-light scanning control, image processing, contour extraction and so on. When the laser projects line structured light onto the workpiece surface, a light stripe forms on the surface; the stripe carries the three-dimensional coordinate information of the measured surface. The positioning system uses two CCD cameras that image the workpiece surface from different directions, imitating the human eyes, to obtain three-dimensional information of the surface and hence the coordinates of many points on it. After imaging, the three-dimensional coordinate information on the light stripe becomes two-dimensional image coordinate information. To obtain the conversion relationship between the actual three-dimensional coordinates and the two-dimensional image coordinates, camera calibration is essential; the parameter matrices obtained from calibration realize the conversion between three-dimensional and two-dimensional coordinates.

The acquired light stripe image containing the coordinate information is preprocessed and the stripe center is extracted. To obtain a clear, complete, high-quality image, the raw image must first be preprocessed by filtering, image segmentation and similar operations to improve accuracy; edge detection and feature point extraction then yield the required stripe center information.
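As a minimal sketch of such a preprocessing chain (median filtering followed by threshold segmentation), the following Python snippet uses OpenCV; the file name is a placeholder, and the automatic Otsu threshold is an assumption, since the text only asks for a "reasonable threshold":

```python
import cv2

# Load the captured light-stripe image as 8-bit grayscale (the file name is a placeholder).
img = cv2.imread("light_stripe.png", cv2.IMREAD_GRAYSCALE)

# Median filtering suppresses impulse noise while leaving edges largely intact.
filtered = cv2.medianBlur(img, 5)

# Histogram equalization makes the bright stripe stand out from the background.
enhanced = cv2.equalizeHist(filtered)

# Threshold segmentation: pixels above the threshold are kept as the stripe region.
# Otsu's method picks the threshold automatically; a fixed value would also work.
_, stripe_mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite("stripe_mask.png", stripe_mask)
```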

The obtained stripe center coordinates are converted into three-dimensional world coordinates, and the position deviation from the correct position is calculated through mathematical operations. This deviation consists of a rotation and a translation.

Example 1

The specific work flow of the method is shown in Figure 1 and the positioning system in Figure 2. First, the workpiece is mounted on the rotary table of the horizontal machining center with a fixture or fixing elements; it does not have to be placed in its correct position at this stage. A laser projects line structured light onto the workpiece surface, and a bright light stripe appears on the surface. The points on the stripe carry the three-dimensional coordinate information of the corresponding object points on the workpiece. A CCD camera captures the stripe to obtain its two-dimensional image. The image is then filtered, segmented and contour-extracted, and the matrices obtained from camera calibration convert the two-dimensional image coordinates into three-dimensional actual coordinates. This series of three-dimensional coordinates describes the position of the workpiece on the machining center precisely, and comparison with the correct position yields the position deviation, including rotation and translation. Finally the workpiece is moved to the correct position by the rotary table.

The purpose of camera calibration is to obtain the conversion relationship between three-dimensional and two-dimensional coordinates; besides the calibration of the intrinsic and extrinsic parameters of each camera, the positional relationship between the two cameras also needs to be calibrated. First the calibration target is placed within the field of view of both cameras. Let the translations between the left and right camera coordinate systems and the world coordinate system be T1 and T2, and the rotation matrices be R1 and R2. The coordinates of a calibration point on the target are Pl = (Xcl, Ycl, Zcl)^T in the left camera coordinate system, Pr = (Xcr, Ycr, Zcr)^T in the right camera coordinate system, and P = (Xw, Yw, Zw)^T in the world coordinate system. Then:

\[ P_l = R_1 P + T_1, \qquad P_r = R_2 P + T_2 \]

from which one obtains

\[ P_r = R P_l + T \]

where
\[ R = R_2 R_1^{-1}, \qquad T = T_2 - R_2 R_1^{-1} T_1 \]

The above expressions give the conversion relationship between the left and right camera coordinate systems.
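For illustration, this composition of the two world-to-camera transforms can be written as the short Python sketch below; the extrinsic values in the example are invented purely to check the relation numerically and are not values from the patent:

```python
import numpy as np

def left_to_right_transform(R1, T1, R2, T2):
    """Compose the left-to-right camera transform from the world-to-camera extrinsics.

    P_l = R1 @ P + T1 and P_r = R2 @ P + T2 imply P_r = R @ P_l + T
    with R = R2 @ inv(R1) and T = T2 - R @ T1.
    """
    R = R2 @ np.linalg.inv(R1)   # for a proper rotation, inv(R1) equals R1.T
    T = T2 - R @ T1
    return R, T

# Invented extrinsics for a numerical check: left camera at the world origin,
# right camera offset by 100 mm along the world X axis.
R1, T1 = np.eye(3), np.zeros(3)
R2, T2 = np.eye(3), np.array([-100.0, 0.0, 0.0])
R, T = left_to_right_transform(R1, T1, R2, T2)

P = np.array([50.0, 20.0, 300.0])              # an arbitrary world point
P_l = R1 @ P + T1
assert np.allclose(R2 @ P + T2, R @ P_l + T)   # both routes give the same P_r
```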

The camera system is shown in Figure 3: the optical axes of the two cameras are parallel and perpendicular to the image planes, and the distance between the two cameras is d. Cl and Cr denote the left and right camera coordinate systems. Let a three-dimensional point P be (Xcl, Ycl, Zcl) in the left camera coordinate system and (Xcr, Ycr, Zcr) in the right camera coordinate system. The relationship between the pixel coordinate system and the image coordinate system is:

\[ u_l = \frac{x_l}{dx} + u_0, \quad v_l = \frac{y_l}{dy} + v_0, \qquad u_r = \frac{x_r}{dx} + u_0, \quad v_r = \frac{y_r}{dy} + v_0 \]

where u0 and v0 are camera intrinsic parameters, (ul, vl) and (ur, vr) are the coordinates of P in the left and right pixel coordinate systems, and (xl, yl) and (xr, yr) are the coordinates of P in the image coordinate system.

Combining this with the relationship between the image coordinate system and the camera coordinate system gives:

\[ X_{cl} = \frac{d\,(u_l - u_0)}{u_l - u_r}, \qquad Y_{cl} = \frac{\alpha\, d\,(v_l - v_0)}{\beta\,(u_l - u_r)}, \qquad Z_{cl} = \frac{\alpha\, d}{u_l - u_r} \]

where f is the focal length of the camera and α = f/dx, β = f/dy are camera intrinsic parameters. With these expressions the coordinate conversion between the image coordinate system and the world coordinate system can be carried out.
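As a concrete illustration of this parallel-axis binocular model, the following Python sketch recovers the 3D coordinates in the left camera frame from a matched pixel pair; the intrinsic values and the baseline used below are made-up numbers, not values from the patent:

```python
import numpy as np

def pixel_to_left_camera(u_l, v_l, u_r, alpha, beta, u0, v0, d):
    """Recover (Xcl, Ycl, Zcl) in the left camera frame from a matched pixel pair.

    Ideal parallel-axis binocular model: u - u0 = alpha*X/Z, v - v0 = beta*Y/Z,
    with the right camera displaced by the baseline d along X, so the disparity
    u_l - u_r encodes depth as Z = alpha*d / (u_l - u_r).
    """
    disparity = u_l - u_r
    Z = alpha * d / disparity
    X = (u_l - u0) * Z / alpha
    Y = (v_l - v0) * Z / beta
    return np.array([X, Y, Z])

# Invented intrinsics and baseline, for illustration only.
alpha = beta = 1200.0        # f/dx and f/dy, in pixels
u0, v0 = 640.0, 512.0        # principal point
d = 100.0                    # baseline, in mm

print(pixel_to_left_camera(700.0, 520.0, 660.0, alpha, beta, u0, v0, d))
# -> [ 150.   20. 3000.]
```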

After the CCD camera has acquired the light stripe image, the image is preprocessed and the stripe center is extracted. Preprocessing mainly removes image noise so that the image better reflects the real light stripe, preparing for the subsequent center extraction. This stage consists of image filtering, image segmentation and stripe center extraction.

First, median filtering removes impulse noise without affecting the edge information of the image. Histogram processing then removes irrelevant information and highlights the important feature information, improving image quality. Next the image is segmented so that the bright stripe region is extracted for subsequent feature extraction. Threshold segmentation is used here: based on the gray-level difference between target and background, a reasonable threshold is selected and every pixel is compared against it to decide whether it belongs to the target or to the background. After segmentation a clear stripe image is obtained, and a thinning operation extracts the stripe from the image. The stripe center is then extracted with the gray-scale centroid method, which takes the centroid of the gray-level distribution of the pixels as the stripe center point, reducing the error caused by an uneven gray-level distribution along the stripe. The formula of the gray-scale centroid method is:

\[ X_c = \frac{\sum_{i=1}^{N} X_i\, I(X_i, Y_i)}{\sum_{i=1}^{N} I(X_i, Y_i)}, \qquad Y_c = \frac{\sum_{i=1}^{N} Y_i\, I(X_i, Y_i)}{\sum_{i=1}^{N} I(X_i, Y_i)} \]

where (Xi, Yi) are the coordinates of a pixel in the light stripe cross-section, I(Xi, Yi) is the gray value of that pixel (i = 1, 2, …, N), N is the number of pixels in the cross-section, and (Xc, Yc) are the desired stripe center coordinates. The coordinates of the stripe center point can thus be obtained.
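The gray-scale centroid computation can be sketched as below; the column-by-column scan direction and the background threshold are assumptions made only for illustration, since the text gives the per-cross-section formula without fixing a scan direction:

```python
import numpy as np

def stripe_centers_gray_centroid(gray, threshold=30):
    """Return (x, y_c) stripe-center coordinates, one per image column.

    In every column, the gray-scale centroid of the pixels brighter than
    `threshold` is taken as the stripe center: Yc = sum(Yi * I) / sum(I).
    """
    centers = []
    rows = np.arange(gray.shape[0])
    for x in range(gray.shape[1]):
        column = gray[:, x].astype(np.float64)
        column[column < threshold] = 0.0        # drop background pixels
        total = column.sum()
        if total == 0.0:                        # no stripe crosses this column
            continue
        y_c = (rows * column).sum() / total     # gray-scale centroid
        centers.append((x, y_c))
    return centers

# Tiny synthetic check: a horizontal stripe centered on row 10.
img = np.zeros((20, 5), dtype=np.uint8)
img[9, :], img[10, :], img[11, :] = 100, 200, 100
print(stripe_centers_gray_centroid(img))   # y_c == 10.0 in every column
```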

The above steps yield the coordinate information of the workpiece surface; the next step is to calculate the position deviation from the correct position.

As shown in Figure 4, the workpiece surface coordinate points obtained above are projected onto the plane Y = 0. Let the coordinates of two arbitrary points Ai and Bj of this projection be Ai(XAi, ZAi) and Bj(XBj, ZBj). Through Ai draw a segment AiCi,j parallel to the Xw axis and through Bj a segment BjCi,j parallel to the Zw axis; AiCi,j and BjCi,j intersect at Ci,j, so Ci,j has coordinates (XBj, ZAi), and αi,j denotes the angle between AiCi,j and AiBj. Then:

\[ \tan \alpha_{i,j} = \frac{Z_{Bj} - Z_{Ai}}{X_{Bj} - X_{Ai}} \]

from which:

\[ \alpha_{i,j} = \arctan \frac{Z_{Bj} - Z_{Ai}}{X_{Bj} - X_{Ai}} \]

The final required rotation angle is:

\[ \alpha = \frac{2}{N(N-1)} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \alpha_{i,j} \]

where N is the number of coordinate points obtained.

Let a measured coordinate point be (xi, yi) and the corresponding point of the correct position be (Xi, Yi); the translation component of the position deviation is then:

\[ \Delta x = \frac{1}{N} \sum_{i=1}^{N} (X_i - x_i), \qquad \Delta y = \frac{1}{N} \sum_{i=1}^{N} (Y_i - y_i) \]
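A small Python sketch of this deviation computation is given below; note that the exact pairing and averaging convention for the rotation angle is not spelled out in the text, so averaging over all point pairs is only one plausible reading, and the data are synthetic:

```python
import numpy as np

def pose_deviation(projected_xz, measured_xy, correct_xy):
    """Estimate the rotation about the table axis and the in-plane translation.

    projected_xz : (N, 2) measured points projected onto the plane Y = 0, columns (X, Z)
    measured_xy  : (N, 2) measured points (xi, yi)
    correct_xy   : (N, 2) corresponding points of the correct position (Xi, Yi)
    """
    # Rotation: mean of arctan((Z_Bj - Z_Ai) / (X_Bj - X_Ai)) over all point pairs
    # (averaging over all pairs is an assumption about the intended convention).
    angles = []
    n = len(projected_xz)
    for i in range(n):
        for j in range(i + 1, n):
            dx = projected_xz[j, 0] - projected_xz[i, 0]
            dz = projected_xz[j, 1] - projected_xz[i, 1]
            angles.append(np.arctan2(dz, dx))
    alpha = float(np.mean(angles))                      # rotation angle in radians

    # Translation: mean difference between correct and measured coordinates.
    delta = (correct_xy - measured_xy).mean(axis=0)     # (delta_x, delta_y)
    return alpha, delta

# Synthetic check: points along a line tilted by 0.1 rad, shifted by (5, -2).
t = np.linspace(1.0, 50.0, 6)
projected_xz = np.stack([t * np.cos(0.1), t * np.sin(0.1)], axis=1)
measured_xy = np.stack([t, np.zeros_like(t)], axis=1)
alpha, delta = pose_deviation(projected_xz, measured_xy, measured_xy + np.array([5.0, -2.0]))
print(alpha, delta)   # ~0.1, [ 5. -2.]
```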

Finally, the position deviation is sent to the machining center, and the workpiece is moved to the correct position by the rotary table.

The specific embodiments described above explain the purpose, technical solutions and beneficial effects of the present invention in further detail. It should be understood that the above descriptions are only specific embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (4)

1. A method for automatic workpiece positioning and alignment based on line structured light, characterized in that it comprises the following steps:
S1. Fix the workpiece: mount the workpiece on the rotary table of a horizontal machining center with a fixture or fixing elements;
S2. Project line structured light: use a laser to project line structured light onto the workpiece surface, so that a bright light stripe appears on the surface;
S3. Acquire the light stripe image: capture the light stripe with a CCD camera to obtain a two-dimensional image of the stripe;
S4. Image processing: apply image filtering, image segmentation and contour extraction to the acquired image;
S5. Obtain the three-dimensional coordinates of points on the light stripe: convert the two-dimensional image coordinates into three-dimensional actual coordinates using the CCD camera calibration;
S6. Calculate the deviation from the ideal position: the three-dimensional actual coordinates describe the position of the workpiece on the machining center precisely, and comparison with the correct position yields the position deviation, including a rotation and a translation;
S7. Adjust the workpiece pose: move the workpiece to the correct position with the rotary table.

2. The method for automatic workpiece positioning and alignment based on line structured light according to claim 1, characterized in that: in S3, the CCD cameras are calibrated to obtain the conversion relationship between three-dimensional and two-dimensional coordinates; the CCD cameras comprise a left camera and a right camera, and besides the calibration of the intrinsic and extrinsic parameters of each camera, the positional relationship between the two cameras also needs to be calibrated. First the calibration target is placed within the field of view of both cameras. Let the translations between the left and right camera coordinate systems and the world coordinate system be T1 and T2, and the rotation matrices be R1 and R2. The coordinates of a calibration point on the target are Pl = (Xcl, Ycl, Zcl)^T in the left camera coordinate system, Pr = (Xcr, Ycr, Zcr)^T in the right camera coordinate system, and P = (Xw, Yw, Zw)^T in the world coordinate system. Then:
\[ P_l = R_1 P + T_1, \qquad P_r = R_2 P + T_2 \]
from which one obtains
\[ P_r = R P_l + T \]
where
\[ R = R_2 R_1^{-1}, \qquad T = T_2 - R_2 R_1^{-1} T_1 \]
The above expressions give the conversion relationship between the left and right camera coordinate systems. The optical axes of the two cameras are parallel and perpendicular to the image planes, and the distance between the two cameras is d. Cl and Cr denote the left and right camera coordinate systems. Let a three-dimensional point P be (Xcl, Ycl, Zcl) in the left camera coordinate system and (Xcr, Ycr, Zcr) in the right camera coordinate system. The relationship between the pixel coordinate system and the image coordinate system is:
\[ u_l = \frac{x_l}{dx} + u_0, \quad v_l = \frac{y_l}{dy} + v_0, \qquad u_r = \frac{x_r}{dx} + u_0, \quad v_r = \frac{y_r}{dy} + v_0 \]
where u0 and v0 are camera intrinsic parameters, (ul, vl) and (ur, vr) are the coordinates of P in the left and right pixel coordinate systems, and (xl, yl) and (xr, yr) are the coordinates of P in the image coordinate system. Combining this with the relationship between the image coordinate system and the camera coordinate system gives:
\[ X_{cl} = \frac{d\,(u_l - u_0)}{u_l - u_r}, \qquad Y_{cl} = \frac{\alpha\, d\,(v_l - v_0)}{\beta\,(u_l - u_r)}, \qquad Z_{cl} = \frac{\alpha\, d}{u_l - u_r} \]
where f is the focal length of the camera and α = f/dx, β = f/dy are camera intrinsic parameters. With these expressions the coordinate conversion between the image coordinate system and the world coordinate system can be carried out.
3. The method for automatic workpiece positioning and alignment based on line structured light according to claim 1, characterized in that: in S4, after the light stripe is acquired, the stripe center is extracted with the gray-scale centroid method, whose formula is:
\[ X_c = \frac{\sum_{i=1}^{N} X_i\, I(X_i, Y_i)}{\sum_{i=1}^{N} I(X_i, Y_i)}, \qquad Y_c = \frac{\sum_{i=1}^{N} Y_i\, I(X_i, Y_i)}{\sum_{i=1}^{N} I(X_i, Y_i)} \]
where (Xi, Yi) are the coordinates of a pixel in the light stripe cross-section, I(Xi, Yi) is the gray value of that pixel (i = 1, 2, …, N), N is the number of pixels in the cross-section, and (Xc, Yc) are the desired stripe center coordinates. The coordinates of the stripe center point can thus be obtained.
4. The method for automatic workpiece positioning and alignment based on line structured light according to claim 1, characterized in that: in S6, the measured points are projected onto the plane Y = 0. Let the coordinates of two arbitrary points Ai and Bj of this projection be Ai(XAi, ZAi) and Bj(XBj, ZBj). Through Ai draw a segment AiCi,j parallel to the Xw axis and through Bj a segment BjCi,j parallel to the Zw axis; AiCi,j and BjCi,j intersect at Ci,j, so Ci,j has coordinates (XBj, ZAi), and αi,j denotes the angle between AiCi,j and AiBj. Then:
\[ \tan \alpha_{i,j} = \frac{Z_{Bj} - Z_{Ai}}{X_{Bj} - X_{Ai}} \]
from which:
\[ \alpha_{i,j} = \arctan \frac{Z_{Bj} - Z_{Ai}}{X_{Bj} - X_{Ai}} \]
The final required rotation angle is:
\[ \alpha = \frac{2}{N(N-1)} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \alpha_{i,j} \]
where N is the number of coordinate points obtained. Let a measured coordinate point be (xi, yi) and the corresponding point of the correct position be (Xi, Yi); the translation component of the position deviation is then:
\[ \Delta x = \frac{1}{N} \sum_{i=1}^{N} (X_i - x_i), \qquad \Delta y = \frac{1}{N} \sum_{i=1}^{N} (Y_i - y_i) \]
CN202111595872.3A 2021-12-24 2021-12-24 A method of automatic workpiece positioning and alignment based on line structured light Pending CN114387228A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111595872.3A CN114387228A (en) 2021-12-24 2021-12-24 A method of automatic workpiece positioning and alignment based on line structured light


Publications (1)

Publication Number Publication Date
CN114387228A true CN114387228A (en) 2022-04-22

Family

ID=81197560

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111595872.3A Pending CN114387228A (en) 2021-12-24 2021-12-24 A method of automatic workpiece positioning and alignment based on line structured light

Country Status (1)

Country Link
CN (1) CN114387228A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103247053A (en) * 2013-05-16 2013-08-14 大连理工大学 Accurate part positioning method based on binocular microscopy stereo vision
CN110910506A (en) * 2019-12-03 2020-03-24 江苏集萃华科智能装备科技有限公司 Three-dimensional reconstruction method and device based on normal detection, detection device and system
WO2021238923A1 (en) * 2020-05-25 2021-12-02 追觅创新科技(苏州)有限公司 Camera parameter calibration method and device
CN113421291A (en) * 2021-07-16 2021-09-21 北京华睿盛德科技有限公司 Workpiece position alignment method using point cloud registration technology and three-dimensional reconstruction technology

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Song Chaobo: "Research on key technologies for online vision measurement of large-size parts" (大尺寸零件在线视觉测量关键技术研究), CNKI Master's Theses Electronic Journal, no. 07, 15 July 2020 (2020-07-15), pages 7-9 *

Similar Documents

Publication Publication Date Title
CN109598762B (en) High-precision binocular camera calibration method
CN110276808B (en) Method for measuring unevenness of glass plate by combining single camera with two-dimensional code
CN111089569B (en) Large box body measuring method based on monocular vision
CN109029299B (en) Dual-camera measurement device and measurement method for docking angle of pin hole in cabin
CN111369630A (en) A method of multi-line lidar and camera calibration
CN110497187A (en) Sunflower module assembly system based on vision guidance
CN112614098B (en) Blank positioning and machining allowance analysis method based on augmented reality
CN112381827B (en) Rapid high-precision defect detection method based on visual image
CN101231750A (en) A Calibration Method for Binocular Stereo Measuring System
CN106485757A (en) A kind of Camera Calibration of Stereo Vision System platform based on filled circles scaling board and scaling method
CN113324478A (en) Center extraction method of line structured light and three-dimensional measurement method of forge piece
CN113119129A (en) Monocular distance measurement positioning method based on standard ball
CN112258455A (en) Detection method for detecting spatial position of part based on monocular vision
Wang et al. Error analysis and improved calibration algorithm for LED chip localization system based on visual feedback
CN114820817A (en) Calibration method and three-dimensional reconstruction method based on high-precision line laser 3D camera
CN102663727A (en) Method for calibrating parameters by dividing regions in a camera based on CMM moving target
CN112365502B (en) Calibration method based on visual image defect detection
CN114001651A (en) Large-scale long and thin cylinder type component pose in-situ measurement method based on binocular vision measurement and prior detection data
CN106964907A (en) A kind of method and apparatus of laser cutting
CN111968182B (en) Calibration method for nonlinear model parameters of binocular camera
CN112729112A (en) Engine cylinder bore diameter and hole site detection method based on robot vision
CN115830089A (en) Point cloud registration method combining key point information and application thereof
CN112489141B (en) Production line calibration method and device for single-board single-image strip relay lens of vehicle-mounted camera
CN114387228A (en) A method of automatic workpiece positioning and alignment based on line structured light
CN114998422B (en) High-precision rapid three-dimensional positioning system based on error compensation model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination