CN110455187B - A three-dimensional vision-based detection method for box workpiece welds - Google Patents
- Publication number
- CN110455187B (application CN201910774689.6A)
- Authority
- CN
- China
- Prior art keywords
- point
- point cloud
- workpiece
- box
- box body
- Prior art date
- Legal status: Active
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23K—SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
- B23K37/00—Auxiliary devices or processes, not specially adapted for a procedure covered by only one of the other main groups of this subclass
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Optics & Photonics (AREA)
- Mechanical Engineering (AREA)
- Laser Beam Processing (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
A three-dimensional vision-based detection method for the weld seams of a box workpiece, belonging to the technical field of weld seam detection in welding automation. The method first scans the welding space with a Kinect2 device, which has lower precision but a larger measurement range, to determine the rough positions of the vertices of the box workpiece. These vertices are then used in turn as the start and end points of the weld trajectories, and a high-precision line laser scanner scans from these located points in sequence to obtain accurate weld point cloud information. Compared with performing three-dimensional weld detection with a line laser scanner alone, the method greatly increases detection speed while maintaining detection accuracy. The invention can be applied to weld detection of box workpieces.
Description
Technical Field
The invention belongs to the technical field of weld seam detection in welding automation, and in particular relates to a method for detecting the weld seams of a box workpiece.
Background Art
With the rapid development of advanced manufacturing technology, robotic automated welding is gradually replacing manual welding and has become the main development direction in the welding field. Weld seam detection is a key technology in automated welding; its accuracy, efficiency and reliability directly affect the quality of the subsequent weld.
As a typical welded structure, the box workpiece is widely used in aerospace, general industry and shipbuilding, so automating the welding of box workpiece seams has high practical value.
Using a line laser scanner to detect the weld seams of a box workpiece offers high accuracy and good reliability and suits complex industrial welding environments, but its measurement range is small and the point cloud it produces is dense. When the pose of the workpiece is unknown, the scanner's narrow field of view means that scanning the whole workpiece to identify the weld consumes a great deal of time. The scan also generates a large amount of point cloud data, which increases the difficulty of data processing and lowers the efficiency of weld detection.
Summary of the Invention
The purpose of the invention is to solve the low efficiency of detecting the weld seams of a box workpiece with a line laser scanner alone, by proposing a detection method for box workpiece weld seams that combines a Kinect2 sensor with a line laser scanner.
The technical scheme adopted by the invention to solve the above problem is a three-dimensional vision-based detection method for the weld seams of a box workpiece, comprising the following steps:
Step 1. Place the box workpiece to be inspected on a flat worktable and acquire one frame of three-dimensional point cloud data covering the box workpiece and the worktable space.

Step 2. Using the random sample consensus (RANSAC) method, segment the point cloud of the flat worktable from the point cloud acquired in Step 1, obtaining the remaining point cloud and the equation of the plane in which the worktable lies;

then cluster the remaining point cloud to separate out the point cloud of the box workpiece.

Step 3. Preprocess the box workpiece point cloud obtained in Step 2 to obtain the box workpiece point cloud with outliers removed.

Step 4. Axially align the outlier-free box workpiece point cloud with the camera coordinate system by a rotation transformation, obtaining the axially aligned box workpiece point cloud.

Step 5. Compute the axis-aligned bounding box of the point cloud obtained in Step 4 and denote its four upper vertices as a, b, c and d.

Step 6. Within the point cloud enclosed by the bounding box, find the point closest to each of the vertices a, b, c and d: the point closest to a is denoted a1, the point closest to b is denoted b1, the point closest to c is denoted c1, and the point closest to d is denoted d1.

Step 7. Transform the points a1, b1, c1 and d1 back, through the inverse rotation, into the outlier-free box workpiece point cloud of Step 3 to obtain the four upper vertices of the box workpiece, denoted a2, b2, c2 and d2.

Step 8. Project the points a2, b2, c2 and d2 onto the plane of the worktable to obtain the four lower vertices of the box workpiece.

Step 9. Take the four upper vertices from Step 7 and the four lower vertices from Step 8 as the rough positions of the box workpiece vertices, use these rough vertex positions in turn as the start and end points of the weld trajectories, and scan with the line laser scanner from the start of each weld trajectory in sequence to obtain accurate weld point cloud information.
The beneficial effects of the invention are as follows. The invention proposes a three-dimensional vision-based detection method for the weld seams of a box workpiece. A Kinect2 device, which has lower precision but a larger measurement range, scans the welding space to determine the rough positions of the box workpiece vertices; these are then used in turn as the start and end points of the weld trajectories, and a high-precision line laser scanner scans from these located points in sequence to obtain accurate weld point cloud information. Compared with three-dimensional weld detection using a line laser scanner alone, the method greatly increases detection speed while maintaining detection accuracy, satisfies the efficiency, accuracy and reliability requirements of automated welding, and provides a basis for the subsequent welding work.
Brief Description of the Drawings
Fig. 1 is a flow chart of the three-dimensional vision-based detection method for box workpiece weld seams of the present invention;

Fig. 2 shows one frame of welding space point cloud acquired by the Kinect2 device in the example;

Fig. 3 shows the Euclidean clustering of the remaining point cloud after the worktable plane has been segmented out in the example;

Fig. 4 shows the axis-aligned bounding box of the box workpiece point cloud in the example and the points found closest to the vertices a, b, c and d (enlarged locally to show the vertices);

Fig. 5 shows the eight vertices a2, b2, c2, d2 and a3, b3, c3, d3 of the box workpiece in the example (enlarged locally to show the vertices).
Detailed Description of the Embodiments
Embodiment 1: This embodiment is described with reference to Fig. 1. The three-dimensional vision-based detection method for the weld seams of a box workpiece of this embodiment comprises the following steps:

Step 1. Place the box workpiece to be inspected on a flat worktable and acquire one frame of three-dimensional point cloud data covering the box workpiece and the worktable space.

Step 2. Using the random sample consensus (RANSAC) method, segment the point cloud of the flat worktable from the point cloud acquired in Step 1, obtaining the remaining point cloud and the equation of the plane in which the worktable lies;

then cluster the remaining point cloud to separate out the point cloud of the box workpiece.

Step 3. Using a statistical filtering algorithm, preprocess the box workpiece point cloud obtained in Step 2 to obtain the box workpiece point cloud with outliers removed.

Step 4. Axially align the outlier-free box workpiece point cloud with the camera coordinate system by a rotation transformation, obtaining the axially aligned box workpiece point cloud.

Step 5. Compute the axis-aligned bounding box (AABB) of the point cloud obtained in Step 4 and denote its four upper vertices as a, b, c and d.

Step 6. Within the point cloud enclosed by the bounding box, find the point closest to each of the vertices a, b, c and d: the point closest to a is denoted a1, the point closest to b is denoted b1, the point closest to c is denoted c1, and the point closest to d is denoted d1.

Step 7. Transform the points a1, b1, c1 and d1 back, through the inverse rotation, into the outlier-free box workpiece point cloud of Step 3 to obtain the four upper vertices of the box workpiece, denoted a2, b2, c2 and d2.

Step 8. Project the points a2, b2, c2 and d2 onto the plane of the worktable to obtain the four lower vertices of the box workpiece.

Step 9. Take the four upper vertices from Step 7 and the four lower vertices from Step 8 as the rough positions of the box workpiece vertices, use these rough vertex positions in turn as the start and end points of the weld trajectories, and scan with the line laser scanner from the start of each weld trajectory in sequence to obtain accurate weld point cloud information.

In this embodiment, the Kinect2 device scans the welding space to determine the rough positions of the box workpiece vertices, which are used in turn as the start and end points of the weld trajectories; the line laser scanner then scans from these located points in sequence. Accurate weld point cloud information is thereby obtained while the efficiency of weld detection is improved, providing a basis for the subsequent welding work.
Embodiment 2: This embodiment differs from Embodiment 1 in that the specific process of Step 1 is as follows:

Place the box workpiece to be inspected on the flat worktable and acquire one frame of three-dimensional point cloud data of the box workpiece and the worktable space with the Kinect2 device. The point cloud is expressed in the camera coordinate system of the Kinect2 device, whose origin is the centre of the Kinect2 depth camera; the positive X axis points to the left of the Kinect2 viewing direction, the positive Y axis points straight up from the viewing direction, and the positive Z axis is the Kinect2 viewing direction, with the X, Y and Z axes forming a right-handed coordinate system.

The Kinect2 device is fixed obliquely above the flat worktable and acquires one frame of three-dimensional point cloud data of the welding space from this overhead view.
The other steps and parameters are the same as in Embodiment 1.
Embodiment 3: This embodiment differs from Embodiment 2 in that the specific process of Step 2 is as follows:

Step 2-1. Set a distance threshold dth and a maximum number of iterations.

Step 2-2. Let the equation of the worktable plane in the camera coordinate system be Ax + By + Cz + D = 0, where A, B, C and D are the plane coefficients. Randomly draw three non-collinear points from the point cloud acquired in Step 1 and use them to estimate the plane coefficients.

Step 2-3. After the three non-collinear points have been drawn, compute the distance di' from each remaining point to the plane of Step 2-2, where di' is the distance of the i'-th remaining point to that plane, i.e. di' = |A·xi' + B·yi' + C·zi' + D| / sqrt(A² + B² + C²).

If di' < dth, the i'-th remaining point is counted as an inlier of the plane of Step 2-2; once the distance of every remaining point has been computed, count the total number of inliers of the plane of Step 2-2.

Step 2-4. Repeat Steps 2-2 and 2-3 until the set maximum number of iterations is reached. Sort the inlier counts obtained in each iteration in descending order, take the plane coefficients corresponding to the largest count as the optimal estimate, and obtain the equation of the worktable plane from this optimal estimate.

Delete the inliers of this best plane from the point cloud acquired in Step 1 to obtain the remaining point cloud.

Step 2-5. Perform Euclidean clustering on the remaining point cloud obtained in Step 2-4 (check the Euclidean distance between any two points of the remaining cloud; if the distance between two points is smaller than a given threshold the two points are considered to belong to the same cluster, otherwise they do not) and separate out the box workpiece point cloud.
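For illustration only, the following is a minimal numpy sketch of the RANSAC plane segmentation described in Steps 2-1 to 2-4. The function name, the commented input file and the default threshold and iteration values are assumptions made for this sketch, not values specified here (the example later in this description uses dth = 0.01 m and 300 iterations).

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.01, max_iters=300, rng=np.random.default_rng(0)):
    """Fit a plane Ax + By + Cz + D = 0 to an (n, 3) point cloud with RANSAC."""
    best_inliers, best_coeffs = None, None
    for _ in range(max_iters):
        # Draw three random points and reject (near-)collinear triples.
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        if np.linalg.norm(normal) < 1e-9:
            continue
        D = -normal.dot(p1)
        # Point-to-plane distance of every point in the cloud.
        dist = np.abs(points @ normal + D) / np.linalg.norm(normal)
        inliers = dist < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_coeffs = inliers, (*normal, D)
    return best_coeffs, best_inliers

# Usage sketch: remove the table plane, keep the rest for Euclidean clustering.
# cloud = np.loadtxt("scene.xyz")        # hypothetical input file
# coeffs, mask = ransac_plane(cloud)
# remaining = cloud[~mask]
```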
The other steps and parameters are the same as in Embodiment 2.
Embodiment 4: This embodiment differs from Embodiment 3 in that the specific process of Step 3 is as follows:

Step 3-1. For a point (xj, yj, zj) in the box workpiece point cloud, find its k nearest neighbours (xi, yi, zi), i = 1, 2, ..., k, in the cloud (a KD-tree is used to speed up the search), and compute the arithmetic mean of the distances from these k neighbours to (xj, yj, zj), that is, the neighbourhood average distance of the point: d̄j = (1/k) · Σ_{i=1..k} sqrt((xj − xi)² + (yj − yi)² + (zj − zi)²).

Step 3-2. Using the method of Step 3-1, compute the neighbourhood average distance of every point in the box workpiece point cloud, then compute the mean μ = (1/n) · Σ_{j=1..n} d̄j and the standard deviation σ = sqrt((1/n) · Σ_{j=1..n} (d̄j − μ)²) of these neighbourhood average distances, where n is the total number of points in the box workpiece point cloud.

Step 3-3. Set a confidence interval for the standard distance, R = [μ − p·σ, μ + p·σ], where p is the standard deviation weight. Points whose neighbourhood average distance lies outside this interval are regarded as outliers and are filtered out of the box workpiece point cloud, yielding the box workpiece point cloud with outliers removed.

The k nearest neighbours (xi, yi, zi) of the point (xj, yj, zj) are the k points of the box workpiece point cloud closest to (xj, yj, zj).
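The statistical filter of Steps 3-1 to 3-3 could be sketched as follows. This is an illustrative implementation that assumes scipy's cKDTree for the k-nearest-neighbour search; the function name and the default values of k and p are assumptions for the sketch (the example later uses k = 50 and p = 1.0).

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_removal(points, k=50, p=1.0):
    """Keep points whose mean k-NN distance lies inside [mu - p*sigma, mu + p*sigma]."""
    tree = cKDTree(points)
    # Each point is returned as its own first neighbour, so request k + 1 neighbours.
    dists, _ = tree.query(points, k=k + 1)
    mean_dist = dists[:, 1:].mean(axis=1)          # neighbourhood average distance per point
    mu, sigma = mean_dist.mean(), mean_dist.std()
    keep = (mean_dist >= mu - p * sigma) & (mean_dist <= mu + p * sigma)
    return points[keep]
```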
The other steps and parameters are the same as in Embodiment 3.
Embodiment 5: This embodiment differs from Embodiment 4 in that the specific process of Step 4 is as follows:

Step 4-1. Align the outlier-free box workpiece point cloud with the Z axis of the camera coordinate system.

The normal vector of the worktable plane is n1 = (A, B, C), and the direction vector of the camera Z axis is n2 = (0, 0, 1). The rotation axis n = (nx, ny, nz) and rotation angle θ that rotate n1 onto n2 are

n = (n1 × n2) / |n1 × n2|, θ = arccos(n1 · n2 / (|n1| · |n2|)),

where nx, ny and nz are the components of the rotation axis n along the X, Y and Z axes of the camera coordinate system.

Using the Rodrigues rotation formula, the rotation matrix R1 that aligns the outlier-free box workpiece point cloud with the camera Z axis is

R1 = I + sin θ · K + (1 − cos θ) · K²,

where I is the 3×3 identity matrix and K is the skew-symmetric matrix of the unit rotation axis (nx, ny, nz).

Applying R1 to the outlier-free box workpiece point cloud S gives the point cloud after the first transformation, S′ = R1 · S (each point of S is rotated by R1).

Step 4-2. Align the first-transformed point cloud S′ with the X and Y axes of the camera coordinate system.

Using the random sample consensus method (the same procedure as Steps 2-1 to 2-4), fit the side-plane equation of the first-transformed cloud S′: A1x + B1y + C1z + D1 = 0, where A1, B1, C1 and D1 are the side-plane coefficients, so the normal vector of the side plane is (A1, B1, C1). The rotation angle required to align S′ with the X and Y axes of the camera coordinate system is then θ1 = atan2(A1, B1), where atan2(A1, B1) is the angle, in the XOY plane, between the ray starting at the camera origin O and pointing toward (B1, A1) and the positive X axis.

The rotation matrix R2 that aligns S′ with the X and Y axes of the camera coordinate system is the rotation by θ1 about the Z axis.

Applying R2 to S′ gives the point cloud after the second transformation, S″ = R2 · S′, which is taken as the axially aligned box workpiece point cloud.
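A possible numpy sketch of the two alignment rotations of Steps 4-1 and 4-2 is shown below. The Rodrigues construction follows the standard formula; the sign convention of the in-plane angle, the function names and the row-wise point storage are assumptions made for illustration rather than details fixed by this description.

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix from a unit axis and an angle via the Rodrigues formula."""
    nx, ny, nz = axis
    K = np.array([[0.0, -nz,  ny],
                  [ nz, 0.0, -nx],
                  [-ny,  nx, 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def align_with_camera_axes(cloud, plane_abc, side_abc):
    """Align the table normal with Z (Step 4-1), then a side face with X/Y (Step 4-2)."""
    n1 = np.asarray(plane_abc, float)
    n1 /= np.linalg.norm(n1)                       # table-plane normal (A, B, C)
    n2 = np.array([0.0, 0.0, 1.0])                 # camera Z axis
    axis = np.cross(n1, n2)
    axis /= np.linalg.norm(axis)
    theta = np.arccos(np.clip(n1 @ n2, -1.0, 1.0))
    R1 = rodrigues(axis, theta)
    cloud1 = cloud @ R1.T                          # first transformation S'
    A1, B1, _ = side_abc                           # side-plane coefficients fitted on S'
    phi = np.arctan2(A1, B1)                       # in-plane angle to the X axis
    R2 = rodrigues(np.array([0.0, 0.0, 1.0]), phi) # rotation about Z
    return cloud1 @ R2.T, R1, R2                   # S'' plus both rotation matrices
```

Keeping R1 and R2 is deliberate: Step 7 needs them to map the detected vertices back to the original camera frame.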
The other steps and parameters are the same as in Embodiment 4.
Embodiment 6: This embodiment differs from Embodiment 5 in that the specific process of Step 5 is as follows:

Step 5-1. Traverse the axially aligned box workpiece point cloud and obtain its maximum and minimum coordinates along the X, Y and Z axes of the camera coordinate system: the maximum and minimum X coordinates are denoted xmax and xmin, the maximum and minimum Y coordinates are denoted ymax and ymin, and the maximum and minimum Z coordinates are denoted zmax and zmin.

Step 5-2. Combine the maximum and minimum coordinates of Step 5-1 along the X, Y and Z axes; the eight different combinations are taken as the eight vertices of the axis-aligned bounding box of the aligned box workpiece point cloud, from which the bounding box is constructed. The coordinates of its four upper vertices are a(xmax, ymin, zmax), b(xmax, ymax, zmax), c(xmin, ymax, zmax) and d(xmin, ymin, zmax).
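Step 5 amounts to taking coordinate-wise extrema of the aligned cloud; a minimal sketch (function name assumed) is:

```python
import numpy as np

def upper_aabb_vertices(cloud):
    """Four upper vertices a, b, c, d of the axis-aligned bounding box of an (n, 3) cloud."""
    x_min, y_min, _ = cloud.min(axis=0)
    x_max, y_max, z_max = cloud.max(axis=0)
    return np.array([[x_max, y_min, z_max],   # a
                     [x_max, y_max, z_max],   # b
                     [x_min, y_max, z_max],   # c
                     [x_min, y_min, z_max]])  # d
```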
The other steps and parameters are the same as in Embodiment 5.
Embodiment 7: This embodiment differs from Embodiment 6 in that, in Step 6, the points of the cloud enclosed by the axis-aligned bounding box that are closest to the vertices a, b, c and d are found by means of a KD-tree.
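An illustrative nearest-point query using scipy's cKDTree, consistent with the KD-tree search described here (the function name is an assumption):

```python
from scipy.spatial import cKDTree

def closest_cloud_points(cloud, query_vertices):
    """For each bounding-box upper vertex, return the closest real point of the cloud."""
    tree = cKDTree(cloud)
    _, idx = tree.query(query_vertices)   # one nearest neighbour per query vertex
    return cloud[idx]                     # a1, b1, c1, d1
```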
The other steps and parameters are the same as in Embodiment 6.
In the Euclidean clustering, the statistical filtering and the nearest-point searches, the invention uses a KD-tree structure to improve search efficiency and speed up the program.
Embodiment 8: This embodiment differs from Embodiment 7 in that the specific process of Step 7 is as follows:

According to the rotation transformations of Step 4, the point a1(xa1, ya1, za1) is transformed back into the outlier-free box workpiece point cloud of Step 3 by applying the inverse rotations, giving the upper vertex a2(xa2, ya2, za2) of the box workpiece:

(xa2, ya2, za2)ᵀ = R1⁻¹ · R2⁻¹ · (xa1, ya1, za1)ᵀ,

where (xa1, ya1, za1) are the coordinates of a1 in the camera coordinate system and (xa2, ya2, za2) are the coordinates of a2 in the camera coordinate system; since R1 and R2 are rotation matrices, their inverses equal their transposes.

In the same way, the upper vertex b2(xb2, yb2, zb2) of the box workpiece corresponding to b1, the upper vertex c2(xc2, yc2, zc2) corresponding to c1 and the upper vertex d2(xd2, yd2, zd2) corresponding to d1 are obtained.
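A sketch of this inverse transformation, assuming the two rotation matrices R1 and R2 from the alignment step are available and that points are stored as rows (as in the earlier sketches):

```python
import numpy as np

def back_to_original_frame(points_aligned, R1, R2):
    """Undo the two alignment rotations; inverse of a rotation matrix is its transpose."""
    # Forward mapping was p'' = R2 @ R1 @ p, so the inverse in row form is p''_row @ R2 @ R1.
    return points_aligned @ R2 @ R1
```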
The other steps and parameters are the same as in Embodiment 7.
Embodiment 9: This embodiment differs from Embodiment 8 in that the specific process of Step 8 is as follows:

Let the projection of the point a2(xa2, ya2, za2) be a3(xa3, ya3, za3). The vector from a2 to a3 is parallel to the plane normal (A, B, C), and a3 lies in the worktable plane Ax + By + Cz + D = 0. Solving these two conditions simultaneously gives

(xa3, ya3, za3) = (xa2, ya2, za2) − ((A·xa2 + B·ya2 + C·za2 + D) / (A² + B² + C²)) · (A, B, C),

where (xa3, ya3, za3) are the coordinates of a3 in the camera coordinate system.

In the same way, the projection b3(xb3, yb3, zb3) of b2(xb2, yb2, zb2), the projection c3(xc3, yc3, zc3) of c2(xc2, yc2, zc2) and the projection d3(xd3, yd3, zd3) of d2(xd2, yd2, zd2) are obtained, and the points a3, b3, c3 and d3 are taken as the four lower vertices of the box workpiece.

The projection plane of this embodiment is the plane described by the plane equation obtained in Step 2.
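The projection of Step 8 can be written compactly. The following sketch assumes the plane coefficients (A, B, C, D) from Step 2 and row-wise point storage; the function name is an assumption.

```python
import numpy as np

def project_onto_plane(points, plane):
    """Project points onto the table plane Ax + By + Cz + D = 0 along its normal."""
    A, B, C, D = plane
    n = np.array([A, B, C])
    t = (points @ n + D) / (n @ n)   # signed offset of each point, scaled by |n|^2
    return points - np.outer(t, n)   # a3, b3, c3, d3
```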
The other steps and parameters are the same as in Embodiment 8.
Embodiment 10: This embodiment differs from Embodiments 1 to 9 in that the specific process of Step 9 is as follows:

Using a system calibration procedure, determine the transformation between the coordinate system of the robot arm carrying the line laser scanner and the camera coordinate system of the Kinect2 device, and transform the rough positions of the box workpiece vertices into the robot-arm coordinate system to obtain the transformed vertex positions.

Take the transformed vertex positions in turn as the start and end points of the weld trajectories, and scan with the line laser scanner from the start of each weld trajectory in sequence to obtain accurate weld point cloud information.

The system calibration method used in this embodiment is hand-eye calibration, which falls into the two categories Eye-in-Hand and Eye-to-Hand. The positional relationship between the robot arm and the Kinect2 device in the invention is Eye-to-Hand (the Kinect2 device is mounted at a fixed position off the robot arm and does not move with the arm while the arm is working). With the Eye-to-Hand hand-eye calibration method, the transformation between the camera coordinate system and the robot-arm coordinate system is obtained, so the end of the robot arm can be moved to the rough positions of the box workpiece vertices.
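A minimal sketch of applying the calibrated transformation of Step 9. The 4×4 homogeneous matrix T_robot_cam is assumed to come from the Eye-to-Hand hand-eye calibration described above, and the visiting order noted in the comment is an assumption; the description only states that the vertices are used in turn as trajectory start and end points.

```python
import numpy as np

def camera_to_robot(points_cam, T_robot_cam):
    """Map the eight rough vertices from the Kinect2 camera frame to the robot-arm frame."""
    homo = np.hstack([points_cam, np.ones((len(points_cam), 1))])  # (n, 4) homogeneous points
    return (homo @ T_robot_cam.T)[:, :3]

# The line laser scan would then visit the transformed vertices in sequence, for example
# a2 -> b2 -> c2 -> d2 for the upper seams and a3 -> b3 -> c3 -> d3 for the lower seams
# (hypothetical ordering chosen only to illustrate the idea of scanning between vertex pairs).
```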
The other steps and parameters are the same as in Embodiments 1 to 9.
Example
The following example is used to verify the beneficial effects of the invention:
In this example, the three-dimensional vision-based detection method for box workpiece weld seams is used to detect the weld seams of a box workpiece on the worktable plane, proceeding as follows:

Step 1. Acquire one frame of three-dimensional point cloud data of the welding space with the Kinect2 device, as shown in Fig. 2.

Step 2. Apply the random sample consensus (RANSAC) algorithm to the point cloud acquired in Step 1, with a maximum of 300 iterations and a distance threshold dth = 0.01 m, to segment out the worktable plane point cloud and estimate the worktable plane equation −0.51861x + 0.59566y + 0.613379z − 1.01109 = 0; then perform Euclidean clustering on the segmented cloud with a Euclidean distance threshold of 0.1 to separate out the box workpiece point cloud, as shown in Fig. 3.

Step 3. Apply the statistical filtering algorithm with k = 50 nearest neighbours and standard deviation weight p = 1.0 to preprocess the box workpiece point cloud of Step 2 and remove outliers.

Step 4. Axially align the outlier-free box workpiece point cloud of Step 3 with the Kinect2 camera coordinate system by the rotation transformations.

Step 5. Compute the overall axis-aligned bounding box of the aligned box workpiece point cloud of Step 4 and denote its four upper vertices as a, b, c and d.

Step 6. In the point cloud enclosed by the bounding box, find the points closest to a, b, c and d, denoted a1, b1, c1 and d1 respectively, as shown in Fig. 4.

Step 7. Using the previous rotation transformations, transform a1, b1, c1 and d1 back into the point cloud space of the outlier-free box workpiece, obtaining the four upper vertices of the box workpiece, denoted a2, b2, c2 and d2.

Step 8. Project a2, b2, c2 and d2 onto the worktable plane to obtain the four lower vertices a3, b3, c3 and d3 of the box workpiece, as shown in Fig. 5.

Step 9. Use the rough vertex positions determined in Steps 7 and 8 in turn as the start and end points of the weld trajectories, and scan with the line laser scanner from these located points in sequence to obtain accurate weld point cloud information.
The above example only illustrates the computational model and workflow of the invention in detail and does not limit its embodiments. Those of ordinary skill in the art can make changes or variations of other forms on the basis of the above description; not all embodiments can be enumerated here, and any obvious change or variation derived from the technical scheme of the invention remains within the scope of protection of the invention.
Claims (9)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910774689.6A (granted as CN110455187B) | 2019-08-21 | 2019-08-21 | A three-dimensional vision-based detection method for box workpiece welds |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910774689.6A (granted as CN110455187B) | 2019-08-21 | 2019-08-21 | A three-dimensional vision-based detection method for box workpiece welds |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN110455187A | 2019-11-15 |
| CN110455187B | 2020-06-09 |
Family

- ID=68488299

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910774689.6A (CN110455187B, Active) | A three-dimensional vision-based detection method for box workpiece welds | 2019-08-21 | 2019-08-21 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN110455187B (en) |
Families Citing this family (9)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111179221B * | 2019-12-09 | 2024-02-09 | 中建科工集团有限公司 | Method, equipment and storage medium for detecting welding groove |
| CN113177983B * | 2021-03-25 | 2022-10-18 | 埃夫特智能装备股份有限公司 | Fillet weld positioning method based on point cloud geometric features |
| CN113188442B * | 2021-04-30 | 2022-03-15 | 哈尔滨工业大学 | Multi-angle point cloud measuring tool for seat furniture and splicing method thereof |
| CN113793344B * | 2021-08-31 | 2023-09-29 | 无锡砺成智能装备有限公司 | Impeller weld joint positioning method based on three-dimensional point cloud |
| CN114170176B * | 2021-12-02 | 2024-04-02 | 南昌大学 | Automatic detection method for welding seam of steel grating based on point cloud |
| CN114757878A * | 2022-03-10 | 2022-07-15 | 中国科学院深圳先进技术研究院 | Welding teaching method, device, terminal device and computer-readable storage medium |
| CN114419046B * | 2022-03-30 | 2022-06-28 | 季华实验室 | Weld identification method, device, electronic equipment and storage medium for H-beam |
| CN114782526B * | 2022-06-22 | 2022-09-02 | 季华实验室 | Calculation method, device, electronic equipment and storage medium for welding seam trajectory of H-beam |
| CN115439644B * | 2022-08-19 | 2023-08-08 | 广东领慧数字空间科技有限公司 | Similar point cloud data alignment method |
Citations (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109865919A * | 2019-04-09 | 2019-06-11 | 云南安视智能设备有限公司 | A kind of real-time weld seam tracing system of right angle welding robot line laser |

Family Cites Families (7)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101961819B * | 2009-07-22 | 2013-10-30 | 中国科学院沈阳自动化研究所 | Device for realizing laser welding and seam tracking and control method thereof |
| NL2015839B1 * | 2015-11-23 | 2017-06-07 | Exner Ingenieurstechniek B V | A method of, as well as an industrial robot for performing a processing step at a work piece |
| KR102634535B1 * | 2016-12-29 | 2024-02-07 | 한화오션 주식회사 | Method for recognizing touch teaching point of workpiece using point cloud analysis |
| CN107914084A * | 2017-11-16 | 2018-04-17 | 惠州市契贝科技有限公司 | Curved sheets and its method for laser welding, laser welding system |
| CN108145314A * | 2017-12-29 | 2018-06-12 | 南京理工大学 | One kind be welded plant machinery people at a high speed identification weld seam Intelligent welding system and method |
| CN109541997B * | 2018-11-08 | 2020-06-02 | 东南大学 | A fast and intelligent programming method for spraying robot for plane/approximate plane workpiece |
| CN109978865A * | 2019-03-28 | 2019-07-05 | 中核建中核燃料元件有限公司 | A kind of method, apparatus for the detection of nuclear fuel rod face of weld |
- 2019-08-21: Application CN201910774689.6A filed in CN; patent CN110455187B, status Active
Patent Citations (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109865919A * | 2019-04-09 | 2019-06-11 | 云南安视智能设备有限公司 | A kind of real-time weld seam tracing system of right angle welding robot line laser |

Non-Patent Citations (1)

| Title |
|---|
| "Modeling outlier formation in scanning reflective surfaces using a laser stripe scanner"; Yutao Wang et al.; Measurement; 2014-12-31; full text * |
Also Published As

| Publication number | Publication date |
|---|---|
| CN110455187A | 2019-11-15 |
Similar Documents

| Publication | Title |
|---|---|
| CN110455187B (en) | A three-dimensional vision-based detection method for box workpiece welds |
| CN110116407B (en) | Flexible robot pose measurement method and device |
| CN110014426B (en) | Method for grabbing symmetrically-shaped workpieces at high precision by using low-precision depth camera |
| CN111476841B (en) | A method and system for recognition and positioning based on point cloud and image |
| CN101359400B (en) | Process for positioning spatial position of pipe mouth based on vision |
| CN103886593B (en) | A kind of based on three-dimensional point cloud curved surface circular hole detection method |
| CN106643551A (en) | Blade shape rapid scanning device and method |
| CN114055255B (en) | A Path Planning Method for Surface Grinding of Large and Complex Components Based on Real-time Point Cloud |
| CN102636110A (en) | Reference detecting device of automatic drilling and riveting system of airplane components and detecting method thereof |
| CN111598172B (en) | Dynamic target grabbing gesture rapid detection method based on heterogeneous depth network fusion |
| CN113096094A (en) | Three-dimensional object surface defect detection method |
| CN112802070B (en) | Aircraft skin local point cloud positioning method based on multi-descriptor voting |
| CN108994844A (en) | A kind of scaling method and device of sanding operation arm trick relationship |
| CN112361958A (en) | Line laser and mechanical arm calibration method |
| CN116740060B (en) | Dimensional detection method of prefabricated components based on point cloud geometric feature extraction |
| CN116604212A (en) | Robot weld joint identification method and system based on area array structured light |
| CN114434036B (en) | Three-dimensional vision system for gantry robot welding of large ship structural member and operation method |
| CN118721211A (en) | A robot automatic scanning path planning method based on triangular mesh model |
| CN118386236A (en) | Teaching-free robot autonomous welding polishing method based on combination of line laser scanning and stereoscopic vision |
| CN117824502A (en) | Laser three-dimensional scanning-based non-contact detection method for assembling complex assembled workpiece |
| CN118122642A (en) | Leaf spring pressure sorting method and sorting system |
| CN114087989B (en) | Method and system for measuring three-dimensional coordinates of circle center of positioning hole of automobile cylinder workpiece |
| CN116051540A (en) | Method and system for acquiring position and pose of transformer terminals based on point cloud model |
| Ding et al. | Welding Trajectory Optimization Based on the K-means Algorithm in Welding Robot |
| CN114485468B (en) | Multi-axis linkage composite measurement system and full contour automated measurement method for micro parts |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |