CN110455187B - Three-dimensional vision-based box workpiece weld joint detection method - Google Patents

Three-dimensional vision-based box workpiece weld joint detection method

Info

Publication number
CN110455187B
CN110455187B (application CN201910774689.6A)
Authority
CN
China
Prior art keywords
point
point cloud
workpiece
box
box body
Prior art date
Legal status
Active
Application number
CN201910774689.6A
Other languages
Chinese (zh)
Other versions
CN110455187A (en)
Inventor
高会军 (Gao Huijun)
李湛 (Li Zhan)
王喜东 (Wang Xidong)
何朕 (He Zhen)
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201910774689.6A priority Critical patent/CN110455187B/en
Publication of CN110455187A publication Critical patent/CN110455187A/en
Application granted granted Critical
Publication of CN110455187B publication Critical patent/CN110455187B/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B23 MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23K SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
    • B23K37/00 Auxiliary devices or processes, not specially adapted to a procedure covered by only one of the preceding main groups
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Mechanical Engineering (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Laser Beam Processing (AREA)

Abstract

A three-dimensional vision-based method for detecting weld seams on box workpieces, belonging to the technical field of weld detection in welding automation. The method scans the welding space with a Kinect2 device, which has low precision but a large measuring range, to determine the rough positions of the box workpiece vertices; these rough positions are then used in turn as the start and end points of the weld trajectory, and a high-precision line laser scanner scans in sequence from the located points to obtain accurate weld point cloud information. Compared with detecting the weld using a line laser scanner alone, the method greatly improves detection speed while preserving detection accuracy. The invention is applicable to weld detection on box workpieces.

Description

Three-dimensional vision-based box workpiece weld joint detection method
Technical Field
The invention belongs to the technical field of weld joint detection in welding automation, and particularly relates to a method for detecting a weld joint of a box workpiece.
Background
With the rapid development of advanced manufacturing technology, robotic automated welding is gradually replacing manual welding and has become the main development direction in the welding field. Weld detection is a key technology in automated welding; its accuracy, efficiency, and reliability directly affect the quality of subsequent welding.
As a typical welded structure, the box workpiece is widely used in aerospace, industrial, and shipbuilding applications, so automated welding of box workpiece seams has high practical value.
Detecting box workpiece welds with a line laser scanner offers high precision and good reliability and suits complex industrial welding environments, but the scanner's measurement range is small and the point cloud it generates is dense. When the workpiece pose is unknown, the small field of view means that scanning the workpiece to identify the weld consumes a large amount of time; the scan also produces a large volume of point cloud data, which increases the difficulty of data processing and lowers weld detection efficiency.
Disclosure of Invention
The invention aims to solve the problem that the existing line laser scanner is low in efficiency of detecting welding seams of box workpieces, and provides a method for detecting the welding seams of the box workpieces based on the combination of Kinect2 and the line laser scanner.
The technical scheme adopted by the invention for solving the technical problems is as follows: a method for detecting a box workpiece weld joint based on three-dimensional vision comprises the following steps:
placing a box body workpiece to be detected on a plane workbench, and collecting three-dimensional point cloud data of a frame of box body workpiece and the space of the plane workbench;
secondly, dividing point cloud data of a plane workbench from the three-dimensional point cloud data acquired in the first step by adopting a random sampling consistency method, and acquiring the rest point cloud data and a plane equation of a plane where the workbench is located;
clustering the residual point cloud data, and separating the point cloud data of the box body workpiece;
step three, preprocessing the box body workpiece point cloud data obtained in the step two to obtain box body workpiece point cloud with outliers removed;
step four, axially aligning the box body workpiece point cloud with the outlier removed and a camera coordinate system through rotation transformation to obtain an axially aligned box body workpiece point cloud;
step five, solving the axial bounding box of the box body workpiece point cloud after axial alignment obtained in the step four, and recording four upper vertexes of the axial bounding box as a, b, c and d;
step six, finding, in the point cloud enclosed by the axial bounding box, the point closest to each of the vertices a, b, c and d: the point closest to a is denoted a1, the point closest to b is denoted b1, the point closest to c is denoted c1, and the point closest to d is denoted d1;
step seven, transforming the points a1, b1, c1 and d1 back, through the rotation transformation, into the outlier-free box workpiece point cloud obtained in step three, obtaining the four upper vertices of the box workpiece, denoted a2, b2, c2 and d2 respectively;
step eight, projecting the points a2, b2, c2 and d2 onto the plane of the workbench to obtain the four lower vertices of the box workpiece;
step nine, taking the four upper vertices obtained in step seven and the four lower vertices obtained in step eight as the rough positions of the box workpiece vertices, using these rough vertex positions in turn as the start point and end point of the weld trajectory, and scanning in sequence with a line laser scanner from the start point of the weld trajectory, thereby obtaining accurate weld point cloud information.
The invention has the following beneficial effects: a Kinect2 device with lower precision but a larger measurement range scans the welding space to determine the rough positions of the box workpiece vertices; these positions are then used in turn as the start and end points of the weld trajectory, and a high-precision line laser scanner scans in sequence from the located points to obtain accurate weld point cloud information. Compared with detecting the weld using a line laser scanner alone, the method greatly improves detection speed while preserving detection accuracy, meets the efficiency, accuracy, and reliability requirements of automated welding, and lays a foundation for subsequent welding work.
Drawings
FIG. 1 is a flow chart of a method for detecting a weld of a box workpiece based on three-dimensional vision according to the present invention;
FIG. 2 is an effect diagram of a frame of welding space point cloud collected by the Kinect2 device in the embodiment;
FIG. 3 is an effect diagram of Euclidean clustering performed on the remaining point clouds after the point clouds on the worktable plane are segmented in the embodiment;
FIG. 4 is an effect diagram of computing the axial bounding box of the box workpiece point cloud and finding the points closest to the vertices a, b, c and d (enlarged locally to show the vertices);
FIG. 5 is an effect diagram of the eight vertices a2, b2, c2, d2 and a3, b3, c3, d3 of the box workpiece in the embodiment (enlarged locally to show the vertices).
Detailed Description
The first embodiment: this embodiment is described with reference to FIG. 1; the three-dimensional vision-based box workpiece weld detection method of this embodiment comprises the following steps:
placing a box body workpiece to be detected on a plane workbench, and collecting three-dimensional point cloud data of a frame of box body workpiece and the space of the plane workbench;
secondly, dividing point cloud data of a plane workbench from the three-dimensional point cloud data acquired in the first step by adopting a random sampling consistency method, and acquiring the rest point cloud data and a plane equation of a plane where the workbench is located;
clustering the residual point cloud data, and separating the point cloud data of the box body workpiece;
thirdly, preprocessing the box body workpiece point cloud data obtained in the second step by adopting a statistical filtering algorithm to obtain box body workpiece point cloud with outliers removed;
step four, axially aligning the box body workpiece point cloud with the outlier removed and a camera coordinate system through rotation transformation to obtain an axially aligned box body workpiece point cloud;
step five, obtaining an axial bounding box (AABB bounding box) of the box body workpiece point cloud after axial alignment obtained in the step four, and recording four upper vertexes of the axial bounding box as a, b, c and d;
step six, finding, in the point cloud enclosed by the axial bounding box, the point closest to each of the vertices a, b, c and d: the point closest to a is denoted a1, the point closest to b is denoted b1, the point closest to c is denoted c1, and the point closest to d is denoted d1;
step seven, transforming the points a1, b1, c1 and d1 back, through the rotation transformation, into the outlier-free box workpiece point cloud obtained in step three, obtaining the four upper vertices of the box workpiece, denoted a2, b2, c2 and d2 respectively;
step eight, projecting the points a2, b2, c2 and d2 onto the plane of the workbench to obtain the four lower vertices of the box workpiece;
step nine, taking the four upper vertices obtained in step seven and the four lower vertices obtained in step eight as the rough positions of the box workpiece vertices, using these rough vertex positions in turn as the start point and end point of the weld trajectory, and scanning in sequence with a line laser scanner from the start point of the weld trajectory, thereby obtaining accurate weld point cloud information.
In this embodiment, the Kinect2 device scans the welding space to determine the rough positions of the box workpiece vertices, which are used in turn as the start and end points of the weld trajectory; a line laser scanner then scans in sequence from these located points, obtaining accurate weld point cloud information while improving weld detection efficiency and laying a foundation for subsequent welding work.
The second embodiment: this embodiment differs from the first embodiment in that the specific process of step one is as follows:
placing the box workpiece to be detected on the plane workbench, and collecting one frame of three-dimensional point cloud data of the box workpiece and the plane workbench space with the Kinect2 device; the three-dimensional point cloud data are expressed in the camera coordinate system of the Kinect2 device, which takes the center of the Kinect2 depth camera as its origin; the positive X axis points to the left of the Kinect2 irradiation direction, the positive Y axis points directly above the irradiation direction, the positive Z axis coincides with the irradiation direction, and the X, Y and Z axes form a right-handed coordinate system.
The Kinect2 device is fixed obliquely above the plane workbench and collects one frame of three-dimensional point cloud data of the welding space from a top-down view.
Other steps and parameters are the same as those in the first embodiment.
The third embodiment: this embodiment differs from the second embodiment in that the specific process of step two is as follows:
step 2.1, setting a distance threshold d_th and a maximum number of iterations;
step 2.2, writing the plane equation of the plane in which the workbench lies, in the camera coordinate system, as Ax + By + Cz + D = 0, where A, B, C and D are the plane equation coefficients; randomly extracting three non-collinear points from the point cloud data acquired in step one and estimating the plane equation coefficients from them;
step 2.3, after the three non-collinear points are extracted, computing for each remaining point its distance d_i' to the plane of step 2.2, where d_i' denotes the distance from the i'-th remaining point to that plane; if d_i' < d_th, the i'-th remaining point is counted as lying in the plane of step 2.2; the total number of points lying in the plane is accumulated until the distance of every remaining point to the plane of step 2.2 has been computed;
step 2.4, repeating the processes of steps 2.2 and 2.3 until the set maximum number of iterations is reached, sorting the per-iteration totals of in-plane points in descending order, selecting the plane equation coefficients corresponding to the largest total as the optimal estimate, and obtaining from them the plane equation of the plane in which the workbench lies; deleting the in-plane points corresponding to the largest total from the point cloud collected in step one to obtain the remaining point cloud data;
step 2.5, performing Euclidean clustering on the remaining point cloud data obtained in step 2.4: the Euclidean distance between any two points of the remaining point cloud is checked, and two points are taken to belong to the same cluster exactly when their Euclidean distance is smaller than a given threshold; the box workpiece point cloud is thereby separated out (a sketch of the RANSAC fit of steps 2.1 to 2.4 follows below).
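For readers who want to experiment with this segmentation step, the sketch below implements the RANSAC plane fit of steps 2.1 to 2.4 in plain numpy. It is an illustration only, not the patent's implementation; the array name `cloud`, the function name, and the default parameter values are assumptions.

```python
import numpy as np

def ransac_plane(cloud, d_th=0.01, max_iters=300, seed=0):
    """Fit Ax + By + Cz + D = 0 to an (n, 3) point array by RANSAC
    (steps 2.1-2.4): sample 3 non-collinear points, count inliers
    within d_th, keep the best-supported plane."""
    rng = np.random.default_rng(seed)
    best_inliers = np.array([], dtype=int)
    best_plane = None
    for _ in range(max_iters):
        p1, p2, p3 = cloud[rng.choice(len(cloud), size=3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)        # (A, B, C) up to scale
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                            # collinear sample: skip
            continue
        normal /= norm
        D = -normal @ p1
        dist = np.abs(cloud @ normal + D)          # point-to-plane distances
        inliers = np.flatnonzero(dist < d_th)
        if len(inliers) > len(best_inliers):
            best_inliers, best_plane = inliers, np.append(normal, D)
    return best_plane, best_inliers
```

Deleting `best_inliers` from `cloud` gives the remaining point cloud of step 2.4, on which the Euclidean clustering of step 2.5 separates out the box workpiece.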
Other steps and parameters are the same as those in the second embodiment.
The fourth embodiment: this embodiment differs from the third embodiment in that the specific process of step three is as follows:
step 3.1, for a point (x_j, y_j, z_j) in the box workpiece point cloud, find its k nearest neighbors (x_i, y_i, z_i), i = 1, 2, ..., k, in the box workpiece point cloud, using a KD-tree during the search to improve efficiency, and compute the arithmetic mean of the distances from the k neighbors to (x_j, y_j, z_j), i.e. the neighborhood average distance \bar{d}_j of (x_j, y_j, z_j):

\bar{d}_j = \frac{1}{k} \sum_{i=1}^{k} \sqrt{(x_j - x_i)^2 + (y_j - y_i)^2 + (z_j - z_i)^2}

step 3.2, compute the neighborhood average distance of every point in the box workpiece point cloud by the method of step 3.1, then compute the mean \mu of these neighborhood average distances and their standard deviation \sigma, where n denotes the total number of points in the box workpiece point cloud:

\mu = \frac{1}{n} \sum_{j=1}^{n} \bar{d}_j, \qquad \sigma = \sqrt{\frac{1}{n} \sum_{j=1}^{n} \left(\bar{d}_j - \mu\right)^2}

step 3.3, set the confidence interval R = [\mu - p\sigma, \mu + p\sigma], where p is the weight of the standard deviation; points whose neighborhood average distance \bar{d}_j falls outside the confidence interval are regarded as outliers and filtered out of the box workpiece point cloud data, yielding the outlier-free box workpiece point cloud.

The k nearest neighbors (x_i, y_i, z_i) of the point (x_j, y_j, z_j) are the k points of the box workpiece point cloud closest to (x_j, y_j, z_j) (a sketch of this filter follows below).
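The statistical filter of this step can be sketched with scipy's KD-tree. A minimal illustration under the same definitions; `cloud` is an assumed (n, 3) array, and the defaults k = 50 and p = 1.0 are the example values used later in the embodiment:

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_filter(cloud, k=50, p=1.0):
    """Step-three outlier removal: discard points whose neighborhood
    average distance falls outside [mu - p*sigma, mu + p*sigma]."""
    tree = cKDTree(cloud)
    # query k + 1 neighbors: each point is returned as its own 0-distance neighbor
    dists, _ = tree.query(cloud, k=k + 1)
    d_bar = dists[:, 1:].mean(axis=1)      # neighborhood average distances
    mu, sigma = d_bar.mean(), d_bar.std()
    keep = np.abs(d_bar - mu) <= p * sigma
    return cloud[keep]
```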
Other steps and parameters are the same as those in the third embodiment.
The fifth embodiment: this embodiment differs from the fourth embodiment in that the specific process of step four is as follows:
step 4.1, align the outlier-free box workpiece point cloud with the Z axis of the camera coordinate system:

The normal vector \vec{n}_1 of the plane equation of the plane in which the workbench lies is:

\vec{n}_1 = (A, B, C)

The direction vector \vec{z} of the Z axis of the camera coordinate system is:

\vec{z} = (0, 0, 1)

The rotation axis \vec{n} and rotation angle \theta for rotating \vec{n}_1 onto \vec{z} are, respectively:

\vec{n} = (n_x, n_y, n_z) = \frac{\vec{n}_1 \times \vec{z}}{\|\vec{n}_1 \times \vec{z}\|}, \qquad \theta = \arccos\frac{\vec{n}_1 \cdot \vec{z}}{\|\vec{n}_1\|\,\|\vec{z}\|}

where n_x denotes the component of the rotation axis \vec{n} along the X axis of the camera coordinate system, n_y its component along the Y axis, and n_z its component along the Z axis.

Using the Rodrigues rotation formula, the rotation transformation matrix R_1 aligning the outlier-free box workpiece point cloud with the Z axis of the camera coordinate system is:

R_1 = \cos\theta\, I + (1 - \cos\theta)\, \vec{n}\vec{n}^{\mathsf{T}} + \sin\theta \begin{pmatrix} 0 & -n_z & n_y \\ n_z & 0 & -n_x \\ -n_y & n_x & 0 \end{pmatrix}

Applying the rotation transformation matrix R_1 to the outlier-free box workpiece point cloud S aligns it with the Z axis of the camera coordinate system and yields the once-transformed point cloud S':

S' = R_1 S

step 4.2, align the once-transformed point cloud S' with the X and Y axes of the camera coordinate system:

A side-plane equation of the once-transformed point cloud S' is fitted by the random sample consensus method (the processes of steps 2.1 to 2.4): A_1 x + B_1 y + C_1 z + D_1 = 0, where A_1, B_1, C_1 and D_1 are the side-plane equation coefficients; the normal vector \vec{n}_2 of the side-plane equation is:

\vec{n}_2 = (A_1, B_1, C_1)

The rotation angle \theta_1 required to align the once-transformed point cloud S' with the X and Y axes of the camera coordinate system is:

\theta_1 = \operatorname{atan2}(A_1, B_1)

where atan2(A_1, B_1) denotes the angle, within the XOY coordinate plane of the camera coordinate system, between the positive X axis and the ray from the origin O in the direction (B_1, A_1).

The rotation transformation matrix R_2 aligning the once-transformed point cloud S' with the X and Y axes of the camera coordinate system is:

R_2 = \begin{pmatrix} \cos\theta_1 & -\sin\theta_1 & 0 \\ \sin\theta_1 & \cos\theta_1 & 0 \\ 0 & 0 & 1 \end{pmatrix}

Applying the rotation transformation matrix R_2 to the once-transformed point cloud S' yields the twice-transformed point cloud S'', which is taken as the axially aligned box workpiece point cloud (a numpy sketch of both stages follows below):

S'' = R_2 S'
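A compact numpy sketch of the two-stage alignment above, illustrative only. It assumes the table-plane coefficients from step two, a caller-supplied side-plane fitting routine `side_plane_fn` standing in for the RANSAC fit of step 4.2, and a table normal not already parallel to Z:

```python
import numpy as np

def rodrigues(axis, theta):
    """Rotation matrix about a unit axis by angle theta (Rodrigues formula)."""
    nx, ny, nz = axis
    K = np.array([[0.0, -nz, ny], [nz, 0.0, -nx], [-ny, nx, 0.0]])
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(axis, axis)
            + np.sin(theta) * K)

def axial_align(cloud, table_plane, side_plane_fn):
    """Step four: rotate the cloud so the table normal matches Z (4.1),
    then rotate about Z so a side plane matches the X/Y axes (4.2)."""
    n1 = np.asarray(table_plane[:3], dtype=float)
    z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(n1, z)
    axis /= np.linalg.norm(axis)           # assumes n1 is not parallel to z
    theta = np.arccos(n1 @ z / np.linalg.norm(n1))
    R1 = rodrigues(axis, theta)
    S1 = cloud @ R1.T                      # once-transformed cloud S'
    A1, B1, C1, _ = side_plane_fn(S1)      # RANSAC side-plane fit (step 4.2)
    theta1 = np.arctan2(A1, B1)
    c, s = np.cos(theta1), np.sin(theta1)
    R2 = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return S1 @ R2.T, R1, R2               # twice-transformed cloud S'' + matrices
```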
other steps and parameters are the same as those in the fourth embodiment.
The sixth embodiment: this embodiment differs from the fifth embodiment in that the specific process of step five is as follows:
step 5.1, traverse the axially aligned box workpiece point cloud and obtain its maximum and minimum coordinate values along the X, Y and Z axes of the camera coordinate system: the maximum and minimum coordinates along the X axis are denoted x_max and x_min, those along the Y axis are denoted y_max and y_min, and those along the Z axis are denoted z_max and z_min;
step 5.2, combine the maximum and minimum coordinate values along the X, Y and Z axes from step 5.1; the 8 distinct combinations serve as the eight vertices of the axial bounding box of the axially aligned box workpiece point cloud, from which the axial bounding box is constructed; the coordinates of its four upper vertices are a(x_max, y_min, z_max), b(x_max, y_max, z_max), c(x_min, y_max, z_max) and d(x_min, y_min, z_max), as sketched below.
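The axial bounding box reduces to coordinate-wise minima and maxima; a minimal sketch (vertex ordering follows the text, `cloud` is an assumed (n, 3) array):

```python
import numpy as np

def aabb_upper_vertices(cloud):
    """Step five: upper vertices a, b, c, d of the axis-aligned
    bounding box, in the order given in the text."""
    xmin, ymin, zmin = cloud.min(axis=0)
    xmax, ymax, zmax = cloud.max(axis=0)
    return np.array([[xmax, ymin, zmax],   # a
                     [xmax, ymax, zmax],   # b
                     [xmin, ymax, zmax],   # c
                     [xmin, ymin, zmax]])  # d
```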
The other steps and parameters are the same as those in the fifth embodiment.
The seventh embodiment: this embodiment differs from the sixth embodiment in that in step six, a KD-tree is used to find, within the point cloud enclosed by the axial bounding box, the points closest to the vertices a, b, c and d.
Other steps and parameters are the same as those in the sixth embodiment.
During Euclidean clustering, statistical filtering, and closest-point search, the method uses a KD-tree structure to improve search efficiency and speed up program execution.
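As an illustration of the closest-point query of step six, scipy's cKDTree stands in here for whatever KD-tree implementation is used; the function name and array names are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def closest_cloud_points(cloud, vertices):
    """Step six: for each bounding-box vertex, return the nearest point
    actually belonging to the cloud (a1, b1, c1, d1)."""
    _, idx = cKDTree(cloud).query(vertices)   # one nearest neighbor each
    return cloud[idx]
```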
The eighth embodiment: this embodiment differs from the seventh embodiment in that the specific process of step seven is as follows:
According to the rotation transformation relations of step four, the point a_1(x_{a1}, y_{a1}, z_{a1}) is transformed back into the outlier-free box workpiece point cloud obtained in step three, giving the upper vertex a_2(x_{a2}, y_{a2}, z_{a2}) of the box workpiece:

(x_{a2}, y_{a2}, z_{a2})^{\mathsf{T}} = R_1^{-1} R_2^{-1} (x_{a1}, y_{a1}, z_{a1})^{\mathsf{T}}

where (x_{a1}, y_{a1}, z_{a1}) are the coordinates of point a_1 in the camera coordinate system, and (x_{a2}, y_{a2}, z_{a2}) are the coordinates of point a_2 in the camera coordinate system.

In the same way, point b_1 gives the corresponding upper vertex b_2(x_{b2}, y_{b2}, z_{b2}) of the box workpiece, point c_1 gives the upper vertex c_2(x_{c2}, y_{c2}, z_{c2}), and point d_1 gives the upper vertex d_2(x_{d2}, y_{d2}, z_{d2}), as sketched below.
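A sketch of the back-transformation, using the rotation matrices R_1 and R_2 named in step four. Because both matrices are orthonormal the inverses equal the transposes, but the general inverse is written for clarity; names are illustrative:

```python
import numpy as np

def back_transform(points, R1, R2):
    """Step seven: undo S'' = R2 @ R1 @ S, mapping a1..d1 from the axially
    aligned frame back to the original camera frame (a2..d2)."""
    R_back = np.linalg.inv(R1) @ np.linalg.inv(R2)
    return points @ R_back.T
```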
Other steps and parameters are the same as those in the seventh embodiment.
The ninth embodiment: this embodiment differs from the eighth embodiment in that the specific process of step eight is as follows:
Let the projection of point a_2(x_{a2}, y_{a2}, z_{a2}) be a_3(x_{a3}, y_{a3}, z_{a3}); the vector \overrightarrow{a_2 a_3} is parallel to the normal vector \vec{n}_1 of the workbench plane, and point a_3(x_{a3}, y_{a3}, z_{a3}) lies in the plane of the workbench. Solving these two conditions simultaneously gives:

(x_{a3}, y_{a3}, z_{a3}) = (x_{a2}, y_{a2}, z_{a2}) - \frac{A x_{a2} + B y_{a2} + C z_{a2} + D}{A^2 + B^2 + C^2}\,(A, B, C)

where (x_{a3}, y_{a3}, z_{a3}) are the coordinates of point a_3 in the camera coordinate system.

In the same way, b_2(x_{b2}, y_{b2}, z_{b2}) gives the corresponding projection b_3(x_{b3}, y_{b3}, z_{b3}), c_2(x_{c2}, y_{c2}, z_{c2}) gives the projection c_3(x_{c3}, y_{c3}, z_{c3}), and d_2(x_{d2}, y_{d2}, z_{d2}) gives the projection d_3(x_{d3}, y_{d3}, z_{d3}); points a_3, b_3, c_3 and d_3 are taken as the four lower vertices of the box workpiece.
The projection plane of the present embodiment is a plane expressed by the plane equation obtained in step two.
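The projection formula admits a direct vectorized sketch, assuming `plane` = (A, B, C, D) from step two; illustrative only:

```python
import numpy as np

def project_to_plane(points, plane):
    """Step eight: orthogonal projection of a2..d2 onto the workbench plane
    Ax + By + Cz + D = 0, yielding the lower vertices a3..d3."""
    n = np.asarray(plane[:3], dtype=float)
    D = float(plane[3])
    t = (points @ n + D) / (n @ n)         # signed offset along the normal
    return points - np.outer(t, n)
```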
Other steps and parameters are the same as those in the eighth embodiment.
The tenth embodiment: this embodiment differs from any of the first through ninth embodiments in that the specific process of step nine is as follows:
determining a conversion relation between a mechanical arm coordinate system of a mechanical arm where the line laser scanner is located and a camera coordinate system of Kinect2 equipment by adopting a system calibration method, converting the rough position of the top point of the box workpiece into the mechanical arm coordinate system, and obtaining the converted top point position of the box workpiece;
and sequentially taking the converted top point positions of the box body workpieces as the starting point and the end point of the welding seam track, and sequentially scanning from the starting point of the welding seam track by using a line laser scanner so as to obtain accurate welding seam point cloud information.
The system calibration method used in this embodiment is hand-eye calibration, which falls into two categories: Eye-in-Hand and Eye-to-Hand. The positional relationship between the robot arm and the Kinect2 device belongs to Eye-to-Hand (that is, the Kinect2 device is mounted at a fixed position outside the robot arm body and does not move with the arm while the arm works). With the Eye-to-Hand calibration method, the transformation between the camera coordinate system and the robot arm coordinate system is obtained, and the end of the arm can then be moved to the rough positions of the box workpiece vertices.
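Applying the calibrated Eye-to-Hand result is a single homogeneous transform. In the sketch below, `T_cam_to_robot` is an assumed 4x4 matrix produced by the calibration, and the function name is illustrative:

```python
import numpy as np

def to_robot_frame(vertices_cam, T_cam_to_robot):
    """Step nine: map rough vertex positions from the Kinect2 camera frame
    into the robot-arm frame with the calibrated homogeneous transform."""
    homo = np.hstack([vertices_cam, np.ones((len(vertices_cam), 1))])
    return (homo @ T_cam_to_robot.T)[:, :3]   # scan start/end points for the arm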
Other steps and parameters are the same as those in the first to ninth embodiments.
Examples
The following examples were used to demonstrate the beneficial effects of the present invention:
in this embodiment, the method for detecting the welding seam of the box workpiece based on three-dimensional vision is adopted to detect the welding seam of the box workpiece on the plane of the workbench, and the method is performed according to the following steps:
step one, collecting three-dimensional point cloud data of a frame of welding space by using Kinect2 equipment, as shown in FIG. 2;
step two, applying a random sample consensus (RANSAC) algorithm to the three-dimensional point cloud data acquired in step one, with the maximum number of iterations set to 300 and a distance threshold d_th, segmenting out the point cloud data of the workbench plane and obtaining the workbench plane equation: -0.51861x + 0.59566y + 0.613379z - 1.01109 = 0; then performing Euclidean clustering on the remaining point cloud with a Euclidean distance threshold of 0.1 to separate out the box workpiece point cloud, as shown in FIG. 3;
thirdly, a statistical filtering algorithm is adopted, the number k of the adjacent points is set to be 50, the standard deviation weight p is set to be 1.0, and the box body workpiece point cloud obtained in the second step is preprocessed to remove outliers;
step four, axially aligning the box body workpiece point cloud obtained in the step three after the outlier is removed with a camera coordinate system of Kinect2 through rotation transformation;
step five, solving the box body workpiece point cloud after axial alignment in the step four to obtain an integral axial bounding box, and recording four upper vertexes of the bounding box as a, b, c and d;
step six, finding, within the point cloud enclosed by the bounding box, the points closest to a, b, c and d, denoted a1, b1, c1 and d1 respectively, as shown in FIG. 4;
step seven, according to the foregoing rotation transformation relations, transforming a1, b1, c1 and d1 back into the outlier-free box workpiece point cloud space to obtain the four upper vertices of the box workpiece, denoted a2, b2, c2 and d2 respectively;
step eight, projecting a2, b2, c2 and d2 onto the plane of the workbench, thereby obtaining the four lower vertices a3, b3, c3 and d3 of the box workpiece, as shown in FIG. 5;
and step nine, taking the rough positions of the vertexes of the box body workpieces determined in the step seven and the step eight as the starting point and the end point of the welding seam track in sequence, and starting from the positioning points by using a line laser scanner to scan in sequence so as to obtain accurate welding seam point cloud information.
The above-described calculation examples of the present invention are merely to explain the calculation model and the calculation flow of the present invention in detail, and are not intended to limit the embodiments of the present invention. It will be apparent to those skilled in the art that other variations and modifications of the present invention can be made based on the above description, and it is not intended to be exhaustive or to limit the invention to the precise form disclosed, and all such modifications and variations are possible and contemplated as falling within the scope of the invention.

Claims (9)

1. A method for detecting a box workpiece weld joint based on three-dimensional vision is characterized by comprising the following steps:
placing a box body workpiece to be detected on a plane workbench, and collecting three-dimensional point cloud data of a frame of box body workpiece and the space of the plane workbench;
secondly, dividing point cloud data of a plane workbench from the three-dimensional point cloud data acquired in the first step to obtain the rest point cloud data and a plane equation of the plane where the workbench is located;
clustering the residual point cloud data, and separating the point cloud data of the box body workpiece;
step three, preprocessing the box body workpiece point cloud data obtained in the step two to obtain box body workpiece point cloud with outliers removed;
step four, axially aligning the box body workpiece point cloud with the outlier removed and a camera coordinate system through rotation transformation to obtain an axially aligned box body workpiece point cloud;
step five, solving the axial bounding box of the box body workpiece point cloud after axial alignment obtained in the step four, and recording four upper vertexes of the axial bounding box as a, b, c and d;
step six, finding, in the point cloud enclosed by the axial bounding box, the point closest to each of the vertices a, b, c and d: the point closest to a is denoted a1, the point closest to b is denoted b1, the point closest to c is denoted c1, and the point closest to d is denoted d1;
step seven, transforming the points a1, b1, c1 and d1 back, through the rotation transformation, into the outlier-free box workpiece point cloud obtained in step three, obtaining the four upper vertices of the box workpiece, denoted a2, b2, c2 and d2 respectively;
step eight, projecting the points a2, b2, c2 and d2 onto the plane of the workbench to obtain the four lower vertices of the box workpiece;
step nine, taking the four upper vertices obtained in step seven and the four lower vertices obtained in step eight as the rough positions of the box workpiece vertices, using these rough vertex positions in turn as the start point and end point of the weld trajectory, and scanning in sequence with a line laser scanner from the start point of the weld trajectory, thereby obtaining accurate weld point cloud information;
the concrete process of the ninth step is as follows:
determining a conversion relation between a mechanical arm coordinate system of a mechanical arm where the line laser scanner is located and a camera coordinate system of Kinect2 equipment by adopting a system calibration method, converting the rough position of the top point of the box workpiece into the mechanical arm coordinate system, and obtaining the converted top point position of the box workpiece;
and sequentially taking the converted top point positions of the box body workpieces as the starting point and the end point of the welding seam track, and sequentially scanning from the starting point of the welding seam track by using a line laser scanner so as to obtain accurate welding seam point cloud information.
2. The method for detecting the weld joint of the box workpiece based on the three-dimensional vision according to claim 1, wherein the specific process of the first step is as follows:
placing a box workpiece to be detected on a plane workbench, and collecting three-dimensional point cloud data of a frame of box workpiece and the space of the plane workbench by using Kinect2 equipment; the three-dimensional point cloud data is represented in the camera coordinate system of the Kinect2 device.
3. The method for detecting the weld joint of the box workpiece based on the three-dimensional vision according to claim 2, wherein the specific process of the second step is as follows:
step 2.1, setting a distance threshold d_th and a maximum number of iterations;
step 2.2, writing the plane equation of the plane in which the workbench lies as Ax + By + Cz + D = 0, where A, B, C and D are the plane equation coefficients; randomly extracting three non-collinear points from the point cloud data acquired in step one and estimating the plane equation coefficients from them;
step 2.3, after the three non-collinear points are extracted, computing for each remaining point its distance d_i' to the plane of step 2.2, where d_i' denotes the distance from the i'-th remaining point to that plane; if d_i' < d_th, the i'-th remaining point is counted as lying in the plane of step 2.2; the total number of points lying in the plane is accumulated until the distance of every remaining point to the plane of step 2.2 has been computed;
step 2.4, repeating the processes of steps 2.2 and 2.3 until the set maximum number of iterations is reached, sorting the per-iteration totals of in-plane points in descending order, selecting the plane equation coefficients corresponding to the largest total as the optimal estimate, and obtaining from them the plane equation of the plane in which the workbench lies; deleting the in-plane points corresponding to the largest total from the point cloud collected in step one to obtain the remaining point cloud data;
step 2.5, performing Euclidean clustering on the remaining point cloud data obtained in step 2.4 to separate out the box workpiece point cloud.
4. The method for detecting the weld joint of the box workpiece based on the three-dimensional vision according to claim 3, wherein the specific process of the third step is as follows:
step 3.1, for a point (x_j, y_j, z_j) in the box workpiece point cloud, find its k nearest neighbors (x_i, y_i, z_i), i = 1, 2, ..., k, in the box workpiece point cloud, and compute the arithmetic mean of the distances from the k neighbors to (x_j, y_j, z_j), i.e. the neighborhood average distance \bar{d}_j of (x_j, y_j, z_j):

\bar{d}_j = \frac{1}{k} \sum_{i=1}^{k} \sqrt{(x_j - x_i)^2 + (y_j - y_i)^2 + (z_j - z_i)^2}

step 3.2, compute the neighborhood average distance of every point in the box workpiece point cloud by the method of step 3.1, then compute the mean \mu of these neighborhood average distances and their standard deviation \sigma, where n denotes the total number of points in the box workpiece point cloud:

\mu = \frac{1}{n} \sum_{j=1}^{n} \bar{d}_j, \qquad \sigma = \sqrt{\frac{1}{n} \sum_{j=1}^{n} \left(\bar{d}_j - \mu\right)^2}

step 3.3, set the confidence interval R = [\mu - p\sigma, \mu + p\sigma], where p is the weight of the standard deviation; points whose neighborhood average distance \bar{d}_j falls outside the confidence interval are regarded as outliers and filtered out of the box workpiece point cloud data, yielding the outlier-free box workpiece point cloud.
5. The method for detecting the weld joint of the box workpiece based on the three-dimensional vision according to claim 4, wherein the specific process of the fourth step is as follows:
step 4.1, align the outlier-free box workpiece point cloud with the Z axis of the camera coordinate system:

the normal vector \vec{n}_1 of the plane equation of the plane in which the workbench lies is:

\vec{n}_1 = (A, B, C)

the direction vector \vec{z} of the Z axis of the camera coordinate system is:

\vec{z} = (0, 0, 1)

the rotation axis \vec{n} and rotation angle \theta for rotating \vec{n}_1 onto \vec{z} are, respectively:

\vec{n} = (n_x, n_y, n_z) = \frac{\vec{n}_1 \times \vec{z}}{\|\vec{n}_1 \times \vec{z}\|}, \qquad \theta = \arccos\frac{\vec{n}_1 \cdot \vec{z}}{\|\vec{n}_1\|\,\|\vec{z}\|}

where n_x denotes the component of the rotation axis \vec{n} along the X axis of the camera coordinate system, n_y its component along the Y axis, and n_z its component along the Z axis;

using the Rodrigues rotation formula, the rotation transformation matrix R_1 aligning the outlier-free box workpiece point cloud with the Z axis of the camera coordinate system is obtained as:

R_1 = \cos\theta\, I + (1 - \cos\theta)\, \vec{n}\vec{n}^{\mathsf{T}} + \sin\theta \begin{pmatrix} 0 & -n_z & n_y \\ n_z & 0 & -n_x \\ -n_y & n_x & 0 \end{pmatrix}

applying the rotation transformation matrix R_1 to the outlier-free box workpiece point cloud S aligns it with the Z axis of the camera coordinate system and yields the once-transformed point cloud S':

S' = R_1 S

step 4.2, align the once-transformed point cloud S' with the X and Y axes of the camera coordinate system:

a side-plane equation of the once-transformed point cloud S' is solved by the random sample consensus method: A_1 x + B_1 y + C_1 z + D_1 = 0, where A_1, B_1, C_1 and D_1 are the side-plane equation coefficients; the normal vector \vec{n}_2 of the side-plane equation is:

\vec{n}_2 = (A_1, B_1, C_1)

the rotation angle \theta_1 required to align the once-transformed point cloud S' with the X and Y axes of the camera coordinate system is:

\theta_1 = \operatorname{atan2}(A_1, B_1)

the rotation transformation matrix R_2 aligning the once-transformed point cloud S' with the X and Y axes of the camera coordinate system is:

R_2 = \begin{pmatrix} \cos\theta_1 & -\sin\theta_1 & 0 \\ \sin\theta_1 & \cos\theta_1 & 0 \\ 0 & 0 & 1 \end{pmatrix}

applying the rotation transformation matrix R_2 to the once-transformed point cloud S' yields the twice-transformed point cloud S'', which is taken as the axially aligned box workpiece point cloud:

S'' = R_2 S'
6. the method for detecting the welding seam of the box workpiece based on the three-dimensional vision as claimed in claim 5, wherein the concrete process of the fifth step is as follows:
step 5.1, traverse the axially aligned box workpiece point cloud and obtain its maximum and minimum coordinate values along the X, Y and Z axes of the camera coordinate system: the maximum and minimum coordinates along the X axis are denoted x_max and x_min, those along the Y axis are denoted y_max and y_min, and those along the Z axis are denoted z_max and z_min;
step 5.2, combine the maximum and minimum coordinate values along the X, Y and Z axes from step 5.1; the 8 distinct combinations serve as the eight vertices of the axial bounding box of the axially aligned box workpiece point cloud, wherein the coordinates of the four upper vertices of the axial bounding box are: a(x_max, y_min, z_max), b(x_max, y_max, z_max), c(x_min, y_max, z_max) and d(x_min, y_min, z_max).
7. The method for detecting the weld of the box workpiece based on three-dimensional vision as claimed in claim 6, wherein in step six a KD-tree is used to find, within the point cloud enclosed by the axial bounding box, the points closest to the vertices a, b, c and d.
8. The method for detecting the weld joint of the box workpiece based on the three-dimensional vision according to claim 7, wherein the specific process of the seventh step is as follows:
according to the rotation transformation relations of step four, the point a_1(x_{a1}, y_{a1}, z_{a1}) is transformed back into the outlier-free box workpiece point cloud obtained in step three, giving the upper vertex a_2(x_{a2}, y_{a2}, z_{a2}) of the box workpiece:

(x_{a2}, y_{a2}, z_{a2})^{\mathsf{T}} = R_1^{-1} R_2^{-1} (x_{a1}, y_{a1}, z_{a1})^{\mathsf{T}}

where: (x_{a1}, y_{a1}, z_{a1}) are the coordinates of point a_1 in the camera coordinate system, and (x_{a2}, y_{a2}, z_{a2}) are the coordinates of point a_2 in the camera coordinate system;

in the same way, point b_1 gives the corresponding upper vertex b_2(x_{b2}, y_{b2}, z_{b2}) of the box workpiece, point c_1 gives the upper vertex c_2(x_{c2}, y_{c2}, z_{c2}), and point d_1 gives the upper vertex d_2(x_{d2}, y_{d2}, z_{d2}).
9. The method for detecting the welding seam of the box workpiece based on the three-dimensional vision as claimed in claim 8, wherein the specific process of the step eight is as follows:
let the projection of point a_2(x_{a2}, y_{a2}, z_{a2}) be a_3(x_{a3}, y_{a3}, z_{a3}); the vector \overrightarrow{a_2 a_3} is parallel to the normal vector \vec{n}_1 of the workbench plane, and point a_3(x_{a3}, y_{a3}, z_{a3}) lies in the plane of the workbench; solving these two conditions simultaneously gives:

(x_{a3}, y_{a3}, z_{a3}) = (x_{a2}, y_{a2}, z_{a2}) - \frac{A x_{a2} + B y_{a2} + C z_{a2} + D}{A^2 + B^2 + C^2}\,(A, B, C)

where: (x_{a3}, y_{a3}, z_{a3}) are the coordinates of point a_3 in the camera coordinate system;

in the same way, b_2(x_{b2}, y_{b2}, z_{b2}) gives the corresponding projection b_3(x_{b3}, y_{b3}, z_{b3}), c_2(x_{c2}, y_{c2}, z_{c2}) gives the projection c_3(x_{c3}, y_{c3}, z_{c3}), and d_2(x_{d2}, y_{d2}, z_{d2}) gives the projection d_3(x_{d3}, y_{d3}, z_{d3}); points a_3(x_{a3}, y_{a3}, z_{a3}), b_3(x_{b3}, y_{b3}, z_{b3}), c_3(x_{c3}, y_{c3}, z_{c3}) and d_3(x_{d3}, y_{d3}, z_{d3}) are taken as the four lower vertices of the box workpiece.
CN201910774689.6A 2019-08-21 2019-08-21 Three-dimensional vision-based box workpiece weld joint detection method Active CN110455187B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910774689.6A CN110455187B (en) 2019-08-21 2019-08-21 Three-dimensional vision-based box workpiece weld joint detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910774689.6A CN110455187B (en) 2019-08-21 2019-08-21 Three-dimensional vision-based box workpiece weld joint detection method

Publications (2)

Publication Number Publication Date
CN110455187A CN110455187A (en) 2019-11-15
CN110455187B true CN110455187B (en) 2020-06-09

Family

ID=68488299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910774689.6A Active CN110455187B (en) 2019-08-21 2019-08-21 Three-dimensional vision-based box workpiece weld joint detection method

Country Status (1)

Country Link
CN (1) CN110455187B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179221B (en) * 2019-12-09 2024-02-09 中建科工集团有限公司 Method, equipment and storage medium for detecting welding groove
CN113177983B (en) * 2021-03-25 2022-10-18 埃夫特智能装备股份有限公司 Fillet weld positioning method based on point cloud geometric features
CN113188442B (en) * 2021-04-30 2022-03-15 哈尔滨工业大学 Multi-angle point cloud measuring tool for seat furniture and splicing method thereof
CN113793344B (en) * 2021-08-31 2023-09-29 无锡砺成智能装备有限公司 Impeller weld joint positioning method based on three-dimensional point cloud
CN114170176B (en) * 2021-12-02 2024-04-02 南昌大学 Automatic detection method for welding seam of steel grating based on point cloud
CN114419046B (en) * 2022-03-30 2022-06-28 季华实验室 Method and device for recognizing weld of H-shaped steel, electronic equipment and storage medium
CN114782526B (en) * 2022-06-22 2022-09-02 季华实验室 Welding seam track calculation method and device of H-shaped steel, electronic equipment and storage medium
CN115439644B (en) * 2022-08-19 2023-08-08 广东领慧数字空间科技有限公司 Similar point cloud data alignment method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109865919A (en) * 2019-04-09 2019-06-11 云南安视智能设备有限公司 A kind of real-time weld seam tracing system of right angle welding robot line laser

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101961819B (en) * 2009-07-22 2013-10-30 中国科学院沈阳自动化研究所 Device for realizing laser welding and seam tracking and control method thereof
NL2015839B1 (en) * 2015-11-23 2017-06-07 Exner Ingenieurstechniek B V A method of, as well as an industrial robot for performing a processing step at a work piece.
KR102634535B1 (en) * 2016-12-29 2024-02-07 한화오션 주식회사 Method for recognizing touch teaching point of workpiece using point cloud analysis
CN107914084A (en) * 2017-11-16 2018-04-17 惠州市契贝科技有限公司 Curved sheets and its method for laser welding, laser welding system
CN108145314A (en) * 2017-12-29 2018-06-12 南京理工大学 One kind be welded plant machinery people at a high speed identification weld seam Intelligent welding system and method
CN109541997B (en) * 2018-11-08 2020-06-02 东南大学 Spraying robot rapid intelligent programming method for plane/approximate plane workpiece
CN109978865A (en) * 2019-03-28 2019-07-05 中核建中核燃料元件有限公司 A kind of method, apparatus for the detection of nuclear fuel rod face of weld

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109865919A (en) * 2019-04-09 2019-06-11 云南安视智能设备有限公司 A kind of real-time weld seam tracing system of right angle welding robot line laser

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yutao Wang et al., "Modeling outlier formation in scanning reflective surfaces using a laser stripe scanner", Measurement, 2014-12-31, full text *

Also Published As

Publication number Publication date
CN110455187A (en) 2019-11-15

Similar Documents

Publication Publication Date Title
CN110455187B (en) Three-dimensional vision-based box workpiece weld joint detection method
CN110370286B (en) Method for identifying rigid body space position of dead axle motion based on industrial robot and monocular camera
CN110014426B (en) Method for grabbing symmetrically-shaped workpieces at high precision by using low-precision depth camera
CN106643551B (en) A kind of quick scanning means of blade shape and method
CN101359400B (en) Process for positioning spatial position of pipe mouth based on vision
CN103192397B (en) Vision robot's off-line programing method and system
CN107214703A (en) A kind of robot self-calibrating method of view-based access control model auxiliary positioning
CN114571153A (en) Weld joint identification and robot weld joint tracking method based on 3D point cloud
CN112509063A (en) Mechanical arm grabbing system and method based on edge feature matching
CN116402866A (en) Point cloud-based part digital twin geometric modeling and error assessment method and system
CN111598172B (en) Dynamic target grabbing gesture rapid detection method based on heterogeneous depth network fusion
CN113532277B (en) Method and system for detecting plate-shaped irregular curved surface workpiece
CN104766333A (en) Vehicle door point welding robot path correction method based on stereoscopic vision
CN112017293A (en) Method for measuring geometric initial defects of round steel pipe
CN116740060B (en) Method for detecting size of prefabricated part based on point cloud geometric feature extraction
CN118015004B (en) Laser cutting scanning system and method
CN113799130B (en) Robot pose calibration method in man-machine cooperation assembly
CN113340215B (en) Plane offset on-line measuring method based on parallel constraint
CN110992416A (en) High-reflection-surface metal part pose measurement method based on binocular vision and CAD model
CN117824502A (en) Laser three-dimensional scanning-based non-contact detection method for assembling complex assembled workpiece
Wang et al. A binocular vision method for precise hole recognition in satellite assembly systems
CN110853103B (en) Data set manufacturing method for deep learning attitude estimation
CN106447781B (en) It is a kind of based on Minkowski and towards the collision checking method of automatic assembling
Sun et al. Precision work-piece detection and measurement combining top-down and bottom-up saliency
CN116385356A (en) Method and system for extracting regular hexagonal hole features based on laser vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant