CN113192029A - Welding seam identification method based on ToF - Google Patents

Welding seam identification method based on ToF

Info

Publication number
CN113192029A
CN113192029A
Authority
CN
China
Prior art keywords
image
coordinate system
weld
welding seam
tof
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110472422.9A
Other languages
Chinese (zh)
Inventor
商亮亮
张浩
泮佳俊
张帆
李佩齐
刘腾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong University
Original Assignee
Nantong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong University
Priority to CN202110472422.9A
Publication of CN113192029A
Current legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30152Solder

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a ToF-based weld identification method, which comprises the following steps: acquiring an original weld image; preprocessing the amplitude image; performing local-threshold binarization on the preprocessed amplitude image to obtain a binarized image; extracting edge features from the binarized image; performing Radon transformation on the edge image and identifying the weld image based on the appearance conditions of the weld; acquiring two-dimensional information of the weld from the identified weld image and solving the three-dimensional coordinates of the weld by combining it with the corresponding depth information; constructing the conversion relations among the world coordinate system, camera coordinate system, image coordinate system and pixel coordinate system; and converting the three-dimensional coordinates of the weld into spatial coordinates in the world coordinate system according to these conversion relations. The method can improve the efficiency and accuracy of weld identification.

Description

Welding seam identification method based on ToF
Technical Field
The invention relates to the technical field of weld joint identification, in particular to a weld joint identification method based on ToF.
Background
Welding is one of the essential basic manufacturing technologies in the machining industry and is widely applied in modern manufacturing, for example in marine shipbuilding, aerospace and rail transit. Now that welding has entered the era of intelligent manufacturing, traditional manual welding can no longer meet the precision and efficiency requirements of the relevant equipment, and the automation and intellectualization of welding technology have become the mainstream of the market.
The common automatic welding method is mainly "manual teaching and playback": technicians must still control the welding robot through a teach pendant to complete the weld. That is, by recording the taught path or trajectory, the welding robot can repeat the operation, but the type and position of the weld must be determined before welding starts. Therefore, when the number of welds is large or the welding process is complex, manual teaching can hardly satisfy the welding requirements.
In order to realize high-precision intelligent welding, an automatic welding method usually needs to be combined with a weld identification technology. Common weld identification technologies are classified into contact and non-contact types, and the non-contact technologies based on machine vision are widely applied in industrial production. However, non-contact weld recognition represented mainly by machine vision tends to be complex: image noise must be removed by multiple filtering stages, a complex weld-feature extraction algorithm is required to recognize the weld, and accurate three-dimensional weld information is difficult to obtain. Contact-type weld identification, for its part, is not widely used owing to defects such as low precision, high failure rate and the inability to distinguish obstacles on the weld surface.
Conventional three-dimensional imaging techniques include binocular stereo vision and structured light. Patent CN112059363A discloses an unmanned wall-climbing welding robot based on binocular vision measurement and a welding method thereof; that measurement method can accurately guide the welding robot to the weld position. Although binocular stereo vision offers high precision at low cost, it is computationally heavy, demands sophisticated algorithms, and its operating environment is somewhat restricted; the main challenge is the correspondence problem, i.e. given a point in one camera's image, finding the same point in the other camera's image. Until this correspondence is established, the disparity, and hence the three-dimensional information of the target, cannot be determined. In addition, patent CN108335286A discloses an online weld-forming visual inspection method based on double-line structured light. The structured-light method actively projects an optical signal with specific characteristics onto the surface of the measured object through a projector. The concave-convex surface of the object deforms, i.e. modulates, this optical signal; the modulated signal is then collected again by a camera, and the depth information of the target is determined by the triangulation principle. Time-of-flight ranging was first applied in ultrasonic range finders. Its principle is as follows: modulated infrared light is emitted toward the object to be measured and received back at a receiving end, and the depth information of the object is obtained quickly and accurately by analyzing the phase difference and time difference between the emitted and received light. Combined with conventional camera imaging, the three-dimensional information of the object can be obtained. With the development of precision electronics and microelectronics, the problems of low resolution, high noise and high cost in cameras based on the ToF technology have been resolved, and time-of-flight ranging based on high-performance optoelectronics has since been widely applied in fields such as robot navigation, autonomous driving, super-resolution imaging, non-line-of-sight imaging and industrial inspection; it also shows great potential in machine vision. A review of the literature shows that weld identification technology based on the time-of-flight ranging method has yet to be widely developed and applied domestically; its advantage is that once the weld centerline is obtained, the depth information of the corresponding weld can be read directly from the depth image.
Disclosure of Invention
In view of the above, the present invention is directed to a ToF-based weld joint identification method, which can improve the efficiency and accuracy of weld joint identification.
In order to achieve the purpose, the invention provides the following technical scheme:
a welding seam identification method based on ToF comprises the following steps:
step S1, acquiring an original weld image of the weldment to be processed through a camera based on the ToF technology, wherein the original weld image comprises an amplitude image and a depth image;
step S2, preprocessing the amplitude image obtained in step S1 to obtain a preprocessed amplitude image;
step S3, performing local-threshold binarization on the preprocessed amplitude image obtained in step S2 to obtain a corresponding binarized image;
step S4, extracting the edge features of the binarized image through a Gabor filter to acquire an edge image of the weld;
step S5, performing Radon transformation on the edge image acquired in step S4 to obtain a horizontally corrected edge image, and identifying the weld image based on the appearance conditions of the weld;
step S6, acquiring two-dimensional information of the weld from the weld image identified in step S5, and solving the three-dimensional coordinates of the weld by combining it with the depth information acquired in step S1;
step S7, constructing the conversion relations among the world coordinate system, camera coordinate system, image coordinate system and pixel coordinate system;
and step S8, converting the three-dimensional coordinates of the weld acquired in step S6 into spatial coordinates in the world coordinate system according to the conversion relation between the pixel coordinate system and the world coordinate system from step S7.
Further, the sensor in the camera based on the ToF technology is of an array type.
Further, in step S2, the preprocessing includes: the amplitude image is cropped to obtain an image containing the weld region, and the cropped image is subjected to filtering processing.
Further, the step S4 specifically includes:
s401, passing through the imaginary part of the Gabor filter function, wherein the expression is shown as formula (1), 4 scales are selected, f is 0.15, 0.3, 0.15 and 0.6 respectively, 6 directions are selected, and theta is 0,
Figure BDA0003045996640000031
Pi and
Figure BDA0003045996640000032
constructing 24 filter banks;
Figure BDA0003045996640000033
in the formula (1), x is a Gaussian scale in the main shaft direction; y is a gaussian scale in which the principal axis directions are orthogonal, f is a filter center frequency, θ is a rotation angle of the gaussian principal axis, η and γ are constants, and x ═ xcos θ + ysin θ, y ═ xsin θ + ycos θ.
step S402, performing spatial-domain convolution of the 24 filters obtained in step S401 with the binarized image obtained in step S3 to obtain preliminary edge detection images at 4 scales and 6 directions;
step S403, performing non-maximum suppression on the preliminary edge detection images obtained in step S402: each point is compared with its two neighboring points along the detection direction, retained if it is the local maximum, and set to 0 otherwise;
and step S404, fusing the preliminary edge detection images of the 4 scales and 6 directions, and then performing edge connection on the fused image to obtain the edge image of the weld.
Further, the step S7 specifically includes:
firstly, converting a world coordinate system into a camera coordinate system through rigid body transformation;
then, converting the camera coordinate system into an image coordinate system through perspective projection;
and finally, discretizing the image coordinate system to obtain a pixel coordinate system.
The invention has the beneficial effects that:
compared with a contact-based weld joint identification method, the method provided by the invention has the advantages that the algorithm is simple, the identification speed is higher, the complex weld joint can be accurately identified in a shorter time, and the identification precision is higher. And the ToF camera can directly acquire the depth information of the welding seam while acquiring the welding seam image, so that the target can be quickly and accurately reconstructed in three dimensions compared with a binocular vision method and a structured light method.
Drawings
Fig. 1 is a schematic diagram of the conversion from the world coordinate system to the camera coordinate system in embodiment 1.
Fig. 2 is a schematic diagram of conversion from the camera coordinate system to the image coordinate system in embodiment 1.
Fig. 3 is a schematic diagram of conversion from an image coordinate system to a pixel coordinate system in embodiment 1.
Fig. 4 is an original weld image obtained by using a camera based on the ToF technique in example 1.
Fig. 5 is a point cloud image of the weld centerline finally obtained in example 1.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Referring to fig. 1 to 5, the present embodiment provides a ToF-based weld joint identification method, including the following steps:
s1, acquiring an original weld image of the weldment to be processed through a camera based on the ToF technology, wherein the original weld image comprises an image of the original weld; an amplitude image and a depth image;
specifically, the original weld image is a weld image taken before or after welding; in this embodiment it is captured directly with a camera based on the ToF technology. The sensor in this camera is of an array type, so the three-dimensional information of the target can be acquired rapidly while each frame is captured. The sensor emits modulated infrared light, which undergoes diffuse reflection when it meets the weld; the receiving end obtains the corresponding weld depth information by analyzing the phase difference or time difference between the emitted and received light, and thereby the depth information, point cloud information, gray-scale information and the like of the target image.
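By way of illustration only (not part of the original disclosure), the following minimal Python sketch shows the continuous-wave depth calculation described above, assuming a hypothetical modulation frequency of 20 MHz:

```python
import numpy as np

C = 3.0e8      # speed of light, m/s
F_MOD = 20e6   # modulation frequency, Hz (hypothetical value)

def depth_from_phase(phase_shift_rad: np.ndarray) -> np.ndarray:
    """Per-pixel depth from the phase shift between emitted and received light.

    The light travels to the target and back, hence the factor 2 in
    d = c * dphi / (4 * pi * f_mod).
    """
    return C * phase_shift_rad / (4.0 * np.pi * F_MOD)

# Example: a phase shift of pi/2 at 20 MHz corresponds to about 1.875 m.
print(depth_from_phase(np.array([np.pi / 2])))
```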
Step S2, preprocessing the amplitude image obtained in the step S1 to obtain a preprocessed amplitude image;
specifically, the preprocessing comprises: the amplitude image is cropped to obtain an image containing the weld region, and the cropped image is subjected to filtering processing, the purpose of which is to reduce the influence of ambient light on the amplitude image.
More specifically, the weld image acquired by the camera based on the ToF technology contains various kinds of data and is easily affected by ambient light. The acquired image contains a large amount of noise and image information unrelated to the weld, and each pixel in the point cloud contains 3 values, namely its XYZ three-dimensional coordinates. The three-dimensional point cloud data of the spatial points in the collected point cloud image can be converted into two-dimensional matrices indexed by depth information. All the two-dimensional matrices are arranged in spatial order to obtain a central matrix, and the average difference between the depth information indexed by the central matrix and that indexed by the surrounding matrices is calculated. This average difference serves as a global threshold on depth information: three-dimensional points whose depth deviates too far from this threshold are removed as noise, improving subsequent computational efficiency and accuracy.
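A sketch of this depth-based denoising step follows, under the assumption that the organized point cloud is stored as an H × W × 3 array whose last channel holds the XYZ coordinates; the array layout and function name are illustrative, not from the patent:

```python
import numpy as np

def remove_depth_outliers(points: np.ndarray, max_deviation: float) -> np.ndarray:
    """Keep only points whose depth stays close to the local neighborhood mean.

    points: (H, W, 3) organized point cloud whose last channel is XYZ; Z is depth.
    Returns a boolean (H, W) mask of the points kept.
    """
    depth = points[..., 2]
    padded = np.pad(depth, 1, mode="edge")
    # Mean depth of the 8 surrounding matrices around every central element.
    neigh_sum = sum(
        padded[1 + dy : 1 + dy + depth.shape[0], 1 + dx : 1 + dx + depth.shape[1]]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    neigh_mean = neigh_sum / 8.0
    # The average difference plays the role of the global depth threshold.
    return np.abs(depth - neigh_mean) <= max_deviation
```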
Step S3, local threshold value binarization processing is carried out on the amplitude image after preprocessing obtained in the step S2 to obtain a corresponding binarized image;
specifically, the threshold is obtained by calculating a Gaussian-weighted average over a local image window, and the binarization threshold for the preprocessed amplitude image is determined using a histogram method, so that a binarized image reflecting both the global and local characteristics of the image is obtained.
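As a hedged illustration, this local binarization can be sketched with OpenCV's Gaussian-weighted adaptive threshold; the block size and offset below are assumed values, not taken from the patent:

```python
import cv2

amp = cv2.imread("amplitude.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
binary = cv2.adaptiveThreshold(
    amp, 255,
    cv2.ADAPTIVE_THRESH_GAUSSIAN_C,  # threshold = Gaussian-weighted local mean - C
    cv2.THRESH_BINARY,
    31,                              # blockSize: odd local-window size (assumed)
    5,                               # C: constant subtracted from the mean (assumed)
)
```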
step S4, extracting the edge features of the binarized image through a Gabor filter to acquire the edge image of the weld;
specifically, step S4 includes the following.

The principle of the Gabor filtering algorithm is as follows:

$$G(x,y)=\frac{f^{2}}{\pi\gamma\eta}\exp\!\left(-\frac{f^{2}}{\gamma^{2}}x'^{2}-\frac{f^{2}}{\eta^{2}}y'^{2}\right)e^{\,j2\pi f x'}$$

where $x' = x\cos\theta + y\sin\theta$, $y' = -x\sin\theta + y\cos\theta$, $f$ is the center frequency and $\theta$ is the selected direction.

Step S401, the imaginary part of the Gabor filter function, whose expression is given as formula (1), is used; 4 scales are selected, with $f$ = 0.15, 0.3, 0.45 and 0.6 respectively, and 6 directions are selected, with $\theta$ = 0, $\pi/6$, $\pi/3$, $\pi/2$, $2\pi/3$ and $5\pi/6$, constructing a bank of 24 filters:

$$G(x,y)=\frac{f^{2}}{\pi\gamma\eta}\exp\!\left(-\frac{f^{2}}{\gamma^{2}}x'^{2}-\frac{f^{2}}{\eta^{2}}y'^{2}\right)\sin\!\left(2\pi f x'\right)\qquad(1)$$

where $x'$ is the Gaussian scale in the principal-axis direction; $y'$ is the Gaussian scale orthogonal to the principal axis; $f$ is the filter center frequency; $\theta$ is the rotation angle of the Gaussian principal axis; $\eta$ and $\gamma$ are constants; in this embodiment, $\eta = 1$ and $\gamma = 2$.
Step S402, performing spatial-domain convolution of the 24 filters obtained in step S401 with the binarized image obtained in step S3 to obtain preliminary edge detection images at 4 scales and 6 directions;
specifically, a 3 × 3 convolution kernel is defined; the convolution kernel is slid over the binarized image and a summation operation is performed at each position, until the kernel has slid over every pixel of the whole image, giving the output values of all pixels. This yields the preliminary edge detection images at the different scales and directions, each of which represents weld edge information at one scale and in one direction.
Step S403, performing non-maximum value suppression on the preliminary edge detection image obtained in step S402, comparing two points near the corresponding image according to the detection direction, if the two points are the maximum value, reserving the two points, and if the two points are not the maximum value, changing the two points to 0;
and S404, fusing the preliminary edge detection images of 4 scales and 6 directions, and then performing edge connection on the fused images to obtain edge images of the welding seams.
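The following sketch (illustrative, not the patent's verified implementation) builds the 24 imaginary-part filters of formula (1) using the embodiment's constants η = 1 and γ = 2, and performs the spatial-domain convolution of step S402; the kernel size and the max-fusion used for step S404 are assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

ETA, GAMMA = 1.0, 2.0                               # constants from the embodiment
FREQS = (0.15, 0.30, 0.45, 0.60)                    # the 4 selected scales
THETAS = tuple(k * np.pi / 6 for k in range(6))     # the 6 selected directions

def gabor_imag(f: float, theta: float, ksize: int = 15) -> np.ndarray:
    """Imaginary part of the Gabor filter of formula (1); ksize is assumed."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xp = x * np.cos(theta) + y * np.sin(theta)      # x' (rotated coordinates)
    yp = -x * np.sin(theta) + y * np.cos(theta)     # y'
    envelope = np.exp(-(f**2 / GAMMA**2) * xp**2 - (f**2 / ETA**2) * yp**2)
    return (f**2 / (np.pi * GAMMA * ETA)) * envelope * np.sin(2 * np.pi * f * xp)

def edge_responses(binary_img: np.ndarray) -> np.ndarray:
    """Step S402: spatial-domain convolution with all 24 filters."""
    bank = [gabor_imag(f, t) for f in FREQS for t in THETAS]
    return np.stack([convolve(binary_img.astype(float), k) for k in bank])

# Step S404 (fusion) can then be sketched as a per-pixel maximum over the stack:
# fused = edge_responses(binary).max(axis=0)
```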
Step S5, performing radon transformation on the edge image acquired in the step S4 to obtain a horizontally corrected edge image, and identifying a weld image based on the appearance condition of the weld;
specifically, the image subjected to edge feature extraction is subjected to Radon Transform (RT):
the image is rotated about its center through an angle θ (between 0 and 180 degrees) to obtain the corresponding horizontal projection value r in ρ-θ space; the different values r form a projection set R; the maximum element of R is found together with the corresponding values θ and ρ, where θ is the horizontal rotation angle and ρ is the distance from the origin to the corresponding straight line;
then, the θ and ρ values obtained in ρ-θ space are converted into a point Q through which the weld edge passes in the image plane coordinate system;
the position of the weld edge in the image coordinate plane is solved according to the method for solving a linear equation;
and finally, the weld is identified as a whole according to prior knowledge of the appearance conditions of the weld (such as its width and type).
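A minimal sketch of this Radon-based line search, using scikit-image's radon; the ρ-offset convention below (centering on the image center) is an assumption about the library's output layout:

```python
import numpy as np
from skimage.transform import radon

def strongest_line(edge_img: np.ndarray):
    """Find the (theta, rho) of the brightest line in an edge image."""
    angles = np.arange(0.0, 180.0)                 # theta, sampled in degrees
    sinogram = radon(edge_img.astype(float), theta=angles)
    rho_idx, theta_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
    theta = np.deg2rad(angles[theta_idx])
    rho = rho_idx - sinogram.shape[0] // 2         # rho measured from image center
    return theta, rho   # line: rho = x*cos(theta) + y*sin(theta)
```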
step S6, acquiring two-dimensional information of the weld from the weld image identified in step S5, and solving the three-dimensional coordinates of the weld by combining it with the depth image acquired in step S1;
specifically, since the weld image within the amplitude image has already been identified in step S5, the two-dimensional information of the weld can be acquired directly. Since the amplitude image and the depth image acquired by the ToF camera correspond to each other directly, the depth information of the weld can be read from the depth image. Combining the two-dimensional information of the weld with its depth information yields the three-dimensional coordinate information of the weld.
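For illustration, the back-projection of weld pixels into camera-frame 3D coordinates can be sketched as follows, assuming pinhole intrinsics fx, fy, u0 and v0 obtained from calibration (these names are conventional, not from the patent):

```python
import numpy as np

def pixels_to_camera_xyz(us, vs, depth_img, fx, fy, u0, v0):
    """Back-project weld pixels (us, vs) with their depths into camera-frame 3D points."""
    z = depth_img[vs, us]                # depth at each identified weld pixel
    x = (us - u0) * z / fx               # invert the pinhole projection
    y = (vs - v0) * z / fy
    return np.stack([x, y, z], axis=-1)  # (N, 3) array of weld points
```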
Step S7, constructing a conversion relation among a world coordinate system, a camera coordinate system, an image coordinate system and a pixel coordinate system;
specifically, a world coordinate system is converted into a camera coordinate system through rigid body transformation; then, converting the camera coordinate system into an image coordinate system through perspective projection; and finally, discretizing the image coordinate system to obtain a pixel coordinate system.
More specifically:
World coordinate system $(X_w, Y_w, Z_w)$ — a three-dimensional coordinate system in the real world, describing the position of the object in the real world;
Camera coordinate system $(X_c, Y_c, Z_c)$ — a three-dimensional rectangular coordinate system with the focusing center of the camera as its origin and the optical axis as the $Z$ axis;
Image coordinate system $(x, y)$ — describes how the image in the camera coordinate system is projected onto the camera's image plane;
Pixel coordinate system $(u, v)$ — the image is composed of pixels, so the pixel coordinate system is used to locate a pixel within the image.
As shown in fig. 1, a rigid-body transformation is required to convert the world coordinate system into the camera coordinate system; a rigid-body transformation translates and rotates an object without deforming it.
The transformation between the world coordinate system and the camera coordinate system can therefore be completed by a rotation transformation followed by a translation transformation.
The transformation from the world coordinate system to the camera coordinate system can be represented by a rotation matrix $R$ and a translation vector $t$:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + t, \qquad R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}, \quad t = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}$$

expressed in the homogeneous coordinate system as:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$

wherein $[r_{11}, r_{12}, r_{13}]^{T}$, $[r_{21}, r_{22}, r_{23}]^{T}$ and $[r_{31}, r_{32}, r_{33}]^{T}$ are formed from the basis vectors of the original coordinate system, and $t_x$, $t_y$, $t_z$ denote the amounts of translation along the x, y and z directions in the transformation to the other coordinate system.
The conversion from the camera coordinate system to the image coordinate system is a perspective projection, i.e. from 3D to 2D; the schematic diagram is shown in fig. 2, where $P$ is a point in space corresponding to the point $p$ in the image coordinate system with coordinates $(x, y)$. By the similar-triangles principle:

$$x = \frac{f X_c}{Z_c}, \qquad y = \frac{f Y_c}{Z_c}$$

expressed in homogeneous coordinates as:

$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}$$

where $f$ denotes the focal length of the camera in fig. 2.
This step completes the conversion of the camera coordinate system to the ideal image coordinate system.
From the image coordinate system to the pixel coordinate system: the two coordinate systems lie in the same plane but have different origins, so a further transformation is required; the principle is shown in fig. 3, and the relationship between pixel coordinates and image coordinates is:

$$u = \frac{x}{dx} + u_0, \qquad v = \frac{y}{dy} + v_0$$

and, after homogenization:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{dx} & 0 & u_0 \\ 0 & \frac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

where $dx$ and $dy$ are the physical dimensions of one pixel along the x and y axes and $(u_0, v_0)$ is the pixel position of the image center.
Combining the above conversions between the four coordinate systems, the overall conversion matrix is obtained:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{f}{dx} & 0 & u_0 & 0 \\ 0 & \frac{f}{dy} & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$
In the world coordinate system, suppose the position coordinates of a point on the weld are (x, y, z). Combining the rotation matrix and the translation matrix, the coordinates of this point in the coordinate system of the ToF camera are obtained through the rigid-body conversion; applying the similar-triangles principle then completes the conversion of the weld point from three-dimensional space into the image coordinate system of the ToF camera.
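The complete chain of step S7 can be sketched as follows (illustrative only; R, t and the intrinsics f, dx, dy, u0, v0 are assumed to come from an extrinsic/intrinsic calibration of the ToF camera):

```python
import numpy as np

def world_to_pixel(p_world, R, t, f, dx, dy, u0, v0):
    """Project a world-frame point to pixel coordinates (the chain of step S7)."""
    p_cam = R @ p_world + t              # rigid-body transform (rotation + translation)
    x = f * p_cam[0] / p_cam[2]          # perspective projection by similar triangles
    y = f * p_cam[1] / p_cam[2]
    u = x / dx + u0                      # discretize onto the pixel grid
    v = y / dy + v0
    return np.array([u, v])

def pixel_to_world(u, v, z_cam, R, t, f, dx, dy, u0, v0):
    """Invert the chain for step S8, given the ToF-measured depth z_cam."""
    x = (u - u0) * dx                    # pixel -> image plane
    y = (v - v0) * dy
    p_cam = np.array([x * z_cam / f, y * z_cam / f, z_cam])
    return R.T @ (p_cam - t)             # camera -> world (inverse rigid-body)
```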
And step S8, converting the three-dimensional coordinates of the welding seam acquired in the step S6 into space coordinates in the world coordinate system according to the conversion relation between the pixel coordinate system and the world coordinate system in the step S7.
In order to position the weld in three dimensions, coordinate transformation is required. According to the above conversion method, the position of the real-world weld obtained by the camera based on the ToF technology can be put into correspondence with the pixels on the imaging plane of the ToF camera.
The Radon Transform (RT) projects the digital image along various angular directions; mathematically it is understood as the line integral of a two-dimensional function f (x, y), the resulting integral values being projected onto the RT plane.
The integral value obtained by the linear projective transformation is also called a Radon curve, which is determined by the distance ρ of the straight line in the image from the origin of the image coordinate system and the inclination angle θ of the straight line.
The digital image in the plane is integrated along the straight line ρ = x cos θ + y sin θ, and the result F(θ, ρ) of this line integral is the Radon transform of the digital image; that is, a point (θ, ρ) in the transform plane corresponds to the integral of the original image f (x, y) along one line. The Radon transform formula for a digital image f (x, y) is:
$$F(\theta,\rho)=\iint f(x,y)\,\delta(\rho-x\cos\theta-y\sin\theta)\,dx\,dy$$

wherein:

$$\delta(x)=\begin{cases}+\infty, & x=0\\ 0, & x\neq 0\end{cases},\qquad \int_{-\infty}^{+\infty}\delta(x)\,dx=1$$

f (x, y) is the pixel gray value at the point (x, y) of the image; δ is the Dirac function; ρ is the distance from the projection line to the origin in the (x, y) plane; and θ is the angle between the normal of the projection line and the x axis.
From the definition of RT, the delta function restricts the integral of the image to the straight line ρ = x cos θ + y sin θ, so RT can be seen as a linear projection of the digital image in the ρ-θ coordinate system, each point of which corresponds to a straight line in the image coordinate system; equivalently, RT is the projection onto the horizontal axis of the image obtained after rotating the digital image clockwise by the angle θ.
RT can therefore be used for edge-line detection in digital images: in the digital image coordinate system, a line of high gray values forms a relatively bright point in ρ-θ space, while a line of low gray values forms a relatively dark point in ρ-θ space.
In this embodiment, fig. 4 is the original weld image obtained with the camera based on the ToF technology, which is processed by the method of this embodiment to obtain fig. 5, the point cloud image of the finally extracted weld centerline.
Matters not described in detail in the present invention are well known to those skilled in the art.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (5)

1. A welding seam identification method based on ToF, characterized by comprising the following steps:
step S1, acquiring an original weld image of the weldment to be processed through a camera based on the ToF technology, wherein the original weld image comprises an amplitude image and a depth image;
step S2, preprocessing the amplitude image obtained in step S1 to obtain a preprocessed amplitude image;
step S3, performing local-threshold binarization on the preprocessed amplitude image obtained in step S2 to obtain a corresponding binarized image;
step S4, extracting the edge features of the binarized image through a Gabor filter to acquire an edge image of the weld;
step S5, performing Radon transformation on the edge image acquired in step S4 to obtain a horizontally corrected edge image, and identifying the weld image based on the appearance conditions of the weld;
step S6, acquiring two-dimensional information of the weld from the weld image identified in step S5, and solving the three-dimensional coordinates of the weld by combining it with the depth information acquired in step S1;
step S7, constructing the conversion relations among the world coordinate system, camera coordinate system, image coordinate system and pixel coordinate system;
and step S8, converting the three-dimensional coordinates of the weld acquired in step S6 into spatial coordinates in the world coordinate system according to the conversion relation between the pixel coordinate system and the world coordinate system from step S7.
2. The ToF-based weld joint identification method according to claim 1, wherein the sensor in the camera based on the ToF technology is of an array type.
3. The ToF-based weld joint identification method according to claim 2, wherein in step S2 the preprocessing includes: cropping the amplitude image to obtain an image containing the weld region, and subjecting the cropped image to filtering processing.
4. The ToF-based weld joint identification method according to claim 3, wherein the step S4 specifically comprises:
s401, passing through the imaginary part of the Gabor filter function, wherein the expression is shown as formula (1), 4 scales are selected, f is 0.15, 0.3, 0.15 and 0.6 respectively, 6 directions are selected, and theta is 0,
Figure FDA0003045996630000011
Pi and
Figure FDA0003045996630000012
constructing 24 filter banks;
Figure FDA0003045996630000013
in the formula (1), x is a Gaussian scale in the main shaft direction; y is a gaussian scale orthogonal to the principal axis direction, f is a filter center frequency, θ is a rotation angle of the gaussian principal axis, η and γ are constants, and x ═ x cos θ + y sin θ, y ═ x sin θ + y cos θ.
step S402, performing spatial-domain convolution of the 24 filters obtained in step S401 with the binarized image obtained in step S3 to obtain preliminary edge detection images at 4 scales and 6 directions;
step S403, performing non-maximum suppression on the preliminary edge detection images obtained in step S402: each point is compared with its two neighboring points along the detection direction, retained if it is the local maximum, and set to 0 otherwise;
and step S404, fusing the preliminary edge detection images of the 4 scales and 6 directions, and then performing edge connection on the fused image to obtain the edge image of the weld.
5. The ToF-based weld joint identification method according to claim 4, wherein the step S7 specifically comprises:
firstly, converting a world coordinate system into a camera coordinate system through rigid body transformation;
then, converting the camera coordinate system into an image coordinate system through perspective projection;
and finally, discretizing the image coordinate system to obtain a pixel coordinate system.
CN202110472422.9A 2021-04-29 2021-04-29 Welding seam identification method based on ToF Pending CN113192029A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110472422.9A CN113192029A (en) 2021-04-29 2021-04-29 Welding seam identification method based on ToF

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110472422.9A CN113192029A (en) 2021-04-29 2021-04-29 Welding seam identification method based on ToF

Publications (1)

Publication Number Publication Date
CN113192029A true CN113192029A (en) 2021-07-30

Family

ID=76980591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110472422.9A Pending CN113192029A (en) 2021-04-29 2021-04-29 Welding seam identification method based on ToF

Country Status (1)

Country Link
CN (1) CN113192029A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114453707A (en) * 2022-03-16 2022-05-10 南通大学 Multi-scene small-sized automatic welding robot based on ToF technology


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105665970A (en) * 2016-03-01 2016-06-15 中国科学院自动化研究所 System and method for automatic generation for path points of welding robot
CN109658456A (en) * 2018-10-29 2019-04-19 中国化学工程第六建设有限公司 Tank body inside fillet laser visual vision positioning method
CN112238304A (en) * 2019-07-18 2021-01-19 山东淄博环宇桥梁模板有限公司 Method for automatically welding small-batch customized special-shaped bridge steel templates by mechanical arm based on image visual recognition of welding seams
CN111489436A (en) * 2020-04-03 2020-08-04 北京博清科技有限公司 Three-dimensional reconstruction method, device and equipment for weld joint and storage medium
CN112037189A (en) * 2020-08-27 2020-12-04 长安大学 Device and method for detecting geometric parameters of steel bar welding seam
CN112053376A (en) * 2020-09-07 2020-12-08 南京大学 Workpiece weld joint identification method based on depth information
CN112308872A (en) * 2020-11-09 2021-02-02 西安工程大学 Image edge detection method based on multi-scale Gabor first-order derivative
CN112308873A (en) * 2020-11-09 2021-02-02 西安工程大学 Edge detection method for multi-scale Gabor wavelet PCA fusion image



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination