CN112381783A - Weld track extraction method based on red line laser - Google Patents


Info

Publication number
CN112381783A
Authority
CN
China
Prior art keywords: image, value, point, points, welding
Legal status
Granted
Application number
CN202011256130.3A
Other languages
Chinese (zh)
Other versions
CN112381783B (en)
Inventor
孙炜
刘权利
Current Assignee
Hunan University
Original Assignee
Hunan University
Priority date
Filing date
Publication date
Application filed by Hunan University
Priority to CN202011256130.3A
Publication of CN112381783A
Application granted
Publication of CN112381783B
Legal status: Active

Classifications

    • G06T7/0004 Industrial image inspection
    • B23K37/00 Auxiliary devices or processes, not specially adapted to a procedure covered by only one of the preceding main groups
    • B23K37/0211 Carriages for supporting the welding or cutting element travelling on a guide member, e.g. rail, track
    • G06N3/045 Combinations of networks
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/70 Denoising; Smoothing
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/90 Determination of colour characteristics
    • G06T2207/20032 Median filtering
    • G06T2207/30241 Trajectory


Abstract

The invention discloses a weld track extraction method based on red line laser, which comprises the following steps: S1, according to the built line laser camera platform, moving the camera platform to the workpiece shooting area, starting the line laser, and taking pictures; S2, reading the shot picture, performing channel separation to obtain the R channel image, then performing median filtering and Gaussian filtering, obtaining a gray level histogram, obtaining a threshold value from the histogram, and performing normalized threshold processing to find the line laser track in the image; S3, thinning the image, fitting a straight line through the two end points of the line laser track, obtaining the break point of the track from the point-to-line distance formula, and taking a weighted average of the foreground pixels within a certain range of the break point to obtain the welding point; S4, drawing the straight-line track in the initial image or the input image according to the welding points; S5, transforming the welding points from the image to the robot coordinate system using the transformation matrix obtained by hand-eye calibration, and completing the weld with the robot.

Description

Weld track extraction method based on red line laser
Technical Field
The invention belongs to the field of industrial application of welding, and particularly relates to a weld track extraction method based on red line laser.
Background
In recent years, with the continuous acceleration of China's industrialization, market competition has become increasingly fierce, and upgrading product manufacturing technology has become an important means for enterprises to improve the competitiveness of their products. Welding, a mature material-forming process, is widely applied in manufacturing industries such as aerospace, rail vehicles, shipbuilding and electromechanical equipment, and the number of industrial products produced each year with welding as the material-joining means is huge. As demand grows, the welding workload keeps increasing, and many welding robots have been adopted to replace manual welding on production lines in existing factory workshops.
Disclosure of Invention
The purpose of the invention is as follows: in order to solve the technical problems in the existing welding process, the invention provides a weld track extraction method based on red line laser. It improves both the speed and the precision of weld recognition, raising the operation speed while recognizing the weld accurately, improving the welding speed and precision of welding robots in the workshop, and reducing the product reject ratio and the waste of resources. The method is suitable for butt welds and fillet welds, and the extraction effect on straight welds is especially pronounced.
The technical scheme is as follows: in order to achieve the above object, the present invention provides a weld track extraction method based on red line laser, including the following steps:
(a) shooting by utilizing a built line laser camera platform; fixing the linear laser and the camera at an included angle of 30 degrees, fixing the linear laser and the camera together on the robot end effector, moving the camera platform to a workpiece shooting area, starting the linear laser and finely adjusting the camera platform, wherein the camera imaging area comprises a complete workpiece area; when the line laser strikes the initial point of the welding line, fixing the position, and then controlling a camera to take a picture to obtain a picture of the initial point of the welding line of the workpiece; and after the shooting of the starting point is finished, moving the camera platform to the welding seam terminal position, and then controlling the camera to shoot to obtain a workpiece welding seam terminal picture.
(b) After the weld picture is obtained, channel separation is performed on it and the R channel image is retained. Since the line laser is red, keeping the R channel facilitates the subsequent extraction of the weld. Gaussian filtering and median filtering are then performed on the separated image. If the median filtering window is too large, image detail information can be lost, which is fatal to weld extraction; if the window is too small, the filtering effect is poor and no satisfactory result is achieved. Therefore Gaussian filtering is added before the median filtering: once the Gaussian noise in the image has been filtered, the median filtering removes the remaining noise well. Gaussian filtering requires three parameters: the size of the Gaussian convolution kernel, the standard deviation of the one-dimensional Gaussian convolution kernel in the horizontal direction, and the standard deviation in the vertical direction. The formula of the Gaussian convolution kernel is as follows:
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))   (1)
In the formula, σ is the standard deviation of G(x, y) in the horizontal and vertical directions; here the two standard deviations are made equal, so a single σ² appears in the formula. Experiments show that a 3 × 3 Gaussian filter template gives a good filtering effect.
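As an illustrative sketch (not part of the patent), a normalized Gaussian convolution kernel with equal horizontal and vertical standard deviations, matching the kernel formula above, can be generated in pure Python; the function name and the default σ = 1 are assumptions:

```python
import math

def gaussian_kernel(size=3, sigma=1.0):
    """Build a normalized size x size Gaussian kernel G(x, y) with equal
    horizontal and vertical standard deviation sigma."""
    c = size // 2  # center index of the kernel
    kernel = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
               / (2 * math.pi * sigma ** 2)
               for x in range(size)]
              for y in range(size)]
    total = sum(sum(row) for row in kernel)
    # Normalize so the kernel weights sum to 1 (preserves image brightness)
    return [[v / total for v in row] for row in kernel]

k = gaussian_kernel(3, 1.0)  # the 3 x 3 template discussed in the text
```

The kernel is symmetric and peaks at the center, so convolving with it smooths the image without shifting the laser track.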
(c) And obtaining the filtered image, displaying a gray histogram of the filtered image, observing the gray histogram, finding out a threshold of a laser track, and performing normalization threshold processing according to the obtained threshold, namely, the foreground gray value is 1 and the background gray value is 0 in the image subjected to the threshold processing, so that the foreground and background contrast of the image is enhanced, and the characteristics of the laser track are highlighted.
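A minimal sketch of the normalized thresholding step (the function name and the example threshold are illustrative, not from the patent): pixels above the histogram-derived threshold become foreground 1, the rest become background 0.

```python
def normalized_threshold(gray, thresh):
    """Map pixels above `thresh` to foreground 1 and the rest to background 0."""
    return [[1 if v > thresh else 0 for v in row] for row in gray]

# Toy grayscale image as nested lists; 128 is an assumed histogram threshold
img = [[10, 200, 30], [220, 15, 240], [5, 5, 210]]
binary = normalized_threshold(img, 128)  # bright laser pixels -> 1
```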
(d) After the previous three steps, a binary image containing only the line laser track is obtained. An image thinning operation is performed on it, using an improved thinning algorithm based on Zhang & Suen to extract the line laser track. The points on the track are traversed with the inter-point distance formula. Since the break angle of the line laser on the workpiece is large, a threshold for the break angle can be set in practice; when the break angle exceeds the threshold, the two end points of the line laser track can be found from the property that the distance between the two end points is maximal, and a straight line is then fitted through them.
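The end-point search described above, taking the pair of track points whose mutual distance is maximal, can be sketched as a brute-force illustration (function name is an assumption, not the patent's code):

```python
from itertools import combinations

def track_endpoints(points):
    """Return the pair of track points with the maximum mutual distance,
    which the method takes as the two end points of the line laser track."""
    def dist2(a, b):
        # Squared Euclidean distance; no sqrt needed for comparison
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return max(combinations(points, 2), key=lambda pair: dist2(*pair))

pts = [(0, 0), (3, 1), (7, 2), (10, 8)]
p, q = track_endpoints(pts)  # the two extreme points of the track
```

For a thinned single-pixel track the point count is small, so the O(n²) search is acceptable in practice.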
Because the line laser track obtained by the Zhang & Suen thinning algorithm is not single pixel, namely the thinning is not thorough, the improved thinning algorithm is provided based on the Zhang & Suen thinning principle.
The Zhang & Suen thinning algorithm is an iterative algorithm, and the iterative process is divided into two steps.
The first step is as follows: and traversing the foreground pixel points, and marking the pixel points meeting the following 4 conditions as deleted.
2 ≤ N(P1) ≤ 6 (2)
S(P1) = 1 (3)
Value_P2 · Value_P4 · Value_P6 = 0 (4)
Value_P4 · Value_P6 · Value_P8 = 0 (5)
In formula (2), N(P1) denotes the number of foreground pixels among the 8 neighbors of P1; S(P1) denotes the number of times the gray value changes from 0 to 1 while rotating clockwise from P2 through P9 and back to P2, where 0 is the background gray value and 1 is the foreground gray value; Value_P2, Value_P4, Value_P6 and Value_P8 denote the image gray values at positions P2, P4, P6 and P8 respectively. A specific example is as follows:
[3 × 3 neighborhood example figure omitted in the source]
In this example, N(P1) = 4 and S(P1) = 3, Value_P2 · Value_P4 · Value_P6 = 0 and Value_P4 · Value_P6 · Value_P8 = 0 · 1 · 0 = 0; since S(P1) = 3 ≠ 1, condition (3) of the first step is not satisfied, and the point is therefore not marked for deletion.
The second step is that: and traversing the foreground pixel points, and marking the points meeting the following four conditions as deletion.
2 ≤ N(P1) ≤ 6 (6)
S(P1) = 1 (7)
Value_P2 · Value_P4 · Value_P8 = 0 (8)
Value_P2 · Value_P6 · Value_P8 = 0 (9)
The points marked for deletion in the two steps are deleted, and the iteration loops until no point satisfying the conditions remains; the resulting image is the thinned image.
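The quantities N(P1) and S(P1) used in both steps can be sketched as follows, with the 8 neighbors listed clockwise starting from the pixel directly above P1 (the helper names are illustrative assumptions):

```python
def neighbors_clockwise(img, r, c):
    """Return [P2..P9]: the 8 neighbors of P1 = (r, c), clockwise from the top."""
    return [img[r-1][c], img[r-1][c+1], img[r][c+1], img[r+1][c+1],
            img[r+1][c], img[r+1][c-1], img[r][c-1], img[r-1][c-1]]

def N(nbrs):
    """Number of foreground pixels among the 8 neighbors."""
    return sum(nbrs)

def S(nbrs):
    """Number of 0 -> 1 transitions in the cyclic sequence P2, P3, ..., P9, P2."""
    seq = nbrs + nbrs[:1]  # close the cycle back to P2
    return sum(1 for a, b in zip(seq, seq[1:]) if a == 0 and b == 1)
```

A point with S(P1) = 1 sits on a simple edge of the track; larger values indicate junctions or noise, which is why the standard algorithm keeps them.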
Because the deletion conditions of the Zhang & Suen thinning algorithm are insufficiently constrained, some points that should be deleted are never marked: they fail the constraint S(P1) = 1 yet are nevertheless deletable. These points fall into three classes, as follows:
(1) 4 cases of 2 target pixels in 8 neighborhood points are as follows:
[Figure omitted: the 4 first-class neighborhood configurations]
(2)8 cases with 3 target pixels in the 8 neighborhoods are as follows:
[Figure omitted: the 8 second-class neighborhood configurations]
(3) 4 cases with 4 target pixels in the 8 neighborhoods are as follows:
[Figure omitted: the 4 third-class neighborhood configurations]
In order to find these deletion points, the gray values of the 8 neighborhood pixels around the central point P1 are encoded: starting from P1 and rotating clockwise, the image gray values of the points are arranged from left to right. A specific coding example is as follows:
[Figure omitted: 8-neighborhood pixel coding order]
Encoding the three classes of 8-neighborhood configurations yields their binary data, which is then converted to decimal. For example, arranging the gray values of the 9 pixels P1 to P9 in the figure gives 100011011, which converts to the decimal number 283. The 4 first-class points encode to 110000010, 110100000, 100101000 and 100001010, with decimal values 386, 416, 296 and 266; the 8 second-class points encode to 110110000, 110000110, 101101000, 100001011, 111000010, 100111010, 100101100 and 110101001, with decimal values 432, 390, 360, 267, 450, 314, 300 and 425; the 4 third-class points encode to 110110001, 111000110, 101101100 and 100011011, with decimal values 433, 454, 364 and 283. During traversal these 16 pixel configurations cannot all be deleted, otherwise breakpoints appear. Experiments show that deleting the following set of configurations works best: {386, 416, 296, 266, 432, 360, 300, 425, 433, 364}.
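The encoding just described — arranging P1 through P9 clockwise into a 9-bit binary number and converting to decimal — can be sketched as follows (the function name is an assumption); it reproduces the worked example value 283:

```python
def encode_neighborhood(p1, nbrs):
    """Encode [P1, P2..P9] as a 9-bit binary string (P1 first, then the
    8 neighbors clockwise from the top) and convert it to decimal."""
    bits = ''.join(str(v) for v in [p1] + nbrs)
    return int(bits, 2)  # interpret the string as base-2

# The worked example from the text: P1..P9 = 1,0,0,0,1,1,0,1,1 -> 283
code = encode_neighborhood(1, [0, 0, 0, 1, 1, 0, 1, 1])
```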
The improved refinement algorithm comprises the following steps:
the first step is as follows: and traversing the foreground pixel points, and marking the points meeting the following 4 conditions as deletion points and deleting the deletion points.
2 ≤ N(P1) ≤ 6 (10)
S(P1) = 1 or B(P1) ∈ {386, 416, 296, 266, 432, 360, 300, 425, 433, 364} (11)
Value_P2 · Value_P4 · Value_P6 = 0 (12)
Value_P4 · Value_P6 · Value_P8 = 0 (13)
In the formulas, N(P1) denotes the number of foreground pixels among the 8 neighbors of P1; starting from the neighbor directly above P1 and rotating clockwise, the neighbors are P2, P3, P4, P5, P6, P7, P8 and P9. S(P1) denotes the number of times the gray value changes from 0 to 1 while rotating clockwise from P2 through P9 and back to P2, where 0 is the background gray value and 1 is the foreground gray value. Value_P2, Value_P4, Value_P6 and Value_P8 denote the image gray values at positions P2, P4, P6 and P8 respectively. B(P1) is the decimal value of the binary sequence formed by the pixel values of P1, P2, P3, P4, P5, P6, P7, P8, P9, namely:
B(P1) = Value_P1 · 2⁸ + Value_P2 · 2⁷ + Value_P3 · 2⁶ + Value_P4 · 2⁵ + Value_P5 · 2⁴ + Value_P6 · 2³ + Value_P7 · 2² + Value_P8 · 2¹ + Value_P9 · 2⁰   (14)
the second step is that: and traversing the foreground pixel points, marking the points meeting the following 4 conditions as deletion points, and deleting the deletion points.
2 ≤ N(P1) ≤ 6 (15)
S(P1) = 1 or B(P1) ∈ {386, 416, 296, 266, 432, 360, 300, 425, 433, 364} (16)
Value_P2 · Value_P4 · Value_P8 = 0 (17)
Value_P2 · Value_P6 · Value_P8 = 0 (18)
The two steps are iterated in a loop until all points satisfying the conditions are deleted. At that point the thinned image is a single-pixel image, the line laser track is a single-pixel track, and the fitted straight line is more accurate.
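A sketch of the first-step marking test of the improved algorithm, combining the standard Zhang & Suen conditions with the extra B(P1) set from the text; the neighbor layout (clockwise from the top) and the function name are illustrative assumptions:

```python
DELETE_CODES = {386, 416, 296, 266, 432, 360, 300, 425, 433, 364}

def first_step_deletable(p1, nbrs):
    """First-step deletion test of the improved thinning algorithm.
    nbrs = [P2..P9] clockwise from the top; P2, P4, P6, P8 are
    nbrs[0], nbrs[2], nbrs[4], nbrs[6]."""
    n = sum(nbrs)                     # N(P1): foreground neighbor count
    seq = nbrs + nbrs[:1]             # cyclic sequence P2..P9, P2
    s = sum(1 for a, b in zip(seq, seq[1:]) if a == 0 and b == 1)  # S(P1)
    b_code = int(''.join(map(str, [p1] + nbrs)), 2)                # B(P1)
    return (p1 == 1 and 2 <= n <= 6
            and (s == 1 or b_code in DELETE_CODES)
            and nbrs[0] * nbrs[2] * nbrs[4] == 0    # P2 * P4 * P6 == 0
            and nbrs[2] * nbrs[4] * nbrs[6] == 0)   # P4 * P6 * P8 == 0
```

The second step differs only in conditions (17) and (18), i.e. which corner products must vanish; looping both steps until nothing changes yields the single-pixel track.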
(e) Using the straight line fitted in step (d), the point-to-line distance is computed for every point with pixel value 1 in the image, and the point with the maximum distance is taken: this is the break point. Simply taking the break point as the welding point increases the error, so the points within a certain range of the break point are weighted-averaged to reduce it. The diameter of the welding wire is currently about 2 mm, i.e. about 6 pixels, so a circle with the break point as center and radius 3 is drawn, the coordinates of the points with pixel value 1 inside the circle are stored, and the welding point coordinates are finally computed with a weighted-average algorithm and stored. The radius of 3 is only an example; its value may be set as required.
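The break-point and welding-point computation can be sketched as follows (the fitted line is given in a·x + b·y + c = 0 form; equal weights are used for the average, which is an assumption since the patent does not spell out its weighting):

```python
import math

def break_point(points, a, b, c):
    """Return the point with maximum distance to the line a*x + b*y + c = 0."""
    norm = math.hypot(a, b)
    return max(points, key=lambda p: abs(a * p[0] + b * p[1] + c) / norm)

def weld_point(points, center, radius=3):
    """Average of the foreground points within `radius` of the break point."""
    near = [p for p in points
            if (p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2 <= radius ** 2]
    n = len(near)
    return (sum(p[0] for p in near) / n, sum(p[1] for p in near) / n)

track = [(0, 0), (5, 4), (10, 0)]
bp = break_point(track, 0, 1, 0)   # line y = 0 through the two end points
```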
(f) And extracting welding points from the initial image and the end image according to the method, constructing a straight line, namely a welding line trajectory line, according to the coordinates of the initial point and the end point, and drawing the welding line trajectory line in the initial image or the end image.
(g) Since the welding point obtained in the above steps is located in the pixel coordinate system, it needs to be converted into the robot coordinate system so that the robot end effector can be controlled to complete the welding task.
The conversion from the pixel coordinate system to the robot coordinate system requires two steps.
The first step is as follows: converting the pixel coordinate system into a world coordinate system; the conversion formula is as follows:
s · (u, v, 1)^T = M · (R · P_world + T)   (19)
In formula (19), P_world = (X_w, Y_w, Z_w)^T is the coordinate point in the world coordinate system, R is the rotation matrix of the camera coordinate system relative to the world coordinate system, M is the internal reference (intrinsic) matrix of the camera, s is the value of the world coordinate point along the Z direction of the camera coordinate system, (u, v, 1)^T is the coordinate point in the pixel coordinate system, and T is the translation matrix of the camera coordinate system relative to the world coordinate system.
The second step is that: converting the world coordinate system into a robot coordinate system; the conversion formula is as follows:
P_robot = (base_T_end) · (end_T_cam) · (cam_T_world) · P_world   (20)
In formula (20), P_robot is the coordinate point in the robot coordinate system; base_T_end is the transformation matrix from the robot end effector coordinate system to the robot base coordinate system; end_T_cam is the transformation matrix from the camera coordinate system to the robot end effector coordinate system, obtained by the hand-eye calibration of the robot; cam_T_world is the transformation matrix from the world coordinate system to the camera coordinate system; and P_world is the coordinate point in the world coordinate system. Through these two steps the coordinate points in the pixel coordinate system, i.e. the welding points, are converted into points in the robot coordinate system, and the robot is then controlled to complete the welding.
As a preferred embodiment, in step a, a camera platform needs to be built and the relative pose of the camera and the line laser determined; the camera platform is then moved to the start point of the workpiece weld to take a picture, and afterwards to the end point of the workpiece to take another. The picture size is 1280 × 960.
As a preferred embodiment, in step b, since the laser is red, the R channel is kept when the image channels are separated, which enhances the track information of the line laser on the workpiece. Channel separation only splits the B, G and R channels and cannot filter noise in the image, so the noise must first be filtered out to eliminate interference when the weld is extracted. Experiments show that median filtering alone, with various filtering windows, cannot achieve a satisfactory effect; a combination of Gaussian filtering and median filtering is therefore adopted: a Gaussian filtering operation with a 5 × 5 kernel is performed first, followed by median filtering with a 5 × 5 window.
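The median-filtering stage of this pipeline can be sketched in pure Python (a 3 × 3 window for brevity instead of the 5 × 5 window mentioned above; leaving border pixels unchanged is an implementation choice of this sketch, not the patent's):

```python
import statistics

def median_filter(gray, k=3):
    """Apply a k x k median filter; border pixels are left unchanged."""
    h, w, r = len(gray), len(gray[0]), k // 2
    out = [row[:] for row in gray]  # copy so borders keep their values
    for i in range(r, h - r):
        for j in range(r, w - r):
            window = [gray[i + di][j + dj]
                      for di in range(-r, r + 1) for dj in range(-r, r + 1)]
            out[i][j] = statistics.median(window)
    return out

# An isolated bright noise pixel is removed, which is what makes the
# median filter suited to the salt-type noise in the laser image
noisy = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
clean = median_filter(noisy)
```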
As a preferred embodiment, in step c, the image is binarized with a reasonable threshold. Because adaptive thresholding did not give the desired result, the histogram of the image is examined to obtain an accurate segmentation threshold, and a normalized thresholding algorithm is then applied: in the thresholded image the foreground is 1 and the background is 0. After filtering and normalized thresholding, only the line laser track remains in the image.
As a preferred embodiment, in step d, the binarized image needs to be thinned: the line laser track obtained after filtering and thresholding is very rough, so to obtain a precise welding point the laser track must be thinned down to a single-pixel track. The Zhang & Suen thinning algorithm has the advantages of fast thinning, preserving the connectivity of the thinned curve and avoiding burrs, but it cannot guarantee that the curve obtained after thinning is a single-pixel curve, i.e. the thinning is not thorough; the thinning algorithm is therefore optimized on the basis of the Zhang & Suen algorithm so that it produces a single-pixel curve, namely the line laser track.
As a preferred embodiment, after step d a binary image containing only the single-pixel line laser track is obtained. The coordinates of the two end points of the laser track are found and a straight line is fitted through them; the distance from each point on the single-pixel track to this line is computed with the point-to-line distance formula, and the point with the maximum distance is the break point from which the welding point is derived. Once the start-point and end-point welding points have both been obtained, they are stored, their coordinates are converted into the robot coordinate system, and the robot motion is controlled.
Has the advantages that: compared with the prior art, the technical scheme of the invention has the following beneficial technical effects:
the technical scheme of the invention increases the extraction speed of the welding line, reduces the complexity and improves the speed of welding line identification. The traditional welding seam recognition is very difficult to process an image to obtain a welding seam by directly shooting with a camera due to the industrial field environment, and because the welding seam features are not prominent in the workpiece environment, the time consumption and the labor consumption of direct extraction are high. Therefore, in order to accelerate the extraction speed and the extraction precision of the welding seam, red line laser is carried beside the camera, characteristics are added to the welding seam, the image processing flow is simplified, and the processing speed is increased. The invention can conveniently extract the welding line, does not increase the hardware cost because of the low price of the line laser, and has strong adaptability to the industrial environment because of the advantages of high brightness, monochromaticity, good coherence and the like of the laser. In practical application, the relative pose of the camera and the line laser is fixed, the camera is moved to the position where the line laser just hits the starting point and the end point of the workpiece, then the camera shoots and processes the shot image to obtain the welding seam track, in practical application, the camera is not fixed and can move according to the position of the workpiece, the whole system is flexible to control, the welding seam detection real-time performance is high, and the industrial field environment can be well adapted.
Drawings
FIG. 1 is a schematic representation of a weld of a workpiece for weld extraction according to an embodiment of the present invention;
FIG. 2 is a flowchart of an algorithm for weld extraction according to an embodiment of the present invention;
FIG. 3 is a system block diagram of weld extraction according to an embodiment of the present invention;
FIG. 4 is an input image of a weld extraction of an embodiment of the present invention;
FIG. 5 is an image after separation of a channel in an embodiment of the invention;
FIG. 6 is an image after normalized thresholding in an embodiment of the invention;
FIG. 7 is an image after refinement in an embodiment of the invention;
FIG. 8 is an image after fitting a straight line in an embodiment of the present invention;
FIG. 9 is an image of a weld detected in an embodiment of the present invention.
Detailed Description
The following describes an embodiment of the present invention in detail with reference to the drawings. An industrial application scenario of fillet weld extraction is taken as the embodiment; the sensors used are a red line laser sensor and an industrial camera, and a schematic diagram of the system is shown in fig. 3. The scenario is merely an illustration of one specific embodiment and is not a limitation of the invention; using the method in other industrial production scenarios is within the scope of the invention.
Example (b):
the present embodiment is explained by using a work piece to be welded in a fillet weld, and a schematic diagram thereof is shown in fig. 1, where a blue line AB marked in the drawing is a weld:
as shown in fig. 2, a weld trace extraction method based on red line laser includes the following steps:
(a) Shooting with the built line laser camera platform: the camera platform is moved to the workpiece shooting area so that the camera imaging area contains the complete workpiece area, the line laser is started, and the weld start position and end position irradiated by the line laser are shot.
(b) And after the welding line picture is obtained, performing channel separation on the RGB image, and reserving an R channel. And performing Gaussian filtering and median filtering on the separated images. And before the median filtering is performed, the Gaussian filtering is added, and the median filtering is added after the Gaussian noise in the image is filtered, so that the noise in the image can be well filtered.
(c) And obtaining the filtered image, displaying a gray level histogram of the filtered image to obtain a reasonable threshold value for thresholding, and performing normalization threshold value processing according to the obtained threshold value, namely, the foreground value is 1 and the background value is 0 in the thresholded image, so that the foreground and background contrast of the image is enhanced, and the weld joint point characteristics are highlighted.
(d) Obtaining a binary image only containing a line laser track after the three steps of operations, carrying out image thinning operation on the binary image, extracting the central line of the line laser track by using an improved thinning algorithm based on Zhang & Suen, and then obtaining a starting point and an end point of the track containing the single-pixel central line to carry out straight line fitting.
(e) And calculating the point-to-straight line distance of all points with the pixel value of 1 in the image according to the obtained straight line, taking the point with the largest distance as a folding point, then carrying out weighted average on foreground pixel points in a certain range of the folding point to obtain the coordinates of the welding point, and storing the welding point data.
(f) And extracting welding points from the initial image and the end image, drawing a welding line track line in the initial image or the end image, sending the initial and end welding point data to the robot to control the robot to move, and calculating errors.
In step (a), the camera platform needs to be moved; alternatively, if the workpiece is placed on a conveyor belt, the belt can be moved so that the workpiece lies in the camera's shooting area. The line laser is then switched on to illuminate the weld start point, and a shot is taken to obtain the weld start-point image of the workpiece; the weld end-point image is obtained by repeating the same steps. The captured start-point image is shown in Fig. 4 below.
In step (b), the captured image is split into channels; since the line laser is red, keeping the R-channel grayscale image facilitates weld extraction. The image after separating the R channel is shown in Fig. 5 below. The separated image is then filtered, with Gaussian filtering added before the median filtering, which improves the filtering result and simplifies the subsequent processing steps. Using median filtering or Gaussian filtering alone not only filters poorly and fails to meet the requirements, but its parameters are also hard to tune to a satisfactory effect; therefore the two filters are combined and their parameters set so that a satisfactory filtering result is achieved.
In step (c), thresholding is performed. An image processed with an adaptive thresholding algorithm carries background information: adaptive thresholding works well on the image as a whole, but poorly in a scene that needs only the single track. Therefore the gray-value distribution of the histogram generated in this application scene is inspected first, a reasonable threshold is selected from the histogram, the image is thresholded, and the result is normalized to obtain the binary image of the line-laser track. The resulting binary image is observed to be clean, with no noise-point interference; the image after normalized thresholding is shown in Fig. 6.
In step (d), the binary image is thinned to find the center track of the line laser, from which the weld point is then located. In practice, applying the Zhang & Suen thinning algorithm directly to our images gives poor results: the generated laser track is not a single-pixel track. To locate the weld point more accurately, an improved Zhang & Suen thinning algorithm is used to obtain a good binary image of the line-laser track; the image after the thinning operation is shown in Fig. 7.
In step (e), a fitted line is first obtained using the Euclidean distance formula and straight-line fitting; the point on the line-laser track farthest from the fitted line, found with the point-to-line distance formula, is the fold point. To reduce error, the foreground pixels within a given range of the fold point are weight-averaged to obtain the weld-point coordinates. The fitted line is shown in Fig. 8 below. The weld point is computed for both images and stored. To check whether the weld point is correct, the extracted weld seam is drawn with a blue line after the weld point is obtained; comparison in the images shows that the extracted weld position essentially coincides with the actual weld position, as shown in Fig. 9 below.
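The fold-point search and neighborhood averaging of step (e) can be sketched as follows. This assumes the input is the thinned single-pixel track (foreground = 1); the V-shaped synthetic track, the line through the two track endpoints, and the radius R = 3 are illustrative assumptions, and an unweighted mean stands in for the weighted average.

```python
import numpy as np

# Synthetic single-pixel track bent into a "V".
binary = np.zeros((50, 50), dtype=np.uint8)
for x in range(40):
    binary[abs(x - 20) + 5, x] = 1

ys, xs = np.nonzero(binary)
p1 = np.array([xs.min(), ys[xs.argmin()]], float)   # track start point
p2 = np.array([xs.max(), ys[xs.argmax()]], float)   # track end point
d = p2 - p1

# Perpendicular distance of every foreground point to the line p1-p2;
# the farthest point is the fold point.
dist = np.abs(d[1] * (xs - p1[0]) - d[0] * (ys - p1[1])) / np.hypot(*d)
fold = np.array([xs[dist.argmax()], ys[dist.argmax()]])

# Average the foreground points within radius R of the fold point
# to get the weld-point coordinates.
R = 3
mask = (xs - fold[0]) ** 2 + (ys - fold[1]) ** 2 <= R * R
weld = np.array([xs[mask].mean(), ys[mask].mean()])
```

Averaging the neighborhood instead of taking the raw distance maximum damps single-pixel jitter in the thinned track, which is the stated purpose of the weighted average.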

Claims (5)

1. A weld track extraction method based on red line laser, characterized by comprising the following steps:
(a) shooting with a built line-laser camera platform: moving the camera platform to the workpiece shooting area so that the camera imaging area contains the complete workpiece area; switching on the line laser and adjusting the camera platform; when the line laser falls on the start point of the weld, fixing the camera platform and controlling the camera to shoot to obtain the weld start-point picture of the workpiece; after the start-point shot is finished, moving the camera platform to the weld end-point position and controlling the camera to shoot to obtain the weld end-point picture of the workpiece;
(b) after the weld pictures are obtained, performing channel separation on the RGB image and keeping the R channel to facilitate subsequent weld extraction, and performing Gaussian filtering and median filtering on the separated image to filter out the noise in the image;
(c) displaying the gray-level histogram of the filtered image to obtain a reasonable threshold, and performing normalized thresholding with that threshold, i.e. setting foreground pixels to 1 and background pixels to 0 in the thresholded image, thereby enhancing the foreground/background contrast and highlighting the laser-line features;
(d) after the above three steps, obtaining a binary image containing only the line-laser track, performing an image thinning operation on it, extracting the centerline of the line-laser track with an improved Zhang & Suen thinning algorithm, and then taking the two end points of the single-pixel centerline track for straight-line fitting;
(e) calculating the point-to-line distance to the fitted line for every point with pixel value 1 in the image, taking the point with the largest distance as the fold point, then taking a weighted average of the coordinates of the foreground pixels within a given range of the fold point to obtain the weld-point coordinates, and storing the weld-point coordinates;
(f) extracting the weld points from the start image and the end image according to the above method, constructing a straight line, namely the weld trajectory line, from the start-point and end-point coordinates, and drawing the weld trajectory line in the start image or the end image.
2. The weld track extraction method based on red line laser as claimed in claim 1, characterized by further comprising, after step (f): sending the start and end weld-point data to the robot to control the robot to move and perform welding.
3. The weld track extraction method based on red line laser as claimed in claim 1 or 2, wherein in step (d) the improved Zhang & Suen thinning algorithm extracts the centerline of the line-laser track as follows:
The first step: traverse the foreground pixels, mark every point satisfying all four of the following conditions as a deletion point, and delete it:
2 ≤ N(P1) ≤ 6
S(P1) = 1 or B(P1) ∈ {386, 416, 296, 266, 432, 360, 300, 425, 433, 364}
ValueP2 × ValueP4 × ValueP6 = 0
ValueP4 × ValueP6 × ValueP8 = 0
In the formula, N (P1) represents the number of foreground pixels in 8 elements adjacent to P1, and the clockwise rotation points of adjacent points right above P1 are sequentially P1, P2, P3, P4, P5, P6, P7, P8 and P9, that is, the adjacent 8 elements; s (P1) represents the number of times that the image gradation value of the corresponding point appears from 0 to 1 during clockwise rotation from the point immediately above P1 to P2 to P9 to P2, where 0 represents the image background gradation value and 1 represents the image foreground gradation value; valueP2Representing the gray Value, of the image at the P2 positionP4Representing the gray Value, of the image at the P4 positionP6Representing the gray Value, of the image at the P6 positionP8Representing the image gray scale value at the P8 location; b (P1) is a binary sequence of pixel values of P1, P2, P3, P4, P5, P6, P7, P8, P9 converted to decimal values,namely:
B(P1) = ValueP1·2^8 + ValueP2·2^7 + ValueP3·2^6 + ValueP4·2^5 + ValueP5·2^4 + ValueP6·2^3 + ValueP7·2^2 + ValueP8·2^1 + ValueP9·2^0
The second step: traverse the foreground pixels, mark every point satisfying all four of the following conditions as a deletion point, and delete it:
2 ≤ N(P1) ≤ 6
S(P1) = 1 or B(P1) ∈ {386, 416, 296, 266, 432, 360, 300, 425, 433, 364}
ValueP2 × ValueP4 × ValueP8 = 0
ValueP2 × ValueP6 × ValueP8 = 0
Iterate these two steps in a loop until every point satisfying the two-step conditions has been deleted; the thinned image is then a single-pixel image, and the line-laser track is a single-pixel track.
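The two-step iteration above can be sketched in Python as follows. The extra B(P1) set is taken from the claim; the neighbor ordering (P2..P9 clockwise from the pixel directly above P1) and treating P1 as the most significant bit of B(P1) are assumptions, though consistent with the claim's formula and with the fact that every listed exception value is ≥ 256.

```python
import numpy as np

# B(P1) exception set from the claim.
EXTRA = {386, 416, 296, 266, 432, 360, 300, 425, 433, 364}

def thin(img):
    """Two-subiteration Zhang & Suen thinning with the claimed B(P1) exception."""
    img = img.copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_del = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    # P2..P9, clockwise from the pixel directly above P1.
                    n = [img[y-1, x], img[y-1, x+1], img[y, x+1],
                         img[y+1, x+1], img[y+1, x], img[y+1, x-1],
                         img[y, x-1], img[y-1, x-1]]
                    N = sum(n)                                # N(P1)
                    S = sum(n[i] == 0 and n[(i+1) % 8] == 1   # S(P1): 0->1 transitions
                            for i in range(8))
                    B = int("".join(str(int(v)) for v in [1] + n), 2)  # B(P1), P1 = MSB
                    P2, P4, P6, P8 = n[0], n[2], n[4], n[6]
                    if step == 0:
                        ok = P2 * P4 * P6 == 0 and P4 * P6 * P8 == 0
                    else:
                        ok = P2 * P4 * P8 == 0 and P2 * P6 * P8 == 0
                    if 2 <= N <= 6 and (S == 1 or B in EXTRA) and ok:
                        to_del.append((y, x))
            for y, x in to_del:
                img[y, x] = 0
                changed = True
    return img

# Thinning a 3-pixel-wide bar leaves a single-pixel centerline.
bar = np.zeros((12, 24), dtype=np.uint8)
bar[4:7, 2:22] = 1
out = thin(bar)
```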
4. The weld track extraction method based on red line laser as claimed in claim 1 or 2, wherein the specific method of step (e) is as follows: according to the line fitted in step (d), calculate the point-to-line distance for every point with pixel value 1 in the image and take the point with the largest distance, namely the fold point; draw a circle centered on the fold point with radius R, store the coordinates of the points with pixel value 1 inside the circle, sum these coordinates, and average them to obtain the final weld-point coordinates.
5. The weld track extraction method based on red line laser as claimed in claim 1 or 2, wherein sending the start and end weld-point data to the robot, to control the robot to move and weld, requires converting from the pixel coordinate system to the robot coordinate system, which is done as follows:
The first step: convert the pixel coordinate system to the world coordinate system; the conversion formula is:
s·[u, v, 1]^T = M·(R·P_world + T)
where P_world = [X_w, Y_w, Z_w]^T is the coordinate point in the world coordinate system, R is the rotation matrix of the camera coordinate system relative to the world coordinate system, M is the camera's intrinsic matrix, s is the value of the world coordinate point along the Z direction of the camera coordinate system, [u, v, 1]^T is the coordinate point in the pixel coordinate system, and T is the translation matrix of the camera coordinate system relative to the world coordinate system;
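The inversion of this projection can be sketched as below. All numbers (intrinsics, extrinsics, depth) are illustrative placeholders, not calibration results from the patent.

```python
import numpy as np

# Pinhole model: s * [u, v, 1]^T = M (R * P_world + T)
M = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])             # camera intrinsic matrix
R = np.eye(3)                               # camera rotation w.r.t. world
T = np.zeros((3, 1))                        # camera translation w.r.t. world
s = 500.0                                   # depth along the camera Z axis

uv1 = np.array([[400.0], [300.0], [1.0]])   # pixel point as [u, v, 1]^T

# Invert the projection: P_world = R^-1 (s * M^-1 [u, v, 1]^T - T)
P_world = np.linalg.inv(R) @ (s * np.linalg.inv(M) @ uv1 - T)
```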
The second step: convert the world coordinate system to the robot coordinate system; the conversion formula is:
P_robot = T_base_end · T_end_cam · T_cam_world · P_world
where P_robot is the coordinate point in the robot coordinate system; T_base_end is the transformation matrix from the robot end-effector coordinate system to the robot base coordinate system; T_end_cam is the transformation matrix from the camera coordinate system to the robot end-effector coordinate system, obtained by hand-eye calibration of the robot; T_cam_world is the transformation matrix from the world coordinate system to the camera coordinate system; and P_world is the coordinate point in the world coordinate system.
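The transform chain can be sketched with 4×4 homogeneous matrices as below. The three matrices are pure-translation placeholders; in practice the end-effector pose comes from robot kinematics, the camera-to-end-effector transform from hand-eye calibration, and the world-to-camera transform from extrinsic calibration.

```python
import numpy as np

def translation(x, y, z):
    """Homogeneous 4x4 transform that only translates (placeholder)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

base_T_end = translation(100.0, 0.0, 0.0)   # end effector in base frame
end_T_cam = translation(0.0, 50.0, 0.0)     # camera in end-effector frame
cam_T_world = translation(0.0, 0.0, 10.0)   # world in camera frame

P_world = np.array([1.0, 2.0, 3.0, 1.0])    # homogeneous world point
# Chain the transforms right-to-left to land in the robot base frame.
P_robot = base_T_end @ end_T_cam @ cam_T_world @ P_world
```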
CN202011256130.3A 2020-11-11 2020-11-11 Weld track extraction method based on red line laser Active CN112381783B (en)

Publications (2)

Publication Number Publication Date
CN112381783A true CN112381783A (en) 2021-02-19
CN112381783B CN112381783B (en) 2022-10-11




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant