CN115990896A - Welding robot control system based on image recognition - Google Patents


Info

Publication number: CN115990896A (application CN202310117201.9A); granted as CN115990896B
Authority: CN (China)
Prior art keywords: detection window, image, coordinates, welding, pixel
Legal status: Granted; currently Active
Other languages: Chinese (zh)
Inventors: 武慧红, 黄志佳, 汪渊博, 杨丹, 冯燕萍, 陈华
Original and current assignee: Zhejiang Aosang Machinery Equipment Co., Ltd.
Application CN202310117201.9A filed by Zhejiang Aosang Machinery Equipment Co., Ltd.; publication of CN115990896A; application granted; publication of CN115990896B

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Manipulator (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention belongs to the field of control and discloses a welding robot control system based on image recognition. The system comprises an image processing module, a calculation module, and a control module: the image processing module processes the line laser image and acquires the coordinates of the weld seam; the calculation module calculates the target coordinates of the welding head from the weld seam coordinates; and the control module moves the welding head to the target coordinates. When processing the line laser image, the invention obtains the weld seam coordinates using only graying, filtering, and window detection operations. Compared with the prior art, it eliminates steps such as binarization, morphological processing, and thinning, which speeds up acquisition of the weld seam coordinates and thus improves the welding efficiency of the welding robot.

Description

Welding robot control system based on image recognition
Technical Field
The invention relates to the field of control, in particular to a welding robot control system based on image recognition.
Background
During welding, a welding robot needs to acquire the position of the weld seam in real time so that the position of the welding head can be adjusted in real time, thereby realizing automatic welding. In the prior art, the welding head is controlled to automatically follow the position of the weld seam.
However, when an existing welding robot processes the line laser image, it still needs steps such as binarization, morphological processing, and thinning after graying and noise reduction in order to obtain the weld seam coordinates. These additional steps take a relatively long time and reduce the welding efficiency of the robot.
Disclosure of Invention
The invention aims to disclose a welding robot control system based on image recognition, addressing the problem that existing welding robots require many line laser image processing steps during weld seam welding, which takes a long time and reduces welding efficiency.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a welding robot control system based on image recognition comprises an image processing module, a calculating module and a control module;
the image processing module is used for calculating the linear laser image and acquiring the coordinates of the welding line;
the calculation module is used for calculating the target coordinates of the welding head according to the coordinates of the welding seam;
the control module is used for moving the welding head to the target coordinates;
the method for calculating the linear laser image to obtain the coordinates of the welding line comprises the following steps:
S1, graying the line laser image to obtain a gray image, and establishing a rectangular coordinate system with the lower-left corner of the gray image as the origin;
S2, using a 1 × vrn detection window in the gray image to detect the pixel at the center of the window, where vrn is the number of rows of the detection window (an odd integer) and 1 is the number of columns;
S3, judging whether the detection window meets the preset detection condition; if so, go to S4, otherwise go to S6;
S4, adding 1 to the abscissa of the detection window center in S2 to obtain the abscissa of the new window center; if this abscissa is larger than the number of columns of the gray image, go to S7, otherwise go to S5;
S5, obtaining the ordinate of the new detection window center according to a preset rule, thereby obtaining the new window center, and returning to S2;
S6, subtracting 1 from the ordinate of the detection window center in S2 to obtain the new window center, and returning to S2;
S7, denoting the pixels at the centers of 3 detection windows with consecutive abscissas that meet the preset detection condition as px1, px2, and px3, respectively;
S8, calculating the slope k1 between px1 and px2 and the slope k2 between px2 and px3; if |k1 − k2| is smaller than the set slope threshold, k1 is negative, and k2 is positive, px2 is taken as a change point, whose coordinates are denoted (xchg, ychg);
S9, calculating the ordinate yaim of the weld seam:
[formula given as image BDA0004078988390000021 in the original]
The coordinates of the weld seam are (xchg, yaim).
Preferably, the system further comprises an emission module;
the emission module is used for emitting line laser onto the weld seam.
Preferably, the system further comprises an imaging module;
the imaging module is used for imaging the line laser irradiated on the weld seam to obtain the line laser image.
Preferably, calculating the target coordinates of the welding head according to the coordinates of the weld seam comprises:
converting the coordinates of the weld seam from the image coordinate system to the world coordinate system to obtain the target coordinates.
Preferably, graying the line laser image to obtain a gray image comprises:
the graying function is:
g(x, y) = w1 × r(x, y) + w2 × g(x, y) + w3 × b(x, y)
where g(x, y) on the left is the gray value of the pixel at coordinates (x, y) in the gray image; w1, w2, and w3 are the weights of r(x, y), g(x, y), and b(x, y), respectively; and r(x, y), g(x, y), and b(x, y) are the pixel values of the pixel at coordinates (x, y) in the red, green, and blue component images of the RGB color space.
Preferably, judging whether the detection window meets the preset detection condition comprises:
S31, filtering each pixel in the detection window to obtain a filtered pixel set D;
S32, calculating the minimum value miy and the maximum value may of the ordinates of the pixels in set D whose pixel values are abrupt;
S33, obtaining the number nbgt of pixels in set D whose pixel values are larger than the set pixel value threshold;
S34, calculating the detection coefficient:
[formula given as image BDA0004078988390000031 in the original]
where dtvcef is the detection coefficient; ysthr is the preset ordinate difference value; fcthr is the set variance comparison value; bgt is the set of pixels in D whose pixel values are larger than the set pixel value threshold; gi is the gray value of pixel i in bgt; α is the weight of the gray-value difference, β is the weight of the count, and η is the weight of the variance;
S35, comparing the detection coefficient with the set coefficient threshold: if the detection coefficient is larger than the threshold, the detection window meets the preset detection condition; otherwise it does not.
Preferably, obtaining the ordinate of the new detection window center according to a preset rule comprises:
letting d denote the ordinate of the detection window center in S2, the ordinate d′ of the new detection window center is:
[formula given as image BDA0004078988390000032 in the original]
where θ denotes an adaptive range parameter and ⌈·⌉ denotes rounding up.
Preferably, the calculation function of θ is:
[formula given as image BDA0004078988390000034 in the original]
where ns is the width that the line laser leaves on a plane when emitted vertically from height Q, E is the maximum effective width of the line laser, and bsn is a constant.
When processing the line laser image, the invention obtains the weld seam coordinates using only graying, filtering, and window detection operations. Compared with the prior art, it eliminates steps such as binarization, morphological processing, and thinning, which speeds up acquisition of the weld seam coordinates and thus improves the welding efficiency of the welding robot.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered limiting of its scope; other related drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic diagram of a welding robot control system based on image recognition according to the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In one embodiment as shown in fig. 1, the present invention provides a welding robot control system based on image recognition, which includes an image processing module 101, a calculating module 201, and a control module 301.
The image processing module 101 is used for processing the line laser image and acquiring the coordinates of the weld seam.
The calculation module 201 is configured to calculate the target coordinates of the welding head according to the coordinates of the weld seam.
In one embodiment, calculating the target coordinates of the welding head from the coordinates of the weld seam includes:
converting the coordinates of the weld seam from the image coordinate system to the world coordinate system to obtain the target coordinates.
The control module 301 is used for moving the welding head to the target coordinates;
the method for calculating the linear laser image to obtain the coordinates of the welding line comprises the following steps:
s1, carrying out graying treatment on the linear laser image to obtain a gray image, and establishing a rectangular coordinate system by taking the lower left corner of the gray image as the origin of coordinates.
In one embodiment, graying the line laser image to obtain a gray image includes:
the graying function is:
g(x, y) = w1 × r(x, y) + w2 × g(x, y) + w3 × b(x, y)
where g(x, y) on the left is the gray value of the pixel at coordinates (x, y) in the gray image; w1, w2, and w3 are the weights of r(x, y), g(x, y), and b(x, y), respectively; and r(x, y), g(x, y), and b(x, y) are the pixel values of the pixel at coordinates (x, y) in the red, green, and blue component images of the RGB color space.
After graying, the number of color channels to be calculated can be reduced, thereby speeding up the operation.
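The weighted graying step above can be sketched as follows. The patent leaves w1, w2, w3 unspecified; the ITU-R BT.601 luma weights used here are a common choice, not the patent's values.

```python
# Illustrative graying step. The weights are the common BT.601 luma
# coefficients, used here as an assumption since the patent does not
# fix w1, w2, w3.

def to_gray(rgb_image, w=(0.299, 0.587, 0.114)):
    """Convert an RGB image (list of rows of (r, g, b) tuples) into a
    gray image: gray(x, y) = w1*r + w2*g + w3*b."""
    return [[w[0] * r + w[1] * g + w[2] * b for (r, g, b) in row]
            for row in rgb_image]

gray = to_gray([[(255, 255, 255), (0, 0, 0), (255, 0, 0)]])
```

A white pixel maps to 255, black to 0, and a pure-red pixel to 0.299 × 255 under these weights.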
S2, using a detection window of size 1 × vrn in the gray image to detect the pixel at the center of the window, where vrn is the number of rows of the detection window (an odd integer) and 1 is the number of columns.
In the present invention, detection proceeds from top to bottom and from left to right. The detection window does not visit every pixel: the region containing the weld seam is first detected in the first column, and detection then extends continuously to the right from that region until the complete weld seam region is obtained. Compared with existing detection methods, no binarization or similar computation is required, so the computation is more efficient.
S3, judging whether the detection window meets preset detection conditions, if so, entering S4, and if not, entering S6.
As the center of the detection window moves from top to bottom, the pixels inside the window are continuously updated; when most of the pixels belonging to the line laser fall inside the window, the detection condition is met and the laser pixels can be identified accurately.
In one embodiment, determining whether the detection window meets a preset detection condition includes:
S31, filtering each pixel in the detection window to obtain a filtered pixel set D;
S32, calculating the minimum value miy and the maximum value may of the ordinates of the pixels in set D whose pixel values are abrupt;
S33, obtaining the number nbgt of pixels in set D whose pixel values are larger than the set pixel value threshold;
S34, calculating the detection coefficient:
[formula given as image BDA0004078988390000051 in the original]
where dtvcef is the detection coefficient; ysthr is the preset ordinate difference value; fcthr is the set variance comparison value; bgt is the set of pixels in D whose pixel values are larger than the set pixel value threshold; gi is the gray value of pixel i in bgt; α is the weight of the gray-value difference, β is the weight of the count, and η is the weight of the variance;
S35, comparing the detection coefficient with the set coefficient threshold: if the detection coefficient is larger than the threshold, the detection window meets the preset detection condition; otherwise it does not.
Compared with the prior art, only the pixels that enter the detection window need to be filtered rather than all pixels in the image, which greatly reduces the number of pixels involved in filtering and speeds up acquisition of the weld seam coordinates. Filtering reduces the influence of noise on the detection coefficient and thus improves its accuracy when detecting pixels belonging to the line laser.
In calculating the detection coefficient, the ordinates of the pixels with abrupt pixel values, the number of pixels above the pixel value threshold, and the variance of their pixel values are combined by weighting to obtain a coefficient that detects the line laser accurately. Using the ordinate difference reduces the influence of holes in the line laser region on detection accuracy: if only pixel values were considered, laser light in some regions may be deflected on reflection and never reach the imaging device, producing holes, so the detection coefficient computed for such a region would not accurately represent the line laser region.
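The exact detection-coefficient formula is reproduced only as an image in the original. The sketch below is a guessed placeholder that merely wires together the quantities the text names (ordinate spread may − miy, count nbgt, and the variance of the gray values in bgt) with the weights α, β, η; it is NOT the patented formula, only an illustration of how such a weighted combination could be computed.

```python
# Placeholder sketch of S31-S35. The combination of terms below is an
# assumption; the patent's actual formula is only available as an image.

def detection_coefficient(miy, may, nbgt, bgt_grays,
                          ysthr, fcthr, alpha, beta, eta):
    """Combine the three described quantities with weights alpha/beta/eta:
    ordinate spread (may - miy) vs. ysthr, the count nbgt, and the
    gray-value variance of bgt vs. fcthr."""
    n = len(bgt_grays)
    mean = sum(bgt_grays) / n
    variance = sum((g - mean) ** 2 for g in bgt_grays) / n
    # One plausible (hypothetical) weighted combination:
    return (alpha * (may - miy) / ysthr
            + beta * nbgt
            + eta * fcthr / (fcthr + variance))

dtvcef = detection_coefficient(miy=10, may=18, nbgt=6,
                               bgt_grays=[200, 210, 205, 198, 207, 202],
                               ysthr=8, fcthr=25.0,
                               alpha=0.4, beta=0.1, eta=0.5)
window_matches = dtvcef > 1.0  # S35: compare against a set threshold
```

The threshold comparison in the last line mirrors S35: the window meets the detection condition only when the coefficient exceeds the set coefficient threshold.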
In one embodiment, for a pixel cgpx, if the gray-value difference between cgpx and the pixel directly above or directly below it is greater than a set abrupt-change threshold, then cgpx is a pixel whose pixel value is abrupt.
In the invention, a pixel with an abrupt pixel value has a relatively high probability of being the starting or ending point of the line laser; therefore, the larger the ordinate difference of the abrupt pixels in the detection window, the larger the detection coefficient.
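The abrupt-pixel test described above can be sketched directly. The threshold value 60 is an illustrative assumption, and the sketch indexes the image as gray[row][col] with rows growing downward (the patent's coordinate system has its origin at the lower-left corner).

```python
# Sketch of the abrupt-pixel test: a pixel is "abrupt" when its gray
# value differs from the pixel directly above or directly below it by
# more than a set threshold. The threshold of 60 is assumed.

def is_abrupt(gray, x, y, abrupt_threshold=60):
    """gray is indexed as gray[row][col]; rows grow downward here."""
    h = len(gray)
    for dy in (-1, 1):  # directly above, directly below
        ny = y + dy
        if 0 <= ny < h and abs(gray[ny][x] - gray[y][x]) > abrupt_threshold:
            return True
    return False

col = [[10], [15], [200], [210]]  # one-column image: laser starts at row 2
```

In this toy column, row 2 is abrupt (the jump from 15 to 200 marks the start of the laser stripe), while rows inside the background or inside the stripe are not.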
In one embodiment, filtering each pixel in the detection window to obtain the filtered pixel set D includes:
S311, for a pixel rtpx, judging whether rtpx already belongs to the filtered set; if so, rtpx is not filtered again; if not, calculating the filtering auxiliary value of rtpx;
S312, if the filtering auxiliary value is greater than the set auxiliary value threshold, filtering with the following algorithm:
[formula given as image BDA0004078988390000061 in the original]
where flt(rtpx) is the pixel value of rtpx after filtering; nertpx is the square region centered on rtpx with side length Θ; ctg is the number of pixels in nertpx whose pixel values are abrupt; λ is a proportional value, λ ∈ (0, 1); dj and drtpx are the distances from pixel j and from rtpx, respectively, to the center of the detection window; gtr is a set distance threshold; and gj is the gray value of pixel j;
if the filtering auxiliary value is smaller than or equal to the set auxiliary value threshold, filtering with the following algorithm:
[formula given as image BDA0004078988390000062 in the original]
where flt(rtpx) is the pixel value of rtpx after filtering; mdfv denotes sorting the gray values of the pixels in the neighborhood of rtpx from small to large and taking the pixel value ranked 5th; and the nei terms are the pixels of the neighborhood of rtpx.
S313, storing rtpx into the filtered set.
Specifically, the filtered set prevents repeated filtering of the same pixel, which improves filtering speed. In the filtering itself, the filtering auxiliary value is computed first and the corresponding filtering method is selected according to it; this avoids using only a relatively complex or only a relatively simple filtering algorithm, reducing the overall filtering time while preserving the filtering quality.
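The simpler of the two branches, taking the 5th value of the sorted neighborhood gray values, reads like an ordinary median filter over a 3 × 3 neighborhood (an assumption: the patent's subscripts on the nei terms are lost, but a 5th-of-9 selection is exactly a 3 × 3 median). A minimal sketch:

```python
# Sketch of the simpler filtering branch: the 5th smallest of the 9
# neighborhood gray values, i.e. a 3x3 median filter. Interpreting the
# neighborhood as 3x3 is an assumption.

def median_filter_3x3(gray, x, y):
    """Median of the 3x3 neighborhood of (x, y); gray[row][col].
    Border pixels are assumed not to be filtered in this sketch."""
    neigh = [gray[y + dy][x + dx]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    neigh.sort()
    return neigh[4]  # 5th smallest of the 9 neighborhood pixels

img = [[10, 10, 10],
       [10, 255, 10],   # isolated bright noise pixel
       [10, 10, 10]]
filtered = median_filter_3x3(img, 1, 1)
```

The isolated noise value 255 is replaced by the neighborhood median 10, which is why this branch is suited to fast noise suppression away from the laser's start and end points.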
In one embodiment, the filtering auxiliary value is calculated as:
[formula given as image BDA0004078988390000071 in the original]
where flthp is the calculation function of the filtering auxiliary value, and nyflr is the number of pixels in the square region of side length θ centered on rtpx whose vertical-direction gradient value differs from that of rtpx by less than a set constant.
When the filtering auxiliary value is large, many pixels in the square region have gradient values close to that of rtpx, so rtpx is likely to be the starting or ending point of the line laser, and a filtering method that preserves start-point and end-point features is used. Specifically, different summation coefficients are adaptively computed for the different pixels in the square region, and the number of pixels with abrupt pixel values is also taken into account, so that the start-point and end-point features are retained.
When the filtering auxiliary value is small, a faster filtering method is used instead, reducing the overall filtering time.
S4, adding 1 to the abscissa of the detection window center in S2 to obtain the abscissa of the new window center; if this abscissa is larger than the number of columns of the gray image, go to S7, otherwise go to S5.
S5, obtaining the ordinate of the new detection window center according to a preset rule, thereby obtaining the new window center, and returning to S2.
In one embodiment, obtaining the ordinate of the new detection window center according to a preset rule includes:
letting d denote the ordinate of the detection window center in S2, the ordinate d′ of the new detection window center is:
[formula given as image BDA0004078988390000072 in the original]
where θ denotes an adaptive range parameter and ⌈·⌉ denotes rounding up.
In one embodiment, the calculation function of θ is:
[formula given as image BDA0004078988390000074 in the original]
where ns is the width that the line laser leaves on a plane when emitted vertically from height Q, E is the maximum effective width of the line laser, and bsn is a constant.
In the above embodiment, after the abscissa is increased by 1, the adaptive range parameter makes the starting ordinate adapt to the center of the previous detection window. Because the line laser occupies only a small proportion of the whole image, this reduces the number of pixels entering the detection window while still guaranteeing that the line laser is detected.
In one embodiment, vrn is equal to the value of ns.
S6, subtracting 1 from the ordinate of the detection window center in S2 to obtain the new window center, and returning to S2.
Subtracting 1 from the ordinate means that the detection window slides downward.
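The column scan of S2 through S6 can be sketched as a simple loop: slide the 1 × vrn window down the current column until the detection condition holds, then advance one column and restart from an adaptively chosen ordinate. The detection condition and the S5 rule are stubbed out as caller-supplied functions, since in the patent they are the coefficient test of S31-S35 and the image-only formula of the preset rule.

```python
# Hedged sketch of the S2-S6 column scan. meets_condition and
# next_start_y are stand-ins for the patent's S3 test and S5 rule.

def scan_columns(n_cols, start_y, meets_condition, next_start_y):
    """Return, per column, the window center that met the condition."""
    centers = []
    x, y = 0, start_y
    while x < n_cols:            # S4: stop when abscissa exceeds columns
        if meets_condition(x, y):
            centers.append((x, y))
            x += 1               # S4: move one column to the right
            y = next_start_y(y)  # S5: adaptive starting ordinate
        else:
            y -= 1               # S6: slide the window down one row
    return centers

laser_rows = {0: 7, 1: 6, 2: 6}  # toy "laser" ordinate per column
found = scan_columns(3, 9,
                     lambda x, y: y == laser_rows[x],
                     lambda prev_y: prev_y + 2)
```

In the toy run, each column's search restarts slightly above the previous hit (prev_y + 2 is an arbitrary stand-in for the adaptive rule), so only a few rows per column are visited, which is the efficiency argument the patent makes.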
S7, denoting the pixels at the centers of 3 detection windows with consecutive abscissas that meet the preset detection condition as px1, px2, and px3, respectively.
S8, calculating the slope k1 between px1 and px2 and the slope k2 between px2 and px3; if |k1 − k2| is smaller than the set slope threshold, k1 is negative, and k2 is positive, px2 is taken as a change point, whose coordinates are denoted (xchg, ychg).
Since the ordinate of the line laser at the weld seam is significantly lower than on either side, the pixels belonging to the weld seam region can be detected from the signs of the slopes and the slope difference.
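The S8 change-point test translates directly into code: for three consecutive matched centers, px2 is a change point when the two slopes nearly agree in magnitude but flip from negative to positive (the weld seam is a local minimum of the laser line). The slope threshold below is an illustrative value.

```python
# Sketch of the S8 change-point test. The slope_threshold is assumed.

def change_point(px1, px2, px3, slope_threshold):
    """Return px2 as (x_chg, y_chg) if it is a change point, else None."""
    k1 = (px2[1] - px1[1]) / (px2[0] - px1[0])
    k2 = (px3[1] - px2[1]) / (px3[0] - px2[0])
    if abs(k1 - k2) < slope_threshold and k1 < 0 and k2 > 0:
        return px2
    return None

# A V-shaped dip: slopes -3 then +3 around px2.
chg = change_point((4, 12), (5, 9), (6, 12), slope_threshold=6.5)
```

Note that for a symmetric V-shaped dip |k1 − k2| equals 2|k1|, so the slope threshold has to be chosen large enough to admit the expected dip steepness while still rejecting jagged noise.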
S9, calculating the ordinate yaim of the weld seam:
[formula given as image BDA0004078988390000081 in the original]
The coordinates of the weld seam are (xchg, yaim).
When processing the line laser image, the invention obtains the weld seam coordinates using only graying, filtering, and window detection operations. Compared with the prior art, it eliminates steps such as binarization, morphological processing, and thinning, which speeds up acquisition of the weld seam coordinates and thus improves the welding efficiency of the welding robot.
Preferably, the system further comprises an emission module;
the emission module is used for emitting line laser onto the weld seam.
Preferably, the system further comprises an imaging module;
the imaging module is used for imaging the line laser irradiated on the weld seam to obtain the line laser image.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (8)

1. A welding robot control system based on image recognition, characterized by comprising an image processing module, a calculation module, and a control module;
the image processing module is used for processing the line laser image and acquiring the coordinates of the weld seam;
the calculation module is used for calculating the target coordinates of the welding head according to the coordinates of the weld seam;
the control module is used for moving the welding head to the target coordinates;
processing the line laser image to obtain the coordinates of the weld seam comprises the following steps:
S1, graying the line laser image to obtain a gray image, and establishing a rectangular coordinate system with the lower-left corner of the gray image as the origin;
S2, using a 1 × vrn detection window in the gray image to detect the pixel at the center of the window, where vrn is the number of rows of the detection window (an odd integer) and 1 is the number of columns;
S3, judging whether the detection window meets the preset detection condition; if so, go to S4, otherwise go to S6;
S4, adding 1 to the abscissa of the detection window center in S2 to obtain the abscissa of the new window center; if this abscissa is larger than the number of columns of the gray image, go to S7, otherwise go to S5;
S5, obtaining the ordinate of the new detection window center according to a preset rule, thereby obtaining the new window center, and returning to S2;
S6, subtracting 1 from the ordinate of the detection window center in S2 to obtain the new window center, and returning to S2;
S7, denoting the pixels at the centers of 3 detection windows with consecutive abscissas that meet the preset detection condition as px1, px2, and px3, respectively;
S8, calculating the slope k1 between px1 and px2 and the slope k2 between px2 and px3; if |k1 − k2| is smaller than the set slope threshold, k1 is negative, and k2 is positive, px2 is taken as a change point, whose coordinates are denoted (xchg, ychg);
S9, calculating the ordinate yaim of the weld seam:
[formula given as image FDA0004078988380000011 in the original]
The coordinates of the weld seam are (xchg, yaim).
2. The welding robot control system of claim 1, further comprising an emission module,
the emission module being used for emitting line laser onto the weld seam.
3. The welding robot control system of claim 2, further comprising an imaging module,
the imaging module being used for imaging the line laser irradiated on the weld seam to obtain the line laser image.
4. The welding robot control system as recited in claim 1, wherein calculating the target coordinates of the welding head from the coordinates of the weld seam comprises:
converting the coordinates of the weld seam from the image coordinate system to the world coordinate system to obtain the target coordinates.
5. The welding robot control system as recited in claim 1, wherein graying the line laser image to obtain a gray image comprises:
the graying function is:
g(x, y) = w1 × r(x, y) + w2 × g(x, y) + w3 × b(x, y)
where g(x, y) on the left is the gray value of the pixel at coordinates (x, y) in the gray image; w1, w2, and w3 are the weights of r(x, y), g(x, y), and b(x, y), respectively; and r(x, y), g(x, y), and b(x, y) are the pixel values of the pixel at coordinates (x, y) in the red, green, and blue component images of the RGB color space.
6. The welding robot control system as recited in claim 1, wherein determining whether the detection window meets a preset detection condition comprises:
s31, respectively carrying out filtering treatment on each pixel point in the detection window to obtain a filtered pixel point set D;
s32, calculating a minimum value miy and a maximum value mayof the ordinate of the pixel points with abrupt pixel values in the set D;
s33, acquiring the number nbgt of pixel points with pixel values larger than a set pixel value threshold in the set D;
s34, calculating a detection coefficient:
Figure FDA0004078988380000021
wherein dtvcef is a detection coefficient, ysthr is a preset vertical coordinate difference value, fcthr is a set variance comparison value, bgt is a set of pixel points in the set D, the pixel value of which is greater than a set pixel value threshold value, g i The gray value of the pixel point i in bgt; alpha is the weight of the gray value difference, beta is the weight of the number, and eta is the weight of the variance;
and S35, comparing the detection coefficient with a set coefficient threshold: if the detection coefficient is greater than the set coefficient threshold, the detection window meets the preset detection condition; otherwise, the detection window does not meet the preset detection condition.
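The exact detection-coefficient formula survives only as an image placeholder in this publication. The sketch below is a plausible reading of the variable list (a weighted sum of an ordinate-spread term, the bright-pixel count, and a variance term), not the patent's actual equation; the combination in S34 is an explicit assumption.

```python
from statistics import pvariance

def detection_coefficient(points, pix_thr, ysthr, fcthr,
                          alpha=1.0, beta=1.0, eta=1.0):
    """points: list of (ordinate, gray_value) pairs, i.e. the filtered set D.
    Abrupt-value filtering for S32 is assumed already applied."""
    miy = min(y for y, _ in points)               # S32: min ordinate
    may = max(y for y, _ in points)               # S32: max ordinate
    bgt = [g for _, g in points if g > pix_thr]   # S33: bright pixels
    nbgt = len(bgt)
    var = pvariance(bgt) if len(bgt) > 1 else 0.0
    # S34 (assumed combination): spread term + count term + variance term
    return (alpha * (may - miy) / ysthr
            + beta * nbgt
            + eta * var / fcthr)

def window_ok(dtvcef, coef_thr):
    """S35: the window meets the condition iff dtvcef > coef_thr."""
    return dtvcef > coef_thr

pts = [(1, 10), (5, 200), (7, 210)]
c = detection_coefficient(pts, pix_thr=100, ysthr=6, fcthr=10)
print(c)  # 5.5
```

Here `(7 - 1) / 6 = 1.0`, two pixels exceed the threshold, and `pvariance([200, 210]) = 25`, giving `1.0 + 2 + 2.5 = 5.5`.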
7. The welding robot control system as recited in claim 1, wherein the acquiring the ordinate of the new detection window center according to the preset rule comprises:
let d denote the ordinate of the detection window center in S2; the ordinate d' of the new detection window center is:
(the formula for d' is rendered as an image in the original publication)
wherein θ represents an adaptive range parameter and ⌈·⌉ denotes rounding up (the ceiling function).
8. The welding robot control system as recited in claim 7, wherein the θ calculation function is:
(the θ calculation function is rendered as an image in the original publication)
where ns is the width of the stripe that the line laser leaves on the plane when it is emitted vertically from height Q, E is the maximum effective width of the line laser, and bsn is a constant.
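The formulas for d' and θ in claims 7 and 8 are rendered as images here, so they cannot be reconstructed exactly. As a hedged sketch, the claim's ceiling operator suggests a search that recenters the window in integer steps around the previous center d; the alternating up/down pattern below is an assumption, and θ is taken as a given parameter rather than computed from ns, E, and bsn.

```python
import math

def candidate_centers(d, theta, n=3):
    """Return candidate ordinates d' alternating above and below d,
    in integer steps of ceil(theta). The search pattern is hypothetical."""
    step = math.ceil(theta)
    out = []
    for k in range(1, n + 1):
        out.append(d + k * step)  # candidate below/above the old center
        out.append(d - k * step)
    return out

print(candidate_centers(100, 2.4, n=2))  # [103, 97, 106, 94]
```

Each candidate would then be tested against the detection condition of claim 6 until a window containing the laser stripe is found.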
CN202310117201.9A 2023-02-15 2023-02-15 Welding robot control system based on image recognition Active CN115990896B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310117201.9A CN115990896B (en) 2023-02-15 2023-02-15 Welding robot control system based on image recognition


Publications (2)

Publication Number Publication Date
CN115990896A true CN115990896A (en) 2023-04-21
CN115990896B CN115990896B (en) 2023-06-23

Family

ID=85993442


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106425181A (en) * 2016-10-24 2017-02-22 南京工业大学 Curve welding seam welding technology based on line structured light
CN106952281A (en) * 2017-05-15 2017-07-14 上海交通大学 A kind of method that weld profile feature recognition and its welding bead are planned in real time
US20180247150A1 (en) * 2017-02-24 2018-08-30 Canon Kabushiki Kaisha Information processing device, information processing method, and article manufacturing method
CN108568624A (en) * 2018-03-29 2018-09-25 东风贝洱热系统有限公司 A kind of mechanical arm welding system and welding method based on image procossing
US20210197384A1 (en) * 2019-12-26 2021-07-01 Ubtech Robotics Corp Ltd Robot control method and apparatus and robot using the same
CN113705590A (en) * 2021-10-28 2021-11-26 江苏南通元辰钢结构制造有限公司 Steel structure intelligent polishing control method based on artificial intelligence




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant