CN113319411A - Visual positioning method and system and computing equipment - Google Patents

Visual positioning method and system and computing equipment

Info

Publication number
CN113319411A
Authority
CN
China
Prior art keywords
weld
image
coordinates
workpiece
determining
Prior art date
Legal status
Pending
Application number
CN202110239524.6A
Other languages
Chinese (zh)
Inventor
刘坚
陈圣峰
陈兵
苏重阳
Current Assignee
Hunan University
Original Assignee
Hunan University
Priority date
Filing date
Publication date
Application filed by Hunan University
Priority to CN202110239524.6A
Publication of CN113319411A


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B23MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23KSOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
    • B23K9/00Arc welding or cutting
    • B23K9/32Accessories

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Plasma & Fusion (AREA)
  • Mechanical Engineering (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a visual positioning method, executed in a computing device, comprising the following steps: acquiring a weld image of the surface of a workpiece; determining weld feature points and their image coordinates on the weld image based on a weld extraction template; and determining the world coordinates of the weld feature points based on the image coordinates, so as to weld the workpiece based on the world coordinates of the weld feature points. The invention also discloses a visual positioning system, a visual positioning method executed in the system, and a computing device. The visual positioning method can accurately locate fillet weld coordinates and improves the efficiency of weld extraction and weld positioning.

Description

Visual positioning method and system and computing equipment
Technical Field
The invention relates to the technical field of intelligent welding, and in particular to a visual positioning method, a visual positioning system and a computing device.
Background
At present, workpieces are mostly welded manually. Manual welding is inefficient and limits productivity; moreover, the welding quality is uneven and the finished welds are unattractive. Therefore, in fields such as shipbuilding, aerospace, engineering machinery and rail transit, intelligent welding by welding robots is of great significance for improving welding quality and welding efficiency.
When a welding robot is used for intelligent welding, the core problem is how to locate the weld coordinates efficiently and accurately. The traditional approach positions the weld by robot teaching, so the robot can only weld along a preset trajectory; such teaching-based weld positioning fails when the size or placement of the workpiece deviates.
In addition, because fillet welds take many forms, the features of captured fillet weld images vary correspondingly. The prior art lacks a fillet weld extraction and positioning method that is highly general, strongly interference-resistant and highly accurate.
For this reason, it is necessary to provide a visual positioning solution for the weld that solves the problems in the above technical solutions.
Disclosure of Invention
To this end, the present invention provides a visual positioning method, a visual positioning system and a computing device to solve or at least alleviate the above existing problems.
According to a first aspect of the present invention, there is provided a visual positioning method, executed in a computing device, comprising the steps of: acquiring a weld image of the surface of a workpiece; determining weld characteristic points and image coordinates thereof on a weld image based on a weld extraction template; and determining the world coordinates of the weld joint characteristic points based on the image coordinates so as to weld the workpiece based on the world coordinates of the weld joint characteristic points.
Optionally, in the visual positioning method according to the present invention, the step of determining world coordinates of the weld feature point based on the image coordinates includes: determining a first conversion function between the image coordinates and the camera coordinates; converting the image coordinates of the weld characteristic points into camera coordinates according to a first conversion function; determining a second conversion function between the coordinates of the camera and the coordinates of the welding gun at the tail end of the robot body; and converting the camera coordinates into welding gun coordinates according to a second conversion function so as to weld the workpiece based on the welding gun coordinates.
Optionally, in the visual positioning method according to the present invention, the weld is a fillet weld, and before determining the weld feature point on the weld image based on the weld extraction template, the method includes the steps of: acquiring a welding seam included angle of the surface of a workpiece; and generating a welding line extraction template corresponding to the welding line included angle.
Optionally, in the visual positioning method according to the present invention, the step of determining the weld feature points on the weld image based on the weld extraction template includes: extracting a characteristic region on the weld image; and matching the characteristic region with a welding seam extraction template, and determining the characteristic point with the highest contact ratio with the welding seam extraction template in the characteristic region as a welding seam characteristic point.
Optionally, in the visual positioning method according to the present invention, the weld image is acquired by projecting structured light onto the surface of the workpiece.
Optionally, in the visual positioning method according to the present invention, the step of extracting the feature region on the weld image includes: carrying out sharpening processing on the welding seam image; carrying out noise reduction and filtering processing on the sharpened image; extracting a structured light center line from the image after filtering processing, and determining a coarse positioning coordinate of the welding seam feature point in the image based on the structured light center line; and extracting a characteristic region based on the rough positioning coordinates.
Optionally, in the visual positioning method according to the present invention, the structured light centerline includes a first structured light centerline and a second structured light centerline that intersect, and the weld feature point is an intersection of the first structured light centerline and the second structured light centerline, wherein the step of determining the coarse positioning coordinates of the weld feature point includes: determining a first linear equation corresponding to the first structured light center line and a second linear equation corresponding to the second structured light center line; and calculating the rough positioning coordinates of the weld joint feature points based on the first linear equation and the second linear equation.
According to a second aspect of the present invention, there is provided a visual positioning system comprising: a structured light device adapted to generate structured light and project it onto the surface of a workpiece to be welded; an image acquisition device comprising two cameras and lenses connected to the cameras, the two cameras and lenses being symmetrically arranged on both sides of the structured light device and adapted to acquire a weld image of the workpiece surface; and a computing device connected to the cameras and adapted to acquire the weld image, determine the weld feature points and their image coordinates on the weld image based on a weld extraction template, and determine the world coordinates of the weld feature points based on the image coordinates, so as to weld the workpiece based on the world coordinates of the weld feature points.
Optionally, in the visual positioning system according to the present invention, the computing device is adapted to be connected to a robot, the robot includes a robot body and a welding gun installed at a distal end of the robot body, and the computing device is adapted to transmit the world coordinates of the weld feature point to the robot, so that the robot controls the welding gun to perform welding based on the world coordinates of the weld feature point.
According to a third aspect of the present invention, there is provided a visual positioning method, executed in the visual positioning system as described above, comprising the steps of: projecting structured light onto a surface of a workpiece to be welded; collecting a weld image of the surface of a workpiece; determining weld characteristic points and image coordinates thereof on a weld image based on a weld extraction template; and determining the world coordinates of the weld joint characteristic points based on the image coordinates so as to weld the workpiece based on the world coordinates of the weld joint characteristic points.
Optionally, in the visual positioning method according to the present invention, the step of determining world coordinates of the weld feature point based on the image coordinates includes: determining a first conversion function between the image coordinates and the camera coordinates; converting the image coordinates of the weld characteristic points into camera coordinates according to a first conversion function; determining a second conversion function between the coordinates of the camera and the coordinates of the welding gun at the tail end of the robot body; and converting the camera coordinates into welding gun coordinates according to a second conversion function so as to weld the workpiece based on the welding gun coordinates.
According to a fourth aspect of the invention, there is provided a computing device comprising: at least one processor; and a memory storing program instructions that, when read and executed by the processor, cause the computing device to perform the visual positioning method as described above.
According to a fifth aspect of the present invention, there is provided a readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform the visual positioning method as described above.
According to the technical solution of the invention, a visual positioning system and a visual positioning method are provided. The visual positioning system can cooperate with a robot and its welding gun to realize intelligent welding of workpieces. The position of the weld can be accurately determined by the visual positioning system to guide the movement of the welding gun at the end of the robot. According to the visual positioning method, weld images are collected based on structured light and a binocular camera; by acquiring the weld image, the computing device can accurately determine the image coordinates of the weld feature points and calculate the world coordinates of the weld from the pre-established binocular calibration and hand-eye calibration relationships, so that fillet weld coordinates can be located accurately and the robot can control the welding gun to weld precisely.
Further, according to the visual positioning system, structured light is emitted onto the workpiece surface by the structured light device, and a clear, high-quality weld image can be acquired by the image acquisition device. Such high-quality weld images help simplify the algorithm, extract the weld more quickly and efficiently, determine the weld coordinates accurately, and thereby improve the precision of weld extraction and weld positioning.
Further, according to the visual positioning system of the present invention, by attaching an optical filter to each lens, the filter can block the welding arc during workpiece welding while avoiding filtering out the structured light. A clear, high-quality weld image can therefore be acquired in real time without interference with weld extraction.
In addition, according to the visual positioning system disclosed by the invention, the incident angle of the structured light can be adjusted, and the shooting angle of the camera can be adjusted, so that the visual positioning system can adapt to various different welding scenes and welding objects, and is wider in application range.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 shows a schematic block diagram of a visual positioning system 100 according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a portion of an image capture device 120 according to an embodiment of the present invention;
FIG. 3 shows a schematic structural diagram of a structured light device 130 according to one embodiment of the present invention;
FIG. 4 shows a schematic diagram of a visual positioning method 400 (performed in a visual positioning system) according to one embodiment of the present invention;
FIGS. 5a-5d illustrate schematic views of a right-angle weld, an obtuse-angle weld, an acute-angle weld, and a flat lap weld, respectively, in accordance with an embodiment of the present invention;
FIGS. 6a-6d illustrate weld images corresponding to a right-angle weld, an obtuse-angle weld, an acute-angle weld, and a flat lap weld, respectively, in accordance with an embodiment of the present invention;
FIG. 7 shows a schematic diagram of a computing device 700, according to one embodiment of the invention;
FIG. 8 shows a flow diagram of a visual positioning method 800 (executed in a computing device) according to one embodiment of the invention;
FIG. 9 illustrates a schematic view of a right-angle weld image and its corresponding weld extraction template in accordance with one embodiment of the present invention; and
FIGS. 10a to 10d respectively show an original weld image, a feature region image, an accurate positioning image of weld feature points, and a matching result image of the feature region and a weld extraction template according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As mentioned above, prior art visual positioning solutions for intelligent or automatic welding have certain defects, so the present invention provides a visual positioning system 100 with better performance. The visual positioning system 100 can be applied to intelligent welding equipment, automatic welding equipment and the like, and is adapted to acquire a weld image of the workpiece surface by a vision-based method and accurately locate the weld coordinates, so that welding can be performed based on accurate weld coordinates. It should be noted that the present invention does not limit the specific type and configuration of the workpiece, which may be, for example but not limited to, a structural stud member.
FIG. 1 shows a schematic diagram of a visual positioning system 100 according to an embodiment of the invention.
As shown in FIG. 1, visual positioning system 100 includes a support 110, an image capture device 120 mounted on support 110, and a structured light apparatus 130. The structured light device 130 is used to generate structured light that is projected onto a surface of a workpiece to be welded, the surface of the workpiece having a weld. Here, the weld of the workpiece surface may be a fillet weld having a predetermined included angle, but the present invention is not limited thereto. In addition, the invention does not limit the type and structure of the structured light device.
The image capturing device 120 includes two sets of image capturing components arranged in bilateral symmetry. Specifically, the image capturing device 120 includes left and right cameras 121, and lenses 122 connected to the two cameras 121, respectively. The two cameras 121 and lenses 122 are symmetrically arranged on the left and right sides of the structured light device 130, and weld images of the workpiece surface are collected through the cameras 121 and lenses 122. In one embodiment, the camera 121 is, for example, an industrial CCD camera, and the lens 122 is, for example, an industrial fixed-focus lens, but the invention is not limited thereto.
It should be noted that when the structured light device 130 emits structured light onto the workpiece surface, the structured light intersects the workpiece surface along the weld contour, so the surface image collected by the image collecting device 120 clearly displays the weld contour based on the structured light, yielding the weld image. Thus, the system 100 according to the present invention makes it easier to extract the weld contour and determine the precise coordinates of the weld.
In one embodiment, the structured light generated by the structured light device 130 may be line structured light, which has the advantage of improving the efficiency and accuracy of weld extraction and weld positioning.
In addition, the visual positioning system 100 of the present invention also includes a computing device (not shown in FIG. 1). The computing device is communicatively connected to the cameras 121 of the image capture apparatus 120. After acquiring a weld image of the workpiece surface, the camera 121 transmits it to the computing device, which processes the weld image and determines the weld coordinates using an image processing algorithm. The present invention does not limit the specific algorithms employed by the computing device.
It should be noted that the weld on the surface of the workpiece to be welded according to the present invention may be any of various types of fillet welds, such as a right-angle weld, an obtuse-angle weld, an acute-angle weld, a flat lap weld, and the like, but is not limited thereto. Two structured-light stripes appear on the weld image corresponding to a fillet weld, and the intersection of the center lines of the two stripes is the weld feature point (the weld inflection point). The weld coordinates determined by the computing device from the weld image specifically include the coordinates of the weld feature points.
According to one embodiment, system 100 may be used in conjunction with a welding robot (not shown) to achieve intelligent welding of a workpiece. Specifically, the computing device is communicatively coupled with the robot. After acquiring the weld image acquired by the camera 121 and processing the weld image, the computing device may determine the weld feature points on the weld image based on the weld extraction template, determine the image coordinates of the weld feature points, and determine the world coordinates of the weld feature points based on the image coordinates. Further, the computing device transmits the world coordinates of the weld feature points to the robot so that the robot welds the workpiece based on the world coordinates of the weld feature points.
Further, the robot comprises a robot body and a welding gun installed at the tail end of the robot body. And after the robot acquires the world coordinates of the welding seam feature points, the welding gun is controlled to accurately weld the workpiece based on the world coordinates of the welding seam feature points.
In one embodiment, in the visual positioning system 100 of the present invention, one end of the bracket 110 is provided with a flange 170 for connecting to a robot, and the image capturing device 120 and the structured light device 130 in the system 100 are mounted on the robot body through the bracket 110 by connecting the flange 170 at one end of the bracket 110 to the robot.
In this way, the robot can continuously collect weld images by carrying the image collecting device 120 and the structured light device 130 on the support 110 as it moves; the computing device determines the weld coordinates (the world coordinates of the weld feature points) from each acquired weld image and sends them to the robot, which moves the welding gun according to those coordinates to weld the workpiece accurately. That is, during the welding of a workpiece, the visual positioning system 100 of the present invention works ahead of the welding gun: it locates the weld, and once the weld coordinates (the world coordinates of the weld feature points) are determined, it guides the welding gun at the end of the robot, so that the visual positioning system 100 cooperates with the robot and welding gun to complete continuous welding of the workpiece.
Fig. 2 shows a schematic partial structure diagram of the image capturing apparatus 120 according to an embodiment of the present invention. As shown in fig. 2, the image capturing device 120 further includes a filter 123 connected to each lens 122, the filter 123 being installed at the end of the lens 122 away from the camera 121. It should be noted that during workpiece welding the arc is unstable, and welding spatter degrades the appearance of the weld contour in the image, which greatly interferes with weld extraction. In the present invention, the optical filter 123 is attached to the lens 122 and filters out arc light during welding, avoiding interference with the weld image and weld extraction.
FIG. 3 shows a schematic structural diagram of the structured light device 130 according to one embodiment of the present invention. As shown in fig. 3, the structured light device 130 includes a laser 131 and an optical lens 132 mounted at one end of the laser 131. The optical lens 132 includes a plurality of grating sheets; the laser 131 generates and emits laser light, which forms structured light after passing through the optical lens 132. Specifically, passing through the grating sheets changes the angle, brightness and thickness of the laser beam, and the resulting structured light is parallel stripe light.
It should be noted that the present invention does not limit the specific number of grating sheets included in the optical lens 132. In one embodiment, the optical lens 132 includes 3 to 5 grating sheets.
In the image capturing device 120 of the present invention, the filter 123 mounted on the lens 122 not only filters out the welding arc during workpiece welding but also minimizes the loss of structured light passing through the filter 123, so that a clear, high-quality weld image can be captured. Specifically, the passband of the filter 123 should avoid the wavelength range of the welding arc, so that the arc light is blocked and filtered out to the maximum extent, and should overlap the wavelength range of the structured light as much as possible, so that the structured light passes through the filter 123 with minimal loss.
According to one embodiment, the laser 131 is adapted to emit red laser light with a wavelength of 630 nm, and the filter 123 is a red filter adapted to pass red laser light of that wavelength. In other words, the filter 123 attenuates the 630 nm red laser light minimally, minimizing structured-light loss. At the same time, 630 nm avoids the wavelength range of the welding arc to the maximum extent, so the arc is filtered out to the maximum extent and its interference with the weld image and weld extraction is reduced as much as possible.
It should be further noted that, as shown in fig. 1, the left and right cameras 121 in the image capturing device 120 are installed at a predetermined included angle, that is, the axes of the two cameras 121 form a predetermined included angle, so that the two cameras mimic human binocular vision and can capture three-dimensional information of the workpiece weld based on the binocular calibration method. The symmetry axes of the two cameras 121 lie in the same plane as the axis of the laser 131. The present invention does not limit the angle between the two cameras; it may be, for example, 30°.
According to one embodiment, as shown in fig. 1 and 3, the structured light device 130 further comprises a first mounting plate 136 and a clasping clamp 135, by which the structured light device 130 can be mounted on the bracket 110. Specifically, the first mounting plate 136 is fixedly connected to the bracket 110 and is provided with an arc-shaped groove 137. The inner side of the clasping clamp 135 is an arc-shaped structure matching the shape of the laser 131, so that the clamp can grip the outer wall of the laser 131 stably; the clasping clamp 135 is connected to the arc-shaped groove 137 on the first mounting plate 136, fixing the laser 131 to the support via the first mounting plate. It should be noted that, based on the arc-shaped groove 137 formed in the first mounting plate 136, the mounting position of the laser 131 can be adjusted by changing where the clasping clamp 135 connects to the groove, thereby adjusting the incident angle at which the structured light device 130 projects structured light onto the workpiece surface. This accommodates a variety of welding scenarios and welding objects, giving the system 100 of the present invention a wider range of applications.
In one embodiment, as shown in fig. 1, the bracket 110 includes two side plates 115, and the side plates 115 are provided with a first transverse waist hole 116. The first mounting plate 136 is coupled to the first waist aperture 116 of the side plate 115.
According to one embodiment, as shown in FIG. 1, each camera 121 is connected to the bracket 110 by a second mounting plate 126. The second mounting plate 126 is provided with a longitudinal second waist hole, to which the camera 121 is connected. Based on the second waist hole in the second mounting plate 126, the shooting angle of the camera 121 can be adjusted by changing the connection position of the camera 121 in the second waist hole.
According to the above embodiment, the system 100 of the present invention can adjust the incident angle of the structured light and the shooting angle of the camera, so as to adapt to various different welding scenes and welding objects, and the application range is wider.
According to the visual positioning system described above, intelligent welding of workpieces can be realized in cooperation with a robot and its welding gun. The visual positioning system can accurately determine the weld position to guide the movement of the welding gun at the end of the robot. Further, structured light is emitted onto the workpiece surface by the structured light device, and a clear, high-quality weld image can be acquired by the image acquisition device. Such high-quality weld images help simplify the algorithm, extract the weld contour more quickly and efficiently, determine the weld coordinates accurately, and improve the precision of weld extraction and weld positioning.
FIG. 4 shows a schematic diagram of a visual localization method 400 according to one embodiment of the present invention. The method 400 is performed in the visual positioning system 100 as described above, and enables the acquisition of weld images and the determination of precise coordinates of welds from the weld images.
As shown in fig. 4, the method 400 begins at step S410. In step S410, structured light is projected to a surface of a workpiece to be welded. Step S410 is performed by the structured light device 130 in the system 100.
Subsequently, in step S420, a weld image of the workpiece surface is acquired. Step S420 is performed by the image capture device 120 in the system 100. The camera 121 in the image capturing device 120 may transmit the captured weld image to the computing device connected to it.
Subsequently, in step S430, the weld feature points on the weld image and the image coordinates of the weld feature points are determined based on the weld extraction template. Step S430 is performed by a computing device in system 100.
It should be noted that by projecting structured light onto the surface of the workpiece to be welded before collecting the weld image, the structured light appears on the weld image where it crosses the weld. The weld of the workpiece surface according to the present invention may be any of various types of fillet welds such as, but not limited to, a right-angle weld, an obtuse-angle weld, an acute-angle weld, or a flat lap weld. Here, two structured-light stripes appear on the weld image corresponding to a fillet weld, and the weld feature point is the intersection of the center lines of the two stripes on the weld image.
FIGS. 5a to 5d are schematic views respectively showing a right-angle weld, an obtuse-angle weld, an acute-angle weld, and a flat lap weld according to an embodiment of the present invention. FIGS. 6a to 6d show weld images corresponding to a right-angle weld, an obtuse-angle weld, an acute-angle weld, and a flat lap weld, respectively, according to an embodiment of the present invention.
According to one embodiment, the computing device obtains the weld included angle of the workpiece surface and generates the corresponding weld extraction template according to the template generation method; the weld feature points on the weld image and their image coordinates are then determined from the generated template. Finally, in step S440, the world coordinates of the weld feature points are determined based on the image coordinates, so that the workpiece can be welded based on those world coordinates. Step S440 is performed by the computing device in system 100. The computing device sends the world coordinates of the weld feature points to the robot, which controls the welding gun to move accordingly and weld the workpiece.
According to one embodiment, the step of determining world coordinates of the weld feature points based on the image coordinates comprises: determining a first conversion function between the image coordinates and the camera coordinates; converting the image coordinates of the weld characteristic points into camera coordinates according to a first conversion function; determining a second conversion function between the coordinates of the camera and the coordinates of the welding gun at the tail end of the robot body; and according to the second conversion function, converting the camera coordinates into welding gun coordinates so as to weld the workpiece based on the welding gun coordinates. Here, the specific method of coordinate transformation according to the first transformation function and the second transformation function will be described in detail in the following in the visual positioning method 800 executed by the computing device.
It should also be noted that the specific execution logic of steps S430 and S440 performed by the computing device in method 400 is described in detail in the following visual positioning method 800.
FIG. 7 shows a schematic diagram of a computing device 700, according to one embodiment of the invention.
It should be noted that the computing device 700 shown in fig. 7 is only an example; in practice, the computing device for implementing the visual positioning method of the present invention may be any type of device, and its hardware configuration may be the same as or different from that of the computing device 700 shown in fig. 7. In practice, hardware components may be added to or removed from the configuration of the computing device 700 shown in fig. 7, and the present invention does not limit the specific hardware configuration of the computing device.
As shown in fig. 7, in a basic configuration 702, a computing device 700 typically includes a system memory 706 and one or more processors 704. A memory bus 708 may be used for communicating between the processor 704 and the system memory 706.
Depending on the desired configuration, the processor 704 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor 704 may include one or more levels of cache, such as a level-one cache 710 and a level-two cache 712, a processor core 714, and registers 716. The example processor core 714 may include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An example memory controller 718 may be used with the processor 704, or in some implementations the memory controller 718 may be an internal part of the processor 704.
Depending on the desired configuration, the system memory 706 may be any type of memory including, but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. The system memory 706 may include an operating system 720, one or more programs 722, and program data 724. In some implementations, the program 722 can be arranged to execute instructions on the operating system by the one or more processors 704 using the program data 724.
The computing device 700 may also include an interface bus 740 that facilitates communication from various interface devices (e.g., output devices 742, peripheral interfaces 744, and communication devices 746) to the basic configuration 702 via the bus/interface controller 730. The example output devices 742 include a graphics processing unit 748 and an audio processing unit 750. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more a/V ports 752. Example peripheral interfaces 744 can include a serial interface controller 754 and a parallel interface controller 756, which can be configured to facilitate communications with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 758. An example communication device 746 may include a network controller 760, which may be arranged to facilitate communications with one or more other computing devices 762 over a network communication link via one or more communication ports 764.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures, or program modules in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media. A "modulated data signal" is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or dedicated wired connection, and various wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR), or other wireless media. The term computer readable media as used herein may include both storage media and communication media.
In a computing device 700 according to the present invention, the program 722 includes instructions for performing the visual positioning method 800, which may instruct the processor 704 to perform the visual positioning method 800 of the present invention, so that the computing device determines the precise location of the weld by performing the method 800.
FIG. 8 shows a flow diagram of a visual positioning method 800 according to one embodiment of the invention. Method 800 is performed in a computing device, such as computing device 700 described above. As shown in fig. 8, the method 800 begins at step S810.
In step S810, a weld image of the workpiece surface is acquired. Here, the computing device obtains the weld image captured by the image acquisition device 120. As described above, structured light is projected onto the surface of the workpiece to be welded, the image capturing device 120 then captures the weld image of the surface, and the camera 121 transmits the weld image to the computing device.
Subsequently, in step S820, a weld extraction template is generated, and the weld feature points on the weld image and their image coordinates are determined based on the weld extraction template.
As previously mentioned, the weld of the present invention is a fillet weld, such as, but not limited to, a right-angle weld, an obtuse-angle weld, an acute-angle weld, or a flat lap weld. Because the weld image is acquired after projecting structured light onto the surface of the workpiece to be welded, two structured-light stripes appear on the image of a fillet weld, and the intersection of their center lines on the weld image is the weld feature point (corresponding to the weld inflection point).
It should be noted that different included weld angles correspond to different weld extraction templates. FIG. 9 illustrates a right-angle weld image and its corresponding weld extraction template image, according to one embodiment of the present invention.
In one embodiment, when initially building a weld extraction template, the weld included angle θ_w of the workpiece surface is first measured; then the functional relationship θ_i = f(θ_w) between the weld included angle θ_w and the structured-light bending angle θ_i in the weld image is calculated; finally, the corresponding weld extraction template M(x, y) is established according to the structured-light bending angle θ_i.
Preferably, once the functional relationship θ_i = f(θ_w) between the actual included angle θ_w and the structured-light bending angle θ_i has been determined, the weld extraction template generation method can be written into a general program, so that the computing device subsequently only needs to acquire the weld included angle θ_w to generate the corresponding weld extraction template according to the template generation method.
That is, in step S820, the computing device obtains the weld included angle of the workpiece surface and generates the corresponding weld extraction template according to the template generation method; the weld feature points on the weld image and their image coordinates are then determined from the generated template, as illustrated in the sketch below.
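As an illustration of the template-generation step, the following Python sketch builds a simple binary V-shaped template from a measured weld included angle. It is a minimal sketch under stated assumptions: the mapping f (here an identity stand-in) and the V-shaped template geometry are hypothetical, since the patent discloses neither the concrete form of θ_i = f(θ_w) nor of M(x, y).

```python
import numpy as np

def generate_weld_template(theta_w_deg, f, size=61, thickness=3):
    """Build a binary V-shaped weld extraction template M(x, y).

    theta_w_deg: measured weld included angle theta_w, in degrees.
    f: calibrated mapping theta_i = f(theta_w) to the structured-light
       bending angle seen in the image (hypothetical placeholder).
    """
    theta_i = np.deg2rad(f(theta_w_deg))
    template = np.zeros((size, size), dtype=np.uint8)
    cy, cx = size - 1, size // 2        # vertex of the V at bottom center
    half_angle = theta_i / 2.0
    for row in range(size):
        d = cy - row                    # height above the vertex
        for side in (-1, 1):            # left and right template arms
            c = int(round(cx + side * d * np.tan(half_angle)))
            lo = max(0, c - thickness // 2)
            hi = min(size, c + thickness // 2 + 1)
            template[row, lo:hi] = 1
    return template

# Example: identity mapping as a stand-in for the calibrated f
M = generate_weld_template(90.0, f=lambda tw: tw)
```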
Finally, in step S830, the world coordinates of the weld feature points are determined based on the image coordinates, so that the workpiece is welded based on the world coordinates of the weld feature points. Here, the computing device transmits the world coordinates of the weld feature points to the robot, and the robot can control the welding gun to move according to the world coordinates of the weld feature points, so that the workpiece is welded.
According to one embodiment, a conversion function between world coordinates (three-dimensional) and image coordinates (two-dimensional) is established in advance in the computing device, so that image coordinates can be converted into world coordinates based on the conversion function. It should be noted that three-dimensional coordinates in the world coordinate system include the camera coordinates and the welding-gun coordinates at the end of the robot body, and a conversion function is also established between the camera coordinates and the welding-gun coordinates. Here, the relationship between camera coordinates and image coordinates may be expressed by a first conversion function, and the relationship between camera coordinates and welding-gun coordinates by a second conversion function.
The world coordinates determined in step S830 may be welding torch coordinates in a world coordinate system. The robot controls the welding gun to weld the workpiece by acquiring welding gun coordinates of the welding seam feature points from the computing device and based on the welding gun coordinates.
Specifically, the method for determining the world coordinates of the weld feature points based on the image coordinates may be performed according to the following steps:
first, a first conversion function between camera coordinates and image coordinates is determined.
Subsequently, the image coordinates of the weld feature points are converted into camera coordinates according to a first conversion function.
It should be noted that the image capturing device 120 of the present invention, by providing left and right cameras 121 at a predetermined included angle, can simulate human binocular vision and extract three-dimensional information of a spatial object. Before performing the method 800 of the present invention, the computing device needs to determine the rotation matrix R_L and translation matrix T_L of the left camera, the rotation matrix R_R and translation matrix T_R of the right camera, and the rotation matrix R and translation matrix T between the left and right cameras, thereby establishing the first conversion function between camera coordinates and image coordinates, that is, the binocular calibration relationship.
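For illustration only, the binocular calibration relationship (the first conversion function) could be established and applied with OpenCV roughly as follows. This is a sketch under assumptions: the calibration-board point lists and initial intrinsics are placeholder inputs, and the patent does not prescribe OpenCV or any particular solver.

```python
import cv2
import numpy as np

def build_first_conversion(obj_points, img_pts_left, img_pts_right,
                           K_L, d_L, K_R, d_R, image_size):
    """Binocular calibration: returns a function that maps matched image
    coordinates in the left/right views to 3D camera coordinates
    (expressed in the left-camera frame)."""
    # R, T: rotation and translation between the left and right cameras
    _, K_L, d_L, K_R, d_R, R, T, _, _ = cv2.stereoCalibrate(
        obj_points, img_pts_left, img_pts_right,
        K_L, d_L, K_R, d_R, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    P_L = K_L @ np.hstack([np.eye(3), np.zeros((3, 1))])   # left projection
    P_R = K_R @ np.hstack([R, T])                          # right projection

    def image_to_camera(pt_left, pt_right):
        """First conversion function: image coordinates -> camera coordinates."""
        p4 = cv2.triangulatePoints(
            P_L, P_R,
            np.asarray(pt_left, np.float32).reshape(2, 1),
            np.asarray(pt_right, np.float32).reshape(2, 1))
        return (p4[:3] / p4[3]).ravel()    # homogeneous -> Euclidean

    return image_to_camera
```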
Further, a second conversion function between the camera coordinates and the welding-gun coordinates at the end of the robot body is determined; a rotation matrix and a translation matrix between the camera and the welding gun can be determined from the second conversion function. It is noted that, before performing the method 800 of the present invention, the computing device determines the rotation matrix R_H and translation matrix T_H between the camera (e.g., the left camera) and the welding gun to establish the second conversion function. Specifically, the hand-eye calibration equation AX = XB between the camera and the welding gun is established, and a two-step method is used to solve the rotation and translation relationships between camera coordinates and welding-gun coordinates, obtaining the second conversion function, that is, the hand-eye calibration relationship.
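A corresponding sketch of the hand-eye step: OpenCV's calibrateHandEye solves AX = XB, and its Tsai method is a two-step solver (rotation first, then translation), which matches the two-step approach described above. The robot and target pose lists are assumed inputs, not values from the patent.

```python
import cv2
import numpy as np

def build_second_conversion(R_gripper2base, t_gripper2base,
                            R_target2cam, t_target2cam):
    """Hand-eye calibration AX = XB: returns (R_H, T_H), the rotation and
    translation from the camera frame to the welding-gun (end-effector)
    frame, solved with Tsai's two-step method."""
    R_H, T_H = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI)
    return R_H, T_H

def camera_to_torch(p_cam, R_H, T_H):
    """Second conversion function: camera coordinates -> welding-gun coordinates."""
    return (R_H @ np.asarray(p_cam, dtype=float).reshape(3, 1) + T_H).ravel()
```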
And finally, converting the camera coordinates into welding gun coordinates according to a second conversion function, so that the robot acquires the welding gun coordinates of the welding seam characteristic points from the computing equipment and controls a welding gun to weld the workpiece based on the welding gun coordinates.
Fig. 10a to 10d respectively show an original weld image, a feature region image, an accurate positioning image of weld feature points, and a matching result image of the feature region and a weld extraction template according to an embodiment of the present invention.
According to an embodiment of the present invention, as shown in figs. 10a to 10d, when determining a weld feature point on a weld image based on the weld extraction template, a feature region containing the weld features is first extracted from the weld image; the feature region is then matched against the weld extraction template by translating the template over the feature region image, and the feature point with the highest coincidence degree with the template is taken as the precisely located weld feature point. Determining the image coordinates of that point yields the precise coordinates of the weld feature point. It should be noted that matching against the template within an extracted feature region improves both the efficiency and the accuracy of weld feature point extraction.
According to one embodiment, the weld extraction template is denoted MB(m, n) and the feature region image is denoted F_m(w, h). When the weld extraction template MB(m, n) searches over F_m(w, h) with the coordinate of its upper-left corner at (i, j), the search range is 1 ≤ i ≤ w - m, 1 ≤ j ≤ h - n. The coincidence degree CD(i, j) between the feature region and the weld extraction template is computed at each search position (the defining formula is given in the original only as patent image BDA0002961637430000141). The feature point with the highest coincidence degree can be expressed as (X_c, Y_c) = find(CD(i, j) = max(CD(i, j))).
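Since the coincidence-degree formula survives only as a patent image, the following sketch uses a simple stand-in measure, the count of overlapping "on" pixels between template and window; the sliding-window search and argmax selection follow the text above.

```python
import numpy as np

def match_template(feature_img, template):
    """Slide the weld extraction template MB(m, n) over the binary feature
    region image F(w, h); return the upper-left corner (X_c, Y_c) with the
    highest coincidence degree CD(i, j), plus that degree."""
    h, w = feature_img.shape
    n, m = template.shape              # template: n rows x m columns
    best, best_pos = -1.0, (0, 0)
    for j in range(h - n + 1):         # upper-left corner (i, j) search range
        for i in range(w - m + 1):
            window = feature_img[j:j + n, i:i + m]
            cd = float(np.sum(window * template))    # stand-in CD(i, j)
            if cd > best:
                best, best_pos = cd, (i, j)
    return best_pos, best
```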
According to one embodiment, when extracting the feature region on the weld image, the weld image is first sharpened, and the sharpened image is then subjected to noise reduction and filtering. Sharpening the weld image highlights the intersection of the two structured-light stripes, suppresses interference from ambient light, workpiece color, stains, rust and other noise, and improves the accuracy and precision of locating the weld from the image. In a particular embodiment, the camera may be a color industrial camera and the structured light may be green. The acquired weld image is denoted H(x, y); its red component is denoted H_R(x, y), its green component H_G(x, y), and its blue component H_B(x, y). The sharpened weld image may then be expressed as H_S(x, y) = H_G(x, y) - H_B(x, y) + H_G(x, y) - H_R(x, y).
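The channel-difference sharpening can be written directly from the formula above. A minimal sketch, assuming an 8-bit BGR image as delivered by OpenCV; the saturating arithmetic (negative differences clip to zero instead of wrapping) is an implementation choice, not something the patent specifies.

```python
import cv2

def sharpen_weld_image(bgr):
    """Compute H_S = H_G - H_B + H_G - H_R for green structured light,
    using saturating uint8 arithmetic."""
    b, g, r = cv2.split(bgr)
    return cv2.add(cv2.subtract(g, b), cv2.subtract(g, r))
```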
Further, two structured-light center lines are extracted from the filtered image, and the coarse positioning coordinates of the weld feature point are determined from the line equations of these center lines. Here, the two center lines on the weld image are the first structured-light center line and the second structured-light center line, and their intersection is the weld feature point. By determining the first linear equation corresponding to the first center line and the second linear equation corresponding to the second center line, the intersection of the two lines can be computed from the two equations; this intersection is the coarse positioning coordinate of the weld feature point. According to one embodiment, the Hough transform is used to extract the line whose slope k satisfies k_1 - ε_1 < k < k_1 + ε_1 and compute its equation, i.e., the first linear equation, where k_1 is the slope of the first structured-light center line and ε_1 is the deviation range of k_1. Correspondingly, the Hough transform is used to extract the line whose slope satisfies k_2 - ε_2 < k < k_2 + ε_2 and compute its equation, i.e., the second linear equation, where k_2 is the slope of the second structured-light center line and ε_2 is the deviation range of k_2.
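A sketch of this coarse-positioning step with OpenCV's Hough transform: lines are converted from (ρ, θ) form to slope-intercept form, the two lines falling in the slope bands (k_1 ± ε_1) and (k_2 ± ε_2) are kept, and their intersection is solved analytically. The Hough threshold is an arbitrary assumption.

```python
import cv2
import numpy as np

def coarse_locate(binary_img, k1, eps1, k2, eps2):
    """Return the coarse positioning coordinates (X_0, Y_0): the
    intersection of the two structured-light center lines."""
    lines = cv2.HoughLines(binary_img, 1, np.pi / 180, 80)
    if lines is None:
        raise ValueError("no lines detected")
    picked = {}
    for rho, theta in lines[:, 0]:
        if abs(np.sin(theta)) < 1e-6:        # skip vertical lines
            continue
        k = -np.cos(theta) / np.sin(theta)   # slope from (rho, theta)
        b = rho / np.sin(theta)              # intercept
        if k1 - eps1 < k < k1 + eps1 and 1 not in picked:
            picked[1] = (k, b)               # first linear equation
        elif k2 - eps2 < k < k2 + eps2 and 2 not in picked:
            picked[2] = (k, b)               # second linear equation
    if len(picked) < 2:
        raise ValueError("both center lines not found")
    (ka, ba), (kb, bb) = picked[1], picked[2]
    x0 = (bb - ba) / (ka - kb)               # solve ka*x + ba = kb*x + bb
    return x0, ka * x0 + ba
```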
Finally, the feature region is extracted according to the coarse positioning coordinates of the weld feature point. Further, the feature region is cropped from the filtered weld image based on both the coarse positioning coordinates of the weld feature point in the current image and the precise image coordinates of the weld feature point in the previous weld image. For example, the feature region is cropped with its center determined jointly from the coarse positioning coordinates (X_0, Y_0) of the current weld image and the precise image coordinates (X_{C-1}, Y_{C-1}) from the previous weld image.
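A sketch of this cropping step. The patent only states that the current coarse coordinates and the previous precise coordinates are used jointly; the equal-weight average and window size below are assumptions made for illustration.

```python
import numpy as np

def crop_feature_region(filtered_img, coarse_xy, prev_xy, half=50, alpha=0.5):
    """Crop the feature region around a center fused from the coarse
    coordinates (X_0, Y_0) and the previous precise coordinates
    (X_{C-1}, Y_{C-1})."""
    cx = int(alpha * coarse_xy[0] + (1 - alpha) * prev_xy[0])
    cy = int(alpha * coarse_xy[1] + (1 - alpha) * prev_xy[1])
    h, w = filtered_img.shape[:2]
    x0, x1 = max(0, cx - half), min(w, cx + half)
    y0, y1 = max(0, cy - half), min(h, cy + half)
    return filtered_img[y0:y1, x0:x1], (x0, y0)   # crop and its origin
```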
According to the visual positioning method described above, weld images are collected based on structured light and a binocular camera; from these images the computing device can accurately determine the image coordinates of the weld feature points, and the world coordinates of the weld are then calculated from the pre-established binocular calibration and hand-eye calibration relationships (the first and second conversion functions), so that the fillet weld is positioned accurately.
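To show how these steps chain together, here is a hypothetical end-to-end driver for one positioning cycle. It reuses the illustrative helpers sketched above (none of these names come from the patent); the blur kernel, Canny thresholds, and the omission of the template-vertex offset inside the matched window are simplifying assumptions.

```python
import cv2

def locate_weld_point(bgr_left, bgr_right, template, image_to_camera,
                      R_H, T_H, k_params, prev_xy):
    """One positioning cycle: stereo weld-image pair -> welding-gun
    coordinates of the weld feature point.
    k_params holds the slope bands (k1, eps1, k2, eps2)."""
    pts = []
    for bgr in (bgr_left, bgr_right):
        hs = sharpen_weld_image(bgr)                   # channel sharpening
        den = cv2.medianBlur(hs, 5)                    # noise reduction
        edges = cv2.Canny(den, 50, 150)                # input for the Hough step
        coarse = coarse_locate(edges, *k_params)       # coarse (X_0, Y_0)
        region, (ox, oy) = crop_feature_region(den, coarse, prev_xy)
        region_bin = (region > 0).astype('uint8')      # binarize for matching
        (xi, yj), _ = match_template(region_bin, template)
        pts.append((ox + xi, oy + yj))                 # back to full-image coords
    p_cam = image_to_camera(pts[0], pts[1])            # binocular triangulation
    return camera_to_torch(p_cam, R_H, T_H)            # hand-eye transform
```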
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media such as removable hard drives, USB flash disks, floppy disks, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The memory is configured to store the program code, and the processor is configured to execute the visual positioning method of the present invention according to the instructions in the program code stored in the memory.
By way of example, and not limitation, readable media may comprise readable storage media and communication media. Readable storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. Combinations of any of the above are also included within the scope of readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with examples of this invention. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of carrying out the described functions. A processor having the necessary instructions for carrying out such a method or method element thus forms a means for carrying out the method or method element. Furthermore, an element of an apparatus embodiment described herein is an example of a means for carrying out the function performed by that element for the purpose of carrying out the invention.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
In the description of the present specification, the terms "connected", "fixed", and the like are to be construed broadly unless otherwise explicitly specified or limited. Furthermore, the terms "upper", "lower", "inner", "outer", "top", "bottom", and the like indicate orientations or positional relationships based on those shown in the drawings, are used only for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the referenced device or unit must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present invention.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.

Claims (10)

1. A visual positioning method, executed in a computing device, comprising the steps of:
acquiring a weld image of the surface of a workpiece;
determining weld feature points and image coordinates thereof on the weld image based on a weld extraction template;
and determining world coordinates of the weld feature points based on the image coordinates, so as to weld the workpiece based on the world coordinates of the weld feature points.
2. The method of claim 1, wherein determining the world coordinates of the weld feature points based on the image coordinates comprises:
determining a first conversion function between image coordinates and camera coordinates;
converting the image coordinates of the weld feature points into camera coordinates according to the first conversion function;
determining a second conversion function between the camera coordinates and the coordinates of the welding gun at the end of the robot body;
and converting the camera coordinates into welding gun coordinates according to the second conversion function, so as to weld the workpiece based on the welding gun coordinates.
3. The method according to claim 1 or 2, wherein the weld is a fillet weld, the method further comprising, before determining the weld feature points on the weld image based on the weld extraction template, the steps of:
acquiring the weld included angle on the workpiece surface;
and generating a weld extraction template corresponding to the weld included angle.
4. The method of any of claims 1-3, wherein determining the weld feature points on the weld image based on the weld extraction template comprises:
extracting a feature region on the weld image;
and matching the feature region against the weld extraction template, and determining the point in the feature region with the highest degree of coincidence with the weld extraction template as the weld feature point.
5. The method of any of claims 1-4, wherein the weld image is acquired after projecting structured light onto the workpiece surface.
6. The method of claim 4 or 5, wherein the step of extracting the feature region on the weld image comprises:
sharpening the weld image;
denoising and filtering the sharpened image;
extracting a structured light centerline from the filtered image, and determining coarse positioning coordinates of the weld feature point in the image based on the structured light centerline;
and extracting the feature region based on the coarse positioning coordinates.
7. A visual positioning system, comprising:
a structured light device adapted to generate structured light and project it onto the surface of a workpiece to be welded;
an image acquisition device comprising two cameras and lenses connected to the cameras, the two cameras and lenses being symmetrically arranged on either side of the structured light device and adapted to acquire a weld image of the workpiece surface;
and a computing device connected to the cameras and adapted to acquire the weld image, determine weld feature points and image coordinates thereof on the weld image based on a weld extraction template, and determine world coordinates of the weld feature points based on the image coordinates, so as to weld the workpiece based on the world coordinates of the weld feature points.
8. A visual positioning method, performed in the visual positioning system of claim 7, comprising the steps of:
projecting structured light onto a surface of a workpiece to be welded;
collecting a weld image of the surface of a workpiece;
determining weld feature points and image coordinates thereof on the weld image based on a weld extraction template;
and determining world coordinates of the weld feature points based on the image coordinates, so as to weld the workpiece based on the world coordinates of the weld feature points.
9. A computing device, comprising:
at least one processor; and
a memory storing program instructions;
the program instructions, when read and executed by the processor, cause the computing device to perform the method of any of claims 1-6.
10. A readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform the method of any of claims 1-6.
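Purely as an illustration of the steps recited in claims 3, 4 and 6, a prototype of the template generation, preprocessing, coarse positioning and template matching might look as follows. This is a minimal sketch: every function name, kernel size and threshold below is an assumption of the sketch, not something the application specifies, and the stereo triangulation implied by the two cameras of claim 7 is omitted.

import cv2
import numpy as np

def make_weld_template(angle_deg, size=80):
    """Claim 3 (illustrative): render a V-shaped template whose two
    arms open at the acquired weld included angle."""
    tpl = np.zeros((size, size), dtype=np.uint8)
    cx, cy = size // 2, size - 1          # apex at the bottom center
    half = np.deg2rad(angle_deg / 2.0)
    for s in (1, -1):
        end = (int(cx + s * np.sin(half) * size),
               int(cy - np.cos(half) * size))
        cv2.line(tpl, (cx, cy), end, 255, 3)
    return tpl

def locate_weld_feature_point(weld_image, template, roi_half=80):
    """Claims 4 and 6 (illustrative): sharpen, denoise, extract the
    structured-light centerline, coarse-position, then template-match.
    Assumes the cropped region stays larger than the template."""
    gray = (cv2.cvtColor(weld_image, cv2.COLOR_BGR2GRAY)
            if weld_image.ndim == 3 else weld_image)

    # Sharpening (unsharp masking) followed by noise reduction.
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    sharp = cv2.addWeighted(gray, 1.5, blur, -0.5, 0)
    filtered = cv2.medianBlur(sharp, 5)

    # Structured-light centerline: gray-level centroid of each column.
    w = filtered.astype(np.float64)
    ys = np.arange(w.shape[0], dtype=np.float64)
    col_energy = w.sum(axis=0)
    centerline_y = (w * ys[:, None]).sum(axis=0) / np.maximum(col_energy, 1e-9)

    # Coarse positioning: here simply the brightest column; in the
    # application it is the intersection of two fitted centerlines.
    cx = int(np.argmax(col_energy))
    cy = int(centerline_y[cx])
    x0, y0 = max(cx - roi_half, 0), max(cy - roi_half, 0)
    roi = filtered[y0:y0 + 2 * roi_half, x0:x0 + 2 * roi_half]

    # Feature region vs. template: the location with the highest degree
    # of coincidence (normalized correlation) gives the feature point.
    res = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    th, tw = template.shape
    return x0 + max_loc[0] + tw // 2, y0 + max_loc[1] + th // 2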
CN202110239524.6A 2021-03-04 2021-03-04 Visual positioning method and system and computing equipment Pending CN113319411A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110239524.6A CN113319411A (en) 2021-03-04 2021-03-04 Visual positioning method and system and computing equipment


Publications (1)

Publication Number Publication Date
CN113319411A (en) 2021-08-31

Family ID: 77414457

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS58157588A (en) * 1982-03-12 1983-09-19 Mitsui Eng & Shipbuild Co Ltd Detection of change in position of weld line
CN103759648A (en) * 2014-01-28 2014-04-30 华南理工大学 Complex fillet weld joint position detecting method based on laser binocular vision
CN104014905A (en) * 2014-06-06 2014-09-03 哈尔滨工业大学 Observation device and method of three-dimensional shape of molten pool in GTAW welding process
CN107907048A (en) * 2017-06-30 2018-04-13 长沙湘计海盾科技有限公司 A kind of binocular stereo vision method for three-dimensional measurement based on line-structured light scanning
CN109492688A (en) * 2018-11-05 2019-03-19 深圳步智造科技有限公司 Welding seam tracking method, device and computer readable storage medium
CN109658456A (en) * 2018-10-29 2019-04-19 中国化学工程第六建设有限公司 Tank body inside fillet laser visual vision positioning method
CN109986172A (en) * 2019-05-21 2019-07-09 广东工业大学 A kind of weld and HAZ method, equipment and system
CN110524580A (en) * 2019-09-16 2019-12-03 西安中科光电精密工程有限公司 A kind of welding robot visual component and its measurement method
CN112304951A (en) * 2019-08-01 2021-02-02 唐山英莱科技有限公司 Visual detection device and method for high-reflection welding seam through binocular single-line light path

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113894481A (en) * 2021-09-09 2022-01-07 中国科学院自动化研究所 Method and device for adjusting welding pose of complex space curve welding seam
CN113996918A (en) * 2021-11-12 2022-02-01 中国航空制造技术研究院 Double-beam laser welding T-shaped joint seam detection device and method
CN116408575B (en) * 2021-12-31 2024-06-04 广东美的白色家电技术创新中心有限公司 Method, device and system for locally scanning and eliminating workpiece reflection interference

Similar Documents

Publication Publication Date Title
CN113319411A (en) Visual positioning method and system and computing equipment
CN110539109B (en) Robot automatic welding system and method based on single-binocular vision
Chen et al. The autonomous detection and guiding of start welding position for arc welding robot
Tsai et al. Machine vision based path planning for a robotic golf club head welding system
CN109658456A (en) Tank body inside fillet laser visual vision positioning method
CN109604777A (en) Welding seam traking system and method based on laser structure light
CN113894481B (en) Welding pose adjusting method and device for complex space curve welding seam
CN111055054A (en) Welding seam identification method and device, welding robot and storage medium
CN108032011B (en) Initial point guiding device and method are stitched based on laser structure flush weld
CN113333998A (en) Automatic welding system and method based on cooperative robot
CN113798634B (en) Method, system and equipment for teaching spatial circular weld and tracking weld
Hou et al. A teaching-free welding method based on laser visual sensing system in robotic GMAW
CN110315249A (en) Space right-angle weld seams shaped zigzag line based on line laser structured light is fitted system
CN112743270B (en) Robot welding assembly method and system based on 2D/3D visual positioning
Shah et al. A review paper on vision based identification, detection and tracking of weld seams path in welding robot environment
Banafian et al. Precise seam tracking in robotic welding by an improved image processing approach
Zhou et al. Intelligent guidance programming of welding robot for 3D curved welding seam
CN108788467A (en) A kind of Intelligent Laser welding system towards aerospace structural component
Liu et al. Seam tracking system based on laser vision and CGAN for robotic multi-layer and multi-pass MAG welding
Guo et al. Progress, challenges and trends on vision sensing technologies in automatic/intelligent robotic welding: State-of-the-art review
CN115018813A (en) Method for robot to autonomously identify and accurately position welding line
CN114260625A (en) Method for welding intersecting line of circular tube, welding equipment and storage medium
MacMillan et al. Planar image-space trajectory planning algorithm for contour following in robotic machining
CN117300464A (en) Intersecting line weld detection and track optimization system and method based on structured light camera
CN107790853A (en) Ship&#39;s ladder robot broken line angle welding intelligent identification device and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210831