CN115908281A - Weld joint tracking method based on ResNeXt network - Google Patents
- Publication number
- CN115908281A (Application CN202211369451.3A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses a weld seam tracking method based on a ResNeXt network, which comprises the following steps: performing image preprocessing on sample weld image information to obtain a training set; building a learning model with a ResNeXt network and training it on the training set to obtain a weld center feature point extraction model; acquiring the corresponding weld feature point information to be identified according to the model; generating a welding work path for the robot control module according to the weld feature point information; and controlling the operation of the robot control module according to the welding work path. The invention improves accuracy and precision and can be widely applied in the technical field of image processing.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a weld joint tracking method based on a ResNeXt network.
Background
Welding is an important processing technology in manufacturing. As product structures grow more complex and production efficiency requirements rise, automatic welding is gradually replacing manual welding owing to its advantages of high weld quality, low labor intensity, and high efficiency. Accurate seam tracking is a precondition for ensuring welding quality: the welding torch tip must be kept aligned with the center of the seam throughout the welding process, otherwise the workpiece may be scrapped. Therefore, the position of the weld seam must be detected and tracked accurately and automatically.
Visual seam tracking is an important topic in automatic welding robot research: taking the welding torch tip as the control object, the real-time position of the torch is obtained by visual means, and its position is adjusted after the images are processed. However, images acquired by the vision sensor are inevitably affected by various types of noise such as arc light, smoke, dust, and spatter, which greatly reduces seam tracking precision. Therefore, more efficient image recognition techniques are needed to achieve more accurate visual seam tracking.
Disclosure of Invention
In view of this, the embodiment of the invention provides a weld joint tracking method based on the ResNeXt network, which has high accuracy and high precision.
One aspect of the embodiments of the present invention provides a weld seam tracking method based on a ResNeXt network, including:
collecting sample welding seam image information;
carrying out image preprocessing on the sample welding seam image information to obtain a training set;
according to the training set, a learning model is built through a ResNeXt network for training, and a welding seam center feature extraction point model is obtained;
acquiring corresponding weld characteristic point information to be identified according to the weld central characteristic extraction point model;
generating a welding working path of the robot control module according to the welding seam characteristic point information;
and controlling the work of the robot control module according to the welding work path.
Optionally, the acquiring sample weld image information includes:
irradiating the surface of the workpiece through a line laser to obtain the section profile information of the welding line;
collecting the section outline information of the welding seam through an industrial camera;
converting the acquired image signals into digital signals through an image acquisition card, storing the digital signals in the memory of the industrial personal computer, and determining the size and dimension information of the visual image.
Optionally, the performing image preprocessing on the sample weld image information to obtain a training set includes at least one of:
carrying out image graying processing on the collected sample weld image information;
carrying out image median filtering processing on the collected sample weld image information;
carrying out image scaling processing on the collected sample welding seam image information;
and carrying out image enhancement processing on the acquired sample weld image information.
Optionally, the performing image graying processing on the acquired sample weld image information includes:
taking the brightness of the three components of the RGB image in the collected sample weld image information as the gray values of three grayscale images to obtain grayscale images;
the image median filtering processing is carried out on the collected sample welding seam image information, and the method comprises the following steps:
placing the image template in a sample weld image for roaming;
the center of the image template is superposed with one pixel position in the sample weld image;
reading the gray values of the corresponding pixels under the image template, sorting them in ascending order, and assigning the middle value to the pixel at the center position of the image template;
the image scaling processing is carried out on the collected sample welding seam image information, and the image scaling processing comprises the following steps:
configuring a zoom ratio of an image in an x-axis direction and a zoom ratio of an image in a y-axis direction;
when the image is enlarged along an axis, computing the interpolated values by the nearest-neighbor interpolation method;
the image enhancement processing is carried out on the collected sample welding seam image information, and the image enhancement processing comprises the following steps:
enhancing the pixel points in the image through a spatial domain;
wherein the expression of the enhancement processing is:
g(x,y)=f(x,y)*h(x,y)
wherein, f (x, y) represents the original image; h (x, y) is a spatial transfer function; g (x, y) represents the processed image.
Optionally, the constructing a learning model through a resenext network according to the training set to train to obtain a weld center feature extraction point model includes:
in the process of building and training the learning model through the ResNeXt network, increasing the number of branches, and in each branch first reducing the dimension with a 1 × 1 convolution, then performing a 3 × 3 convolution, and then increasing the dimension with a 1 × 1 convolution;
the ResNeXt network structure has 32 branches, each branch corresponding to one transformation, and each transformation may contain different structures; the data input into the ResNeXt network is transformed by each branch and the results are summed; finally, the result is input into a loss function to update the coefficient of each channel, and the training is repeated 500 times.
Optionally, the expression for the dimension-increasing 1 × 1 convolution in a single branch is:

y1_{b,j} = Σ_{i=1}^{4} w_{b,i,j} · h3_{b,i}

and the results of all branches are summed by the expression:

y_j = Σ_{b=1}^{32} y1_{b,j} = Σ_{b=1}^{32} Σ_{i=1}^{4} w_{b,i,j} · h3_{b,i}

where b denotes the branch index (b = 1, 2, …, 32); i denotes the channel index in the second convolution layer (i = 1, 2, 3, 4); j denotes the channel index in the third convolution layer (j = 1, 2, …, 256); y1_{b,j} denotes the output of the j-th channel of the 1 × 1 convolutional layer in branch b; w_{b,i,j} denotes the parameter mapping the i-th input channel of the 1 × 1 convolutional layer in branch b to its j-th output channel; h3_{b,i} denotes the output of the i-th channel of the 3 × 3 convolutional layer in branch b; and y_j denotes the j-th channel of the summed output of all branches.
In another aspect, an embodiment of the present invention further provides a weld seam tracking apparatus based on a ResNeXt network, including:
the first module is used for collecting sample welding seam image information;
the second module is used for carrying out image preprocessing on the sample welding seam image information to obtain a training set;
the third module is used for building a learning model through a ResNeXt network according to the training set to train so as to obtain a welding seam center feature extraction point model;
the fourth module is used for acquiring corresponding welding seam feature point information to be identified according to the welding seam center feature extraction point model;
the fifth module is used for generating a welding working path of the robot control module according to the welding seam characteristic point information;
and the sixth module is used for controlling the work of the robot control module according to the welding work path.
Another aspect of the embodiments of the present invention further provides an electronic device, including a processor and a memory;
the memory is used for storing programs;
the processor executes the program to implement the method as described above.
Yet another aspect of the embodiments of the present invention provides a computer-readable storage medium, which stores a program, which is executed by a processor to implement the method as described above.
Yet another aspect of embodiments of the present invention provides a computer program product comprising a computer program which, when executed by a processor, implements a method as described above.
Embodiments of the present invention also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read by a processor of a computer device from a computer-readable storage medium, and the computer instructions executed by the processor cause the computer device to perform the foregoing method.
The embodiment of the invention collects the image information of the welding seam of the sample; carrying out image preprocessing on the sample welding seam image information to obtain a training set; according to the training set, a learning model is built through a ResNeXt network for training, and a welding seam center feature extraction point model is obtained; acquiring corresponding weld characteristic point information to be identified according to the weld central characteristic extraction point model; generating a welding working path of the robot control module according to the welding seam characteristic point information; and controlling the work of the robot control module according to the welding work path. The invention improves the accuracy and precision.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flowchart illustrating the overall steps provided by an embodiment of the present invention;
fig. 2 is a diagram of a ResNeXt network structure according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of and not restrictive on the broad application.
The following detailed description of the embodiments of the invention is provided in conjunction with the accompanying drawings:
as shown in fig. 1, the implementation process of the embodiment of the present invention mainly includes the following steps:
step 1, utilizing a line structured light vision sensor module for collecting welding seam image information
The automatic seam tracking system based on visual sensing mainly comprises an industrial personal computer, a welding torch, a weld seam template, an industrial camera, a laser, an image acquisition card, and the like. First, a line laser irradiates the surface of the workpiece to reflect the cross-sectional profile of the weld seam during welding; the industrial camera acquires the weld seam information reflected by the line laser; an image acquisition card converts the image signal acquired by the camera into a digital signal and stores it in the memory of the industrial personal computer; finally, an upper-computer detection program and software extract the characteristic information of the weld image.
Specifically, the embodiment of the present invention employs a line structured light vision sensor module, which is used for acquiring weld image information, and specifically includes: the line structured light vision sensor module comprises an industrial camera and a line laser, the line laser irradiates on the surface of a workpiece to reflect the section outline of a welding seam, the industrial camera collects the welding seam information reflected by the line laser, an image acquisition card is utilized to convert an image signal acquired by the camera into a digital signal to be stored in an internal memory of an industrial personal computer, and the size and the dimension of a vision image are acquired.
Step 2, preprocessing the collected welding seam image
The main purposes of image preprocessing are to eliminate irrelevant information in images, recover useful real information, enhance the detectability of relevant information, and simplify data to the maximum extent, thereby improving the reliability of feature extraction, image segmentation, matching and recognition. The pretreatment process comprises the following steps: the method comprises the steps of (1) graying an image, (2) filtering a median value of the image, (3) zooming the image, (4) enhancing the image.
Specifically, the method for performing graying processing on the acquired color image comprises the following steps:
taking the brightness of three components in the RGB image as the gray values of three gray images:
Gray1(i,j)=R(i,j)
Gray2(i,j)=G(i,j)
Gray3(i,j)=B(i,j)
where Grayk(i, j) (k = 1, 2, 3) is the gray value of the k-th converted grayscale image at (i, j).
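A minimal sketch of this component method (an illustrative helper not given in the patent, assuming the image is stored as nested lists of (R, G, B) tuples) is:

```python
def component_grayscale(rgb_image):
    """Split an RGB image into three grayscale images, one per component:
    Gray1(i,j) = R(i,j), Gray2(i,j) = G(i,j), Gray3(i,j) = B(i,j).

    rgb_image: list of rows, each row a list of (R, G, B) tuples.
    """
    gray1 = [[px[0] for px in row] for row in rgb_image]  # R component
    gray2 = [[px[1] for px in row] for row in rgb_image]  # G component
    gray3 = [[px[2] for px in row] for row in rgb_image]  # B component
    return gray1, gray2, gray3

# Example: a 1x2 RGB image.
g1, g2, g3 = component_grayscale([[(10, 20, 30), (40, 50, 60)]])
```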
Further, image median filtering can eliminate image noise while preserving image details. The specific processing method is as follows: the template roams in the image, with its center coinciding with a certain pixel position in the image; the gray values of the corresponding pixels under the template are read and sorted in ascending order, and the middle value is assigned to the pixel at the center position of the template.
f(x,y)=median[f(x-1,y-1),f(x,y-1),f(x+1,y-1),f(x-1,y),f(x,y),f(x+1,y),f(x-1,y+1),f(x,y+1),f(x+1,y+1)]
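The template-roaming procedure above can be sketched as follows (a hypothetical pure-Python implementation for a 3 × 3 template; leaving border pixels unchanged is an assumption the patent does not specify):

```python
def median_filter_3x3(img):
    """3x3 median filter: the template roams over the image; each interior
    pixel is replaced by the median of the 9 gray values under the template."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # copy; border pixels left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(
                img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            )
            out[y][x] = window[4]  # middle of the 9 sorted values
    return out

# An isolated noise spike (e.g. from arc light or spatter) is removed:
clean = median_filter_3x3([[10, 10, 10], [10, 255, 10], [10, 10, 10]])
```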
Further, the preprocessed image can be matched to the input requirements of the ResNeXt network by scaling the median-filtered image. The specific processing method is as follows:
Assume a scaling ratio S_x in the x-axis direction of the image and a scaling ratio S_y in the y-axis direction. The corresponding transformation expression is:
x' = S_x · x, y' = S_y · y
The inverse operation is:
x = x'/S_x, y = y'/S_y
When the image is enlarged along an axis, the interpolated values are computed by the nearest-neighbor interpolation method.
Further, the processing method for enhancing the zoomed image comprises the following steps: operating the pixel points in the image through a spatial domain, wherein the formula is as follows:
g(x,y)=f(x,y)*h(x,y)
where f (x, y) is the original image, h (x, y) is the spatial conversion function, and g (x, y) represents the processed image.
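As a sketch of the point operation g(x, y) = f(x, y) * h(x, y), the helper below multiplies each pixel by a user-supplied transfer value and clips to the 8-bit range (the clipping and the example gain are illustrative assumptions, not specified by the patent):

```python
def spatial_enhance(img, h_fn):
    """Spatial-domain enhancement g(x, y) = f(x, y) * h(x, y): each pixel
    of f is multiplied by the transfer function h at that position."""
    return [
        [min(255, int(px * h_fn(x, y))) for x, px in enumerate(row)]
        for y, row in enumerate(img)
    ]

# A uniform gain of 1.5 as an illustrative transfer function.
enhanced = spatial_enhance([[100, 200]], lambda x, y: 1.5)
```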
Step 3, building a learning model based on the ResNeXt network and training it to obtain the weld center feature point extraction model.
As shown in fig. 2, in the specific training process the preprocessed images are input into the ResNeXt network, which has 32 branches. In each branch, a 1 × 1 convolution first reduces the dimension, a 3 × 3 convolution is then performed, and a 1 × 1 convolution increases the dimension again, so the network can grow wider without increasing the number of parameters, making the model more accurate. Finally, the network coefficients are updated through the loss function, and after repeated training a model capable of accurately identifying the weld center feature points is obtained.
Specifically, the ResNeXt network structure has a total of C (C = 32) branches, each denoted as a transformation T_i. The transformations may have different structures, and the inputs are summed after the transformation of each branch, so the multi-branch transformation expression is:

F(x) = Σ_{i=1}^{C} T_i(x)

In addition, a direct (shortcut) connection is also provided, so the expression of the whole module is:

y = x + Σ_{i=1}^{C} T_i(x)
when the same branch structure is adopted, the robustness of the model can be improved, and overfitting on a certain data set can be avoided.
Further, let the output of the 3 × 3 convolution in branch b be h3_{b,i}, where the superscript 3 denotes the 3 × 3 convolution, b denotes the branch index (b = 1, 2, …, 32), and i denotes the channel index (i = 1, 2, 3, 4). After the 3 × 3 convolutions of all branches there are 32 × 4 = 128 such outputs in total.
Summing the results of all branches gives the summed output of all branches:

y_j = Σ_{b=1}^{32} Σ_{i=1}^{4} w_{b,i,j} · h3_{b,i}

That is, each channel of each branch is multiplied by a coefficient and the products are summed. Assigning each channel of each branch the uniform index k = (b - 1) × 4 + i, the above equation becomes:

y_j = Σ_{k=1}^{128} w_{k,j} · h3_k

Finally, the result is input into the loss function, the coefficient of each channel is updated, and the training is repeated 500 times to obtain a model that can accurately extract the weld center feature points.
Step 4, the robot control module receives the weld feature point information, obtains a welding path instruction, and performs the operation.
Specifically, the weld image is processed through the ResNeXt network to obtain the weld feature points, which are then converted into motion information for the robot and transmitted to the robot control module, so that the robot works along the detected path.
In conclusion, the invention provides an automatic visual weld seam tracking method based on the ResNeXt network structure. The method increases the number of branches in the network and applies a split-transform-merge strategy in a simple, extensible way, achieving high image classification accuracy while adding few parameters and little computation.
In alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flow charts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed and in which sub-operations described as part of larger operations are performed independently.
Furthermore, although the present invention is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the described functions and/or features may be integrated in a single physical device and/or software module, or one or more functions and/or features may be implemented in a separate physical device or software module. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary for an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer given the nature, function, and interrelationships of the modules. Accordingly, those skilled in the art can, using ordinary skill, practice the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the invention, which is to be determined from the appended claims along with their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Further, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A weld tracking method based on a ResNeXt network is characterized by comprising the following steps:
collecting sample welding seam image information;
performing image preprocessing on the sample welding seam image information to obtain a training set;
according to the training set, a learning model is built through a ResNeXt network for training, and a welding seam center feature extraction point model is obtained;
acquiring corresponding weld characteristic point information to be identified according to the weld central characteristic extraction point model;
generating a welding working path of the robot control module according to the welding seam characteristic point information;
and controlling the work of the robot control module according to the welding work path.
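The six claimed steps can be sketched as a pipeline in which each stage is an injected callable (the names and signatures below are hypothetical, purely to show the data flow; model training, step 3 of the claim, happens offline before the loop runs):

```python
def weld_tracking_pipeline(acquire, preprocess, extract_points, plan_path, execute):
    """Run the claimed steps in order. The trained ResNeXt feature-point model
    is assumed to be baked into `extract_points` (training itself is offline)."""
    image = acquire()                 # collect weld seam image information
    clean = preprocess(image)         # image preprocessing
    points = extract_points(clean)    # weld feature points from the trained model
    path = plan_path(points)          # welding work path for the robot controller
    return execute(path)              # drive the robot control module
```

Any stage can be swapped out (for example, replacing the camera stage with a file reader during testing) without touching the rest of the loop.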
2. The ResNeXt network-based weld tracking method according to claim 1, wherein the collecting sample weld image information comprises:
irradiating the surface of the workpiece through a line laser to obtain the section profile information of the welding line;
collecting the section outline information of the welding seam through an industrial camera;
and converting the acquired image signals into digital signals through an image acquisition card, storing the digital signals into an internal memory of the industrial personal computer, and determining the size and dimension information of the visual image.
3. The ResNeXt network-based weld tracking method according to claim 1, wherein the image preprocessing is performed on the sample weld image information to obtain a training set, and the training set comprises at least one of the following:
carrying out image graying processing on the collected sample weld image information;
carrying out image median filtering processing on the collected sample welding seam image information;
carrying out image scaling processing on the collected sample weld image information;
and carrying out image enhancement processing on the acquired sample weld image information.
4. The ResNeXt network-based weld tracking method according to claim 3,
the image graying processing of the collected sample welding seam image information comprises the following steps:
taking the brightness of each of the three components of the RGB image in the collected sample welding seam image information as the gray value of a corresponding gray image, so as to obtain a gray image;
the image median filtering processing is carried out on the collected sample welding seam image information, and the method comprises the following steps:
placing the image template in a sample weld image for roaming;
the center of the image template is superposed with one pixel position in the sample weld image;
reading the gray value of each pixel covered by the image template, sorting the gray values in ascending order, and assigning the middle value to the pixel corresponding to the center position of the image template;
the image scaling processing is carried out on the collected sample welding seam image information, and the image scaling processing comprises the following steps:
configuring a scaling ratio of an image in an x-axis direction and a scaling ratio of an image in a y-axis direction;
when the image is stretched along an axis, performing interpolation by adopting a nearest-neighbor interpolation method;
the image enhancement processing is carried out on the collected sample welding seam image information, and the image enhancement processing comprises the following steps:
enhancing the pixel points in the image through a spatial domain;
wherein the expression of the enhancement processing is as follows:
g(x,y)=f(x,y)*h(x,y)
wherein f (x, y) represents the original image; h (x, y) represents a spatial-domain transfer (filter) function; g (x, y) represents the processed image; and * denotes convolution.
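The graying and median-filtering steps of claim 4 can be sketched in pure Python (an illustrative sketch only: the luminance weights and the 3 x 3 template size are assumptions, not fixed by the claim):

```python
def to_gray(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to grayscale.
    The standard luminance weights are an assumption for this sketch."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def median_filter(gray, size=3):
    """Roam a size x size template over the image; at each position, sort the
    covered gray values ascending and assign the middle one to the centre pixel.
    Border pixels (where the template does not fit) are left unchanged."""
    h, w = len(gray), len(gray[0])
    half = size // 2
    out = [row[:] for row in gray]
    for y in range(half, h - half):
        for x in range(half, w - half):
            window = sorted(gray[yy][xx]
                            for yy in range(y - half, y + half + 1)
                            for xx in range(x - half, x + half + 1))
            out[y][x] = window[len(window) // 2]
    return out
```

Because the median replaces an isolated bright spike with a typical neighbourhood value, this filter suppresses the impulse noise common in laser-stripe weld images while preserving edges better than mean filtering.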
5. The welding seam tracking method based on the ResNeXt network as claimed in claim 1, wherein the training is performed by building a learning model through the ResNeXt network according to the training set to obtain a welding seam center feature extraction point model, comprising:
in the process of training by building a learning model through the ResNeXt network, each branch first reduces the dimension using a 1 × 1 convolution, then performs a 3 × 3 convolution, and then performs dimension-ascending processing using a 1 × 1 convolution, the number of branches being increased;
the ResNeXt network structure has 32 branches, each branch corresponding to one transformation and each transformation comprising a plurality of different structures; the data input into the ResNeXt network are summed together after the transformation of each branch, and the result is finally input into a loss function, so that the coefficient of each channel is updated; the training is repeated 500 times.
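The branch structure of claim 5 (1 x 1 reduce, 3 x 3 transform, 1 x 1 expand, branch outputs summed) can be sketched for a single pixel position, where every 1 x 1 convolution collapses to a matrix-vector product. This is an illustrative pure-Python sketch, not the patented implementation; the identity shortcut at the end follows the standard ResNeXt residual design.

```python
def linear(x, weights):
    """Apply a 1x1 convolution at one pixel: a plain matrix-vector product,
    out[j] = sum_i weights[j][i] * x[i]."""
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

def branch(x, w_reduce, w_transform, w_expand):
    """One ResNeXt branch: 1x1 dimension reduction, a 3x3 transform (modelled
    as another linear map at this single pixel), then 1x1 dimension ascent."""
    return linear(linear(linear(x, w_reduce), w_transform), w_expand)

def aggregated_block(x, branches):
    """Sum the outputs of all branches (32 in the claim; any number here) and
    add the identity shortcut, as in the standard ResNeXt residual block."""
    total = [0.0] * len(x)
    for w_r, w_t, w_e in branches:
        for j, v in enumerate(branch(x, w_r, w_t, w_e)):
            total[j] += v
    return [t + v for t, v in zip(total, x)]
```

In a real network each branch would operate on full feature maps and the summed aggregation is equivalent to a single grouped convolution, which is how ResNeXt is normally implemented in practice.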
6. The ResNeXt network-based weld tracking method according to claim 5,
the expression of the 1 × 1 ascending-dimension convolution processing in a single branch is:
z_j^(b) = Σ_{i=1}^{4} w_{b,i,j} · u_i^(b)
and the results of all branches are summed according to the expression:
y_j = Σ_{b=1}^{32} z_j^(b)
wherein b represents the branch number (b = 1, 2, …, 32); i represents the channel number of the second (3 × 3) convolutional layer (i = 1, 2, 3, 4); j represents the channel number of the third (1 × 1) convolutional layer (j = 1, 2, …, 256); z_j^(b) represents the output of the j-th channel of the 1 × 1 convolutional layer in branch b; w_{b,i,j} represents the parameter mapping the i-th input channel of the 1 × 1 convolutional layer in branch b to its j-th output channel; u_i^(b) represents the output of the i-th channel of the 3 × 3 convolutional layer in branch b; and y_j represents the j-th channel of the summed output of all branches.
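The two summations of claim 6 can be checked numerically with toy dimensions (2 branches, 2 input channels, and 3 output channels standing in for the claimed 32/4/256; all weights and activations below are made up):

```python
B, I, J = 2, 2, 3  # branches, 3x3-layer channels, 1x1-layer channels (toy sizes)

# u[b][i]: output of the i-th channel of the 3x3 layer in branch b (made up)
u = [[1.0, 2.0], [3.0, 4.0]]
# w[b][i][j]: weight mapping input channel i to output channel j in branch b
w = [[[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]],
     [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]]]

# z[b][j] = sum_i w[b][i][j] * u[b][i]  (per-branch 1x1 ascending convolution)
z = [[sum(w[b][i][j] * u[b][i] for i in range(I)) for j in range(J)]
     for b in range(B)]
# y[j] = sum_b z[b][j]                  (outputs of all branches summed)
y = [sum(z[b][j] for b in range(B)) for j in range(J)]
```

Each y[j] mixes information from every branch, which is the aggregation that gives ResNeXt its increased "cardinality" without increasing depth or width.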
7. A weld tracking device based on the ResNeXt network, comprising:
the first module is used for collecting sample welding seam image information;
the second module is used for carrying out image preprocessing on the sample welding seam image information to obtain a training set;
the third module is used for building a learning model through a ResNeXt network according to the training set to train so as to obtain a welding seam center feature extraction point model;
the fourth module is used for acquiring corresponding welding seam feature point information to be identified according to the welding seam center feature extraction point model;
the fifth module is used for generating a welding working path of the robot control module according to the welding seam characteristic point information;
and the sixth module is used for controlling the work of the robot control module according to the welding work path.
8. An electronic device comprising a processor and a memory;
the memory is used for storing programs;
the processor executing the program implements the method of any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that the storage medium stores a program, which is executed by a processor to implement the method according to any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the method according to any of claims 1 to 6 when executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211369451.3A CN115908281A (en) | 2022-11-03 | 2022-11-03 | Weld joint tracking method based on ResNeXt network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115908281A true CN115908281A (en) | 2023-04-04 |
Family
ID=86477645
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211369451.3A Pending CN115908281A (en) | 2022-11-03 | 2022-11-03 | Weld joint tracking method based on ResNeXt network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115908281A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112435198A (en) * | 2020-12-03 | 2021-03-02 | 西安交通大学 | Welding seam radiographic inspection negative image enhancement method, storage medium and equipment |
CN112529858A (en) * | 2020-12-02 | 2021-03-19 | 南京理工大学北方研究院 | Welding seam image processing method based on machine vision |
CN112756742A (en) * | 2021-01-08 | 2021-05-07 | 南京理工大学 | Laser vision weld joint tracking system based on ERFNet network |
CN113011253A (en) * | 2021-02-05 | 2021-06-22 | 中国地质大学(武汉) | Face expression recognition method, device, equipment and storage medium based on ResNeXt network |
CN113551599A (en) * | 2021-07-22 | 2021-10-26 | 江苏省特种设备安全监督检验研究院 | Welding seam position deviation visual tracking method based on structured light guidance |
Non-Patent Citations (2)
Title |
---|
张云佐: "《数字图像处理技术及应用》", 30 April 2022, 北京理工大学出版社, pages: 49 - 50 * |
黄少罗: "《MATLAB 2020图形与图像处理从入门到精通》", 31 January 2021, 机械工业出版社, pages: 228 - 230 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110378879B (en) | Bridge crack detection method | |
KR100421221B1 (en) | Illumination invariant object tracking method and image editing system adopting the method | |
CN111256594B (en) | Method for measuring physical characteristics of surface state of aircraft skin | |
CN110660104A (en) | Industrial robot visual identification positioning grabbing method, computer device and computer readable storage medium | |
US7747080B2 (en) | System and method for scanning edges of a workpiece | |
CN111553949B (en) | Positioning and grabbing method for irregular workpiece based on single-frame RGB-D image deep learning | |
CN110598692B (en) | Ellipse identification method based on deep learning | |
CN111126385A (en) | Deep learning intelligent identification method for deformable living body small target | |
CN110334727B (en) | Intelligent matching detection method for tunnel cracks | |
CN112634159B (en) | Hyperspectral image denoising method based on blind noise estimation | |
CN112819748B (en) | Training method and device for strip steel surface defect recognition model | |
CN110866870B (en) | Super-resolution processing method for amplifying medical image by any multiple | |
CN111179170A (en) | Rapid panoramic stitching method for microscopic blood cell images | |
CN114240845A (en) | Surface roughness measuring method by adopting light cutting method applied to cutting workpiece | |
CN118212240B (en) | Automobile gear production defect detection method | |
CN112597998A (en) | Deep learning-based distorted image correction method and device and storage medium | |
CN117456195A (en) | Abnormal image identification method and system based on depth fusion | |
CN118247711A (en) | Method and system for detecting small target of transducer architecture | |
CN115908281A (en) | Weld joint tracking method based on ResNeXt network | |
CN116934734A (en) | Image-based part defect multipath parallel detection method, device and related medium | |
CN115229374B (en) | Method and device for detecting quality of automobile body-in-white weld seam based on deep learning | |
CN116912158A (en) | Workpiece quality inspection method, device, equipment and readable storage medium | |
CN115861220A (en) | Cold-rolled strip steel surface defect detection method and system based on improved SSD algorithm | |
CN115343364A (en) | Method for quickly positioning welding area of new energy automobile battery busbar | |
CN117934310B (en) | Vascular fluorescence image and RGB image fusion system based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20230404 |