CN117636109A - Weld joint identification method, weld joint identification network training method, equipment and storage medium - Google Patents

Weld joint identification method, weld joint identification network training method, equipment and storage medium Download PDF

Info

Publication number
CN117636109A
CN117636109A (application CN202311458720.8A)
Authority
CN
China
Prior art keywords
feature
weld
features
sample
workpiece
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311458720.8A
Other languages
Chinese (zh)
Inventor
刘贤柱
李辉
邱强
谢梦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ruben Technology Co ltd
Original Assignee
Shenzhen Ruben Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ruben Technology Co ltd filed Critical Shenzhen Ruben Technology Co ltd
Priority to CN202311458720.8A priority Critical patent/CN117636109A/en
Publication of CN117636109A publication Critical patent/CN117636109A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a weld identification method, a weld identification network training method, a device, and a storage medium. The method comprises: acquiring a color image and a reference image of a workpiece to be welded, wherein the color image and the reference image characterize different modality information; performing feature extraction on the color image and the reference image respectively by using a weld identification network to obtain initial features and reference features of the workpiece to be welded; performing feature fusion based on the initial features and the reference features by using the weld identification network to obtain target features of the workpiece to be welded; and obtaining the category of the weld in the workpiece to be welded and the spatial information of the weld based on the target features by using the weld identification network. In this way, the accuracy of weld identification can be improved.

Description

Weld joint identification method, weld joint identification network training method, equipment and storage medium
Technical Field
The application relates to the technical field of automatic welding, in particular to a welding seam identification method, a welding seam identification network training method, equipment and a storage medium.
Background
With continuously growing market demand and the pressing need for more humane working environments, traditional manual welding can no longer meet current market requirements, and intelligent welding robots address its problems of low efficiency, poor quality and high cost. The main prerequisite of intelligent robot welding is accurately identifying the weld of the workpiece to be welded; however, because welds are slender, similar in shape and vary greatly in depth, weld identification is difficult and the accuracy of the identification result is low.
Disclosure of Invention
The technical problem mainly solved by this application is to provide a weld identification method, a weld identification network training method, a device and a storage medium that can improve the accuracy of weld identification.
In order to solve the technical problem, a first aspect of the present application provides a method for identifying a weld, which includes: acquiring a color image and a reference image of a workpiece to be welded; wherein the color image and the reference image characterize different modality information; extracting features of the color image and the reference image respectively to obtain initial features and reference features of a workpiece to be welded; performing feature fusion based on the initial feature and the reference feature to obtain a target feature of the workpiece to be welded; and obtaining the category of the welding seam in the workpiece to be welded and the spatial information of the welding seam based on the target characteristics.
Wherein, performing feature extraction on the color image and the reference image respectively to obtain the initial features and the reference features of the workpiece to be welded includes: performing a plurality of levels of feature extraction on the color image and the reference image respectively, wherein the input features of the current-level feature extraction are the output features of the previous-level feature extraction; and obtaining the initial features and the reference features from the output features of the last-level feature extraction.
Wherein, performing feature fusion based on the initial features and the reference features to obtain the target features of the workpiece to be welded includes: obtaining the target feature corresponding to the current-level feature extraction based on the output feature of the current-level feature extraction and the fusion feature of the current level; the fusion feature of the current level is the target feature corresponding to the feature extraction of the level following the current level, and the fusion feature of the last level is the attention feature obtained by performing attention processing on the initial features and the reference features.
Wherein, obtaining the target feature corresponding to the current-level feature extraction based on the output feature of the current-level feature extraction and the fusion feature of the current level includes: performing feature extraction based on a first output feature obtained by performing the current-level feature extraction on the color image and the fusion feature of the current level to obtain a first feature and a first confidence; performing feature extraction based on a second output feature obtained by performing the current-level feature extraction on the reference image and the fusion feature of the current level to obtain a second feature and a second confidence; obtaining a first weight and a second weight based on the first confidence and the second confidence; and weighting the first feature and the second feature with the first weight and the second weight respectively to obtain the target feature corresponding to the current-level feature extraction.
Wherein, obtaining the category of the weld in the workpiece to be welded and the spatial information of the weld based on the target features includes: performing weld recognition based on the target features to obtain the category of the weld and mask data; and obtaining the spatial information of the weld based on the mask data and the point cloud data of the workpiece to be welded.
Wherein the mask data comprises a plurality of mask sub-data representing different weld regions; the spatial information of the weld includes at least one of endpoint information of the weld, the size of the weld, and the angle of the weld.
Wherein, obtaining the spatial information of the weld based on the mask data and the point cloud data of the workpiece to be welded includes: for each weld region, fitting the minimum bounding rectangle of the weld region based on the mask sub-data corresponding to the weld region; determining, using the position of the minimum bounding rectangle, at least one of the two-dimensional coordinates in the color image of the endpoints of the weld belonging to the weld region and the size of the weld; and determining the endpoint information of the weld and the angle of the weld based on the position of the minimum bounding rectangle, the two-dimensional coordinates of the endpoints of the weld, and the point cloud data.
Wherein the reference image comprises at least one of a depth image, a thermal image, and an X-ray image.
In order to solve the technical problem, a second aspect of the present application provides a weld joint recognition network training method, which includes: acquiring a sample color image and a sample reference image of a workpiece to be welded; wherein the sample color image and the sample reference image characterize different modality information; respectively extracting features of the sample color image and the sample reference image by utilizing a weld joint recognition network to obtain sample initial features and sample reference features of a workpiece to be welded; performing feature fusion based on the initial features of the sample and the reference features of the sample to obtain target features of the sample; respectively carrying out semantic segmentation and instance segmentation on sample target features to obtain sample types and sample mask data of welding seams in workpieces to be welded; calculating a first loss based on the sample type and the annotation type, and calculating a second loss based on the sample mask data and the annotation mask data; based on the first loss and the second loss, network parameters of the weld identification network are adjusted.
To solve the above technical problem, a third aspect of the present application provides an electronic device, which includes a memory and a processor that are coupled to each other, where the memory stores program instructions; the processor is configured to execute program instructions stored in the memory to implement the method provided in the first aspect.
To solve the above technical problem, a fourth aspect of the present application provides a computer-readable storage medium for storing program instructions that can be executed to implement the method provided in the first aspect.
The beneficial effects of this application are as follows. In contrast to the prior art, the method acquires a color image and a reference image of a workpiece to be welded, where the color image and the reference image characterize different modality information; performs feature extraction on the color image and the reference image respectively to obtain initial features and reference features of the workpiece to be welded; performs feature fusion based on the initial features and the reference features to obtain target features of the workpiece to be welded; and obtains the category of the weld in the workpiece to be welded and the spatial information of the weld based on the target features. By combining the reference image with the color image and fusing the features of the two modality images, the obtained target features of the workpiece to be welded contain more information, so that a more accurate weld category and more accurate weld spatial information can be obtained from the target features, and the accuracy of weld identification can be improved.
Drawings
FIG. 1 is a flow chart of an embodiment of a weld identification method provided herein;
FIG. 2 is a schematic diagram of an embodiment of a weld identification network provided herein;
FIG. 3 is a schematic diagram of an embodiment of acquiring target features corresponding to current level feature extraction;
FIG. 4 is a flow chart of an embodiment of a method for training a weld identification network provided herein;
FIG. 5 is a schematic diagram of a frame structure of an embodiment of an electronic device provided herein;
FIG. 6 is a schematic diagram of a framework of an embodiment of a computer readable storage medium provided herein.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present application with reference to the accompanying drawings. It is evident that the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without inventive effort shall fall within the protection scope of the present application.
It should be noted that, in the embodiments of the present application, there is a description of "first", "second", etc., which are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Referring to fig. 1-3 in combination, fig. 1 is a schematic flow chart of an embodiment of a weld seam identification method provided in the present application, fig. 2 is a schematic diagram of an embodiment of a weld seam identification network provided in the present application, and fig. 3 is a schematic diagram of an embodiment of obtaining a target feature corresponding to a current level feature extraction in the present application, where the method includes:
S11: Acquiring a color image and a reference image of a workpiece to be welded; wherein the color image and the reference image characterize different modality information.
In an embodiment, the color image is an RGB image, and the reference image may include at least one of a depth image, a thermal image, and an X-ray image. The color image and the reference image can be acquired by a camera or a sensor and then sent to the weld identification network. The color image and the reference image may be acquired under the same acquisition conditions, where the same acquisition conditions may be the same acquisition position, acquisition angle, and the like.
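As a small illustration of this acquisition step (file names, the 16-bit depth encoding and the millimetre scale are assumptions, not taken from the application), an aligned color/depth pair could be loaded as follows:

```python
# Illustrative sketch: loading an aligned color image and depth (reference) image
# captured from the same position and angle. File names and the 16-bit depth
# encoding are assumptions for illustration only.
import cv2
import numpy as np

color_bgr = cv2.imread("workpiece_color.png", cv2.IMREAD_COLOR)      # H x W x 3, 8-bit
depth_raw = cv2.imread("workpiece_depth.png", cv2.IMREAD_UNCHANGED)  # H x W, e.g. 16-bit millimetres

color_rgb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2RGB)               # RGB image for the network
depth_m = depth_raw.astype(np.float32) / 1000.0                      # convert to metres (assumed scale)

assert color_rgb.shape[:2] == depth_m.shape[:2], "images must be pixel-aligned"
```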
S12: and respectively extracting the characteristics of the color image and the reference image by utilizing a weld joint identification network to obtain the initial characteristics and the reference characteristics of the workpiece to be welded.
In one embodiment, the color image and the reference image may be feature extracted using a feature extraction network, which may comprise a ResNet-50 model. In particular, the feature extraction network may comprise a plurality of successive feature extraction layers, each of which may extract features of different resolutions, the resolution decreasing from layer to layer in the order of feature extraction. The feature extraction network can respectively perform a plurality of levels of feature extraction on the color image and the reference image by using a plurality of continuous feature extraction layers, each feature extraction layer is used for executing one level of feature extraction, the input features of the current level of feature extraction layer are output features obtained by the previous level of feature extraction layer, and the initial features and the reference features are obtained by using the output features obtained by the last level of feature extraction.
In a specific embodiment, as shown in fig. 2, the feature extraction network includes four feature extraction layers (layer 1, layer 2, layer 3 and layer 4). In the order of feature extraction, the four layers respectively produce features with resolutions of 1/4, 1/8, 1/16 and 1/32 of the resolution of the color image or the reference image. When the color image is input into the feature extraction network, the network outputs the initial feature, which is an RGB feature; when the reference image is input into the feature extraction network, the network outputs the reference feature, e.g., if the reference image is a depth image, the reference feature is a depth feature.
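The following is a minimal PyTorch sketch (not the application's code) of such a four-level backbone, assuming a ResNet-50 whose layer1-layer4 outputs serve as the 1/4, 1/8, 1/16 and 1/32 features; class and variable names are illustrative only.

```python
# Illustrative sketch: a ResNet-50 backbone that returns the four per-level
# feature maps described above, at 1/4, 1/8, 1/16 and 1/32 of the input resolution.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class MultiLevelBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        net = resnet50(weights=None)  # random init; the application does not specify pretraining
        # Stem: conv1 + bn + relu + maxpool brings the input to 1/4 resolution.
        self.stem = nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool)
        self.layer1 = net.layer1  # 1/4 resolution
        self.layer2 = net.layer2  # 1/8 resolution
        self.layer3 = net.layer3  # 1/16 resolution
        self.layer4 = net.layer4  # 1/32 resolution

    def forward(self, x):
        x = self.stem(x)
        c1 = self.layer1(x)   # output of level-1 feature extraction
        c2 = self.layer2(c1)  # each level's input is the previous level's output
        c3 = self.layer3(c2)
        c4 = self.layer4(c3)  # last-level output: the "initial" or "reference" feature
        return [c1, c2, c3, c4]


if __name__ == "__main__":
    rgb = torch.randn(1, 3, 512, 512)    # color image
    depth = torch.randn(1, 3, 512, 512)  # reference image, e.g. depth replicated to 3 channels
    rgb_feats = MultiLevelBackbone()(rgb)
    ref_feats = MultiLevelBackbone()(depth)
    print([f.shape for f in rgb_feats])  # strides 4, 8, 16, 32
```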
S13: and carrying out feature fusion by utilizing a weld joint recognition network based on the initial feature and the reference feature to obtain the target feature of the workpiece to be welded.
In an embodiment, the initial feature and the reference feature may be fused, so that the target feature of the workpiece to be welded may be obtained, that is, only the output feature of the last feature extraction layer of the feature extraction network is fused.
In another embodiment, the output features of each feature extraction layer of the feature extraction network may be fused to obtain the target feature corresponding to each feature extraction layer. Specifically, the target feature corresponding to the current-level feature extraction is obtained based on the output feature of the current-level feature extraction and the fusion feature of the current level; the fusion feature of the current level is the target feature corresponding to the feature extraction of the level following the current level, and the fusion feature of the last level is the attention feature obtained by performing attention processing on the initial features and the reference features.
In a specific embodiment, as shown in fig. 3, obtaining the target feature corresponding to the current-level feature extraction based on the output feature of the current-level feature extraction and the fusion feature of the current level includes: performing feature extraction based on a first output feature obtained by performing the current-level feature extraction on the color image and the fusion feature of the current level to obtain a first feature and a first confidence; performing feature extraction based on a second output feature obtained by performing the current-level feature extraction on the reference image and the fusion feature of the current level to obtain a second feature and a second confidence; obtaining a first weight and a second weight from the first confidence and the second confidence using an activation function, where the activation function may be a softmax function; multiplying the first feature by the first weight to obtain a third feature and the second feature by the second weight to obtain a fourth feature (the weighting processing); and summing the third feature and the fourth feature to obtain the target feature corresponding to the current-level feature extraction.
Referring to fig. 2 and 3 in combination, in a specific embodiment, the four feature extraction layers perform feature extraction on the color image to obtain four first output features with resolutions of 1/4, 1/8, 1/16 and 1/32 of the resolution of the color image, the first output feature with a resolution of 1/32 being the initial feature; the four feature extraction layers likewise perform feature extraction on the reference image to obtain four second output features with resolutions of 1/4, 1/8, 1/16 and 1/32 of the resolution of the reference image, the second output feature with a resolution of 1/32 being the reference feature. An attention layer performs attention processing on the reference feature and the initial feature to obtain an attention feature, and the attention feature serves as the fusion feature corresponding to the last feature extraction layer, i.e., the fusion feature of the last level. The first output feature of the last level (i.e., the initial feature) is connected with the fusion feature of the last level, and feature extraction is performed on the connected features to obtain a first feature and a first confidence; the second output feature of the last level (i.e., the reference feature) is connected with the fusion feature of the last level, and feature extraction is performed on the connected features to obtain a second feature and a second confidence; a first weight and a second weight are obtained based on the first confidence and the second confidence; and the first feature and the second feature are weighted with the first weight and the second weight respectively to obtain the target feature corresponding to the last-level feature extraction.
The target feature corresponding to the last-level feature extraction is then taken as the fusion feature of the third-level feature extraction. The first output feature obtained by the third-level feature extraction (i.e., the first output feature with a resolution of 1/16) is connected with the fusion feature of the third level, and feature extraction is performed on the connected features to obtain a first feature and a first confidence; the second output feature obtained by the third-level feature extraction (i.e., the second output feature with a resolution of 1/16) is connected with the fusion feature of the third level, and feature extraction is performed on the connected features to obtain a second feature and a second confidence; a first weight and a second weight are obtained based on the first confidence and the second confidence; and the first feature and the second feature are weighted with the first weight and the second weight respectively to obtain the target feature corresponding to the third-level feature extraction.
And similarly, obtaining target features corresponding to the second-stage feature extraction and target features corresponding to the first-stage feature extraction in the same way. Through the mode, the target feature corresponding to each feature extraction layer can be obtained. In a specific embodiment, each level of feature extraction corresponds to a target feature, which can be obtained by the following formula.
$F_i = \frac{e^{c_i^1}}{e^{c_i^1}+e^{c_i^2}}\, f_i^1 + \frac{e^{c_i^2}}{e^{c_i^1}+e^{c_i^2}}\, f_i^2$, where $F_i$ represents the target feature corresponding to the i-th level of feature extraction, $c_i^1$ represents the first confidence of the i-th level of feature extraction, $c_i^2$ represents the second confidence of the i-th level of feature extraction, $f_i^1$ represents the first feature obtained by the i-th level of feature extraction, and $f_i^2$ represents the second feature obtained by the i-th level of feature extraction.
In another embodiment, when the fusion feature of the current level and the output feature of the current level are connected, the fusion feature of the current level may first be up-sampled by linear interpolation to the same resolution as the output feature of the current level and then connected with the output feature of the current level.
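A minimal PyTorch sketch of one such fusion step is given below, assuming each branch concatenates its level output with the up-sampled fusion feature, a small convolutional head produces a feature map plus a pooled scalar confidence, and a softmax over the two confidences yields the weights; the layer sizes and the scalar-confidence choice are assumptions, not taken from the application.

```python
# Illustrative sketch of one fusion step: the current-level outputs of the color
# branch and the reference branch are each concatenated with the current-level
# fusion feature, a per-branch head produces a feature and a confidence, and a
# softmax over the two confidences yields the mixing weights.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConfidenceFusion(nn.Module):
    def __init__(self, in_ch, fuse_ch, out_ch):
        super().__init__()
        def head():
            return nn.Sequential(
                nn.Conv2d(in_ch + fuse_ch, out_ch, 3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
        self.rgb_head, self.ref_head = head(), head()
        # One scalar confidence per branch, pooled over the spatial map (an assumption).
        self.rgb_conf = nn.Conv2d(out_ch, 1, 1)
        self.ref_conf = nn.Conv2d(out_ch, 1, 1)

    def forward(self, rgb_out, ref_out, fused_prev):
        # Up-sample the higher-level fusion feature to the current resolution
        # by linear (bilinear) interpolation, as in the embodiment above.
        fused_prev = F.interpolate(fused_prev, size=rgb_out.shape[-2:],
                                   mode="bilinear", align_corners=False)
        f1 = self.rgb_head(torch.cat([rgb_out, fused_prev], dim=1))  # first feature
        f2 = self.ref_head(torch.cat([ref_out, fused_prev], dim=1))  # second feature
        c1 = self.rgb_conf(f1).mean(dim=(2, 3))                      # first confidence
        c2 = self.ref_conf(f2).mean(dim=(2, 3))                      # second confidence
        w = torch.softmax(torch.cat([c1, c2], dim=1), dim=1)         # first/second weights
        w1 = w[:, 0].view(-1, 1, 1, 1)
        w2 = w[:, 1].view(-1, 1, 1, 1)
        return w1 * f1 + w2 * f2                                     # target feature of this level
```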
S14: and obtaining the category of the welding seam and the spatial information of the welding seam in the workpiece to be welded based on the target characteristics by utilizing the welding seam identification network.
In an embodiment, after the target feature corresponding to each feature extraction layer is obtained, weld recognition is performed based on the target features corresponding to the feature extraction layers. Specifically, the target feature corresponding to each feature extraction layer is input into a segmentation network, and the segmentation network performs instance segmentation and semantic segmentation based on these target features to obtain an instance segmentation result and a semantic segmentation result, where the instance segmentation result comprises mask data and the semantic segmentation result comprises the category of the weld. The spatial information of the weld is then obtained based on the mask data and the point cloud data of the workpiece to be welded. The segmentation network can adopt Mask R-CNN, PolarMask, YOLACT, Mask2Former, or the like, and the point cloud data of the workpiece to be welded can be acquired with a camera.
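As a rough stand-in (the application's segmentation network consumes the fused multi-level target features rather than raw images), the snippet below uses torchvision's off-the-shelf Mask R-CNN only to illustrate the kind of output this step produces: a class label, a score and a binary mask per detected weld instance; the number of classes is an assumption.

```python
# Illustrative stand-in: the output format of an instance segmentation model,
# i.e. a weld category and a binary mask per detected weld region.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights=None, num_classes=5)  # num_classes is illustrative
model.eval()

image = torch.rand(3, 512, 512)          # stand-in for the color image
with torch.no_grad():
    out = model([image])[0]              # list of dicts, one per image

labels = out["labels"]                   # predicted weld category per instance
scores = out["scores"]                   # confidence per instance
masks = out["masks"]                     # (N, 1, H, W) soft masks
binary_masks = (masks > 0.5).squeeze(1)  # mask sub-data, one binary mask per weld region
```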
In an embodiment, the mask data may include a plurality of mask sub-data characterizing different weld areas in the color image or the reference image, and obtaining the spatial information of the weld based on the mask data and the point cloud data of the workpiece to be welded includes: for each weld joint region, fitting to obtain the minimum circumscribed rectangle of the weld joint region based on mask sub-data corresponding to the weld joint region; determining at least one of a two-dimensional coordinate of an end point of the weld belonging to the weld region in the color image and a size of the weld using a position of the minimum bounding rectangle; and determining the endpoint information of the welding seam and the angle of the welding seam based on the position of the minimum circumscribed rectangle, the two-dimensional coordinates of the endpoint of the welding seam and the point cloud data.
Specifically, taking the case where one weld region contains one target weld as an example: for each weld region, after the minimum bounding rectangle of the weld region is obtained using the mask sub-data corresponding to the weld region, the coordinates of the four corner points of the minimum bounding rectangle can be obtained. The narrow sides of the minimum bounding rectangle can be determined from the coordinates of the four corner points, and the coordinates of the midpoints of the narrow sides are taken as the two-dimensional coordinates of the endpoints of the target weld belonging to the weld region, i.e., the endpoints of the target weld in the color image are known. The width of the narrow side is then taken as the width of the target weld belonging to the weld region, and the length of the target weld can be obtained from its endpoints. It will be appreciated that the endpoints here are the two points at either end of the target weld, i.e., the number of endpoints is 2. A region of interest (ROI) in the RGB image is acquired according to the coordinates of the four corner points of the minimum bounding rectangle; the ROI may contain the minimum bounding rectangle or may be the region where the minimum bounding rectangle is located. A 3D ROI in the point cloud data is then determined based on the ROI and a correspondence, where the correspondence is the correspondence between each pixel in the RGB image and each point in the point cloud data; NaN values (representing undefined or unrepresentable values) and discrete points in the 3D ROI are deleted. A 3D endpoint is found according to the two-dimensional coordinates of the endpoints of the target weld and the correspondence, a weld plane containing the 3D ROI is fitted based on the 3D ROI, the 3D endpoints are projected onto the weld plane, the projections of the 3D endpoints on the weld plane are taken as the final 3D endpoints of the target weld, and the coordinates of the final 3D endpoints are the three-dimensional coordinates of the endpoints of the target weld. The normal vector of the fitted weld plane determines the z-axis direction of a three-dimensional coordinate system, the x-axis direction is determined from the two final 3D endpoints, the y-axis direction is obtained by the cross product of the x-axis and z-axis directions, and once the three-dimensional coordinate system is determined, the angle of the target weld is obtained.
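A sketch of this geometric post-processing with OpenCV and NumPy is shown below; the pixel-aligned organized point cloud, the endpoint convention (midpoints of the narrow sides) and the least-squares plane fit follow the description above, while function and variable names are illustrative assumptions.

```python
# Illustrative sketch: fit the minimum bounding rectangle of one weld mask, take the
# midpoints of its two narrow sides as the 2D endpoints, derive width/length, and
# estimate the weld plane normal from the corresponding 3D points.
import cv2
import numpy as np


def weld_geometry(mask, cloud):
    """mask: (H, W) uint8 binary mask of one weld region.
    cloud: (H, W, 3) organized point cloud aligned pixel-for-pixel with the color image."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    pts = np.concatenate(contours, axis=0)
    rect = cv2.minAreaRect(pts)                    # ((cx, cy), (w, h), angle)
    corners = cv2.boxPoints(rect)                  # 4 corner points of the rectangle

    # The two shortest edges are the narrow sides; their midpoints are the 2D endpoints.
    edges = [(corners[i], corners[(i + 1) % 4]) for i in range(4)]
    lengths = [np.linalg.norm(a - b) for a, b in edges]
    short_idx = np.argsort(lengths)[:2]
    endpoints_2d = [tuple(((edges[i][0] + edges[i][1]) / 2).astype(int)) for i in short_idx]

    width = min(rect[1])                           # narrow-side length = weld width
    length = np.linalg.norm(np.subtract(endpoints_2d[0], endpoints_2d[1]))

    # 3D region of interest: points under the mask, with invalid (NaN) points dropped.
    roi = cloud[mask.astype(bool)]
    roi = roi[~np.isnan(roi).any(axis=1)]

    # Fit the weld plane by SVD; the last right-singular vector is the plane normal.
    centered = roi - roi.mean(axis=0)
    normal = np.linalg.svd(centered, full_matrices=False)[2][-1]

    endpoints_3d = [cloud[v, u] for (u, v) in endpoints_2d]  # (u, v) = (column, row)
    return endpoints_2d, width, length, endpoints_3d, normal
```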
The method comprises: obtaining a color image and a reference image of a workpiece to be welded, where the color image and the reference image characterize different modality information; performing feature extraction on the color image and the reference image respectively to obtain initial features and reference features of the workpiece to be welded; performing feature fusion based on the initial features and the reference features to obtain target features of the workpiece to be welded; and obtaining the category of the weld in the workpiece to be welded and the spatial information of the weld based on the target features. By combining the reference image with the color image and fusing the features of the two modality images, the obtained target features of the workpiece to be welded contain more information, so that a more accurate weld category and more accurate weld spatial information can be obtained from the target features, improving the accuracy of weld identification.
In an embodiment, the weld identification method provided by the application can be executed by a weld identification network. The weld identification network can comprise a feature extraction network, a fusion module and a segmentation network: the feature extraction network is used for performing feature extraction on the color image and the reference image respectively to obtain the initial features and the reference features of the workpiece to be welded; the fusion module is used for performing feature fusion based on the initial features and the reference features to obtain the target features of the workpiece to be welded; and the segmentation network is used for obtaining the category of the weld in the workpiece to be welded and the spatial information of the weld based on the target features.
Referring to fig. 4, fig. 4 is a flowchart illustrating an embodiment of a method for training a weld recognition network according to the present application, where the method includes:
S41: Acquiring a sample color image and a sample reference image of a workpiece to be welded; wherein the sample color image and the sample reference image characterize different modality information.
The sample color image and the sample reference image are acquired in the same manner as the color image and the reference image, i.e., captured by a camera or a sensor and then sent to the weld identification network.
S42: and respectively extracting the characteristics of the sample color image and the sample reference image by utilizing a weld joint identification network to obtain the initial characteristics and the reference characteristics of the sample of the workpiece to be welded.
In an embodiment, after the weld seam recognition network receives the sample color image and the sample reference image, the feature extraction network performs feature extraction on the sample color image and the sample reference image respectively, where the feature extraction network may include a plurality of feature extraction layers, an input feature of a subsequent feature extraction layer is an output feature of a previous feature extraction layer, and an output feature of a last feature extraction layer is a sample initial feature or a sample reference feature according to a sequence of feature extraction.
S43: and carrying out feature fusion by utilizing a weld joint recognition network based on the initial features of the sample and the reference features of the sample to obtain target features of the sample.
In an embodiment, a fusion operation is performed on the features output by each feature extraction layer, and the fusion is performed in the reverse order of feature extraction: the output features of the last feature extraction layer are fused first, then the output features of the layer preceding the last feature extraction layer, and so on, until the output features of all feature extraction layers have been fused.
In a specific embodiment, performing feature fusion based on the initial features of the sample and the reference features of the sample, and obtaining the target features of the sample includes: and performing attention processing on the initial sample characteristic and the reference sample characteristic to obtain a sample attention characteristic, and taking the sample attention characteristic as a fusion characteristic corresponding to the last characteristic extraction layer. Connecting a first sample output feature obtained by extracting features of a sample color image by a last feature extraction layer with a fusion feature corresponding to the last feature extraction layer, and extracting features obtained by connecting the features together to obtain a first sample feature and a first sample confidence; connecting a second sample output feature obtained by carrying out feature extraction on the sample reference image by the last feature extraction layer with a fusion feature corresponding to the last feature extraction layer, carrying out feature extraction on the features obtained by connecting the features together to obtain a second sample feature and a second sample confidence, and obtaining a first sample weight and a second sample weight based on the first sample confidence and the second sample confidence; and respectively weighting the first sample feature and the second sample feature by using the first sample weight and the second sample weight to obtain a sample target feature corresponding to the last feature extraction layer.
Taking the sample target feature corresponding to the last feature extraction layer as the fusion feature corresponding to the previous feature extraction layer of the last feature extraction layer, adopting the same step as that of obtaining the sample target feature corresponding to the last feature extraction layer to obtain the sample target feature corresponding to the previous feature extraction layer of the last feature extraction layer, and the like to obtain the sample target features corresponding to all the feature extraction layers.
S44: respectively carrying out semantic segmentation on sample target features by using a weld recognition network to obtain sample types of weld joints in workpieces to be welded; and performing instance segmentation on the sample target features to obtain sample mask data.
In one embodiment, after obtaining sample target features corresponding to each feature extraction layer, performing semantic segmentation based on the sample target features corresponding to each feature extraction layer by using a segmentation network in a weld recognition network to obtain sample types of welds in workpieces to be welded, and performing instance segmentation based on the sample target features corresponding to each feature extraction layer to obtain sample mask data.
S45: the first penalty is calculated based on the sample type and the annotation type, and the second penalty is calculated based on the sample mask data and the annotation mask data.
S46: based on the first loss and the second loss, network parameters of the weld identification network are adjusted.
In an embodiment, the first loss may be a cross entropy loss, and the first loss and the second loss may be directly summed to obtain a total loss, and the network parameters of the weld identification network are adjusted using the total loss.
In another embodiment, the first loss and the second loss may be weighted separately, i.e., the first loss and the second loss are multiplied by the corresponding weights, and summed to obtain the total loss.
In one embodiment, the calculation of the total loss can be expressed by the following formula.
$L_{total} = L_1 + L_2 = \sum_{j=1}^{N} -\log p_{\sigma(j)}(\hat{c}_j) + \sum_{j=1}^{N} L_{mask}(m_{\sigma(j)}, \hat{m}_j)$, where $L_{total}$ represents the total loss, $L_1$ represents the first loss, $L_2$ represents the second loss, N represents the number of predicted mask regions in the sample mask data obtained by the weld identification network, $p_{\sigma(j)}(\hat{c}_j)$ represents the probability that the j-th predicted mask region belongs to the annotation type $\hat{c}_j$ among the K types, $\hat{c}_j$ represents the annotation type of the annotation mask region corresponding to the j-th predicted mask region, $m_{\sigma(j)}$ represents the binary mask of the j-th predicted mask region in the sample mask data, and $\hat{m}_j$ represents the annotation binary mask of the annotation mask region corresponding to the j-th predicted mask region in the annotation mask data.
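A minimal sketch of this two-term objective is shown below, assuming cross-entropy for the first loss and a binary cross-entropy plus dice combination for the per-region mask loss, with the matching between predicted and annotated regions already given; the weights and the exact mask-loss form are assumptions, not specified by the application.

```python
# Illustrative sketch of the training objective: a cross-entropy first loss over weld
# types and a per-region mask second loss, summed (optionally with weights) into the
# total loss. The matching of predicted to annotated mask regions is assumed given.
import torch
import torch.nn.functional as F


def weld_training_loss(class_logits, gt_types, pred_masks, gt_masks,
                       w_cls=1.0, w_mask=1.0):
    """class_logits: (N, K) per predicted region, gt_types: (N,) annotation types,
    pred_masks: (N, H, W) mask logits, gt_masks: (N, H, W) binary annotation masks."""
    # First loss: cross-entropy between predicted type probabilities and annotation types.
    loss_cls = F.cross_entropy(class_logits, gt_types)

    # Second loss: binary cross-entropy plus a dice term on the matched masks.
    loss_bce = F.binary_cross_entropy_with_logits(pred_masks, gt_masks.float())
    probs = pred_masks.sigmoid()
    inter = (probs * gt_masks).sum(dim=(1, 2))
    union = probs.sum(dim=(1, 2)) + gt_masks.sum(dim=(1, 2))
    loss_dice = (1 - (2 * inter + 1) / (union + 1)).mean()
    loss_mask = loss_bce + loss_dice

    # Total loss: direct or weighted sum of the two terms.
    return w_cls * loss_cls + w_mask * loss_mask
```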
By training the weld seam identification network, the weld seam identification capability of the weld seam identification network can be improved.
Referring to fig. 5, fig. 5 is a schematic frame structure of an embodiment of an electronic device provided in the present application.
The electronic device 50 comprises a memory 51 and a processor 52 coupled to each other, the memory 51 storing program instructions, the processor 52 being adapted to execute the program instructions stored in the memory 51 to carry out the steps of any of the method embodiments described above. In one particular implementation scenario, the electronic device 50 may include, but is not limited to, a microcomputer or a server; the electronic device 50 may also be a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
In particular, the processor 52 is configured to control itself and the memory 51 to implement the steps of any of the method embodiments described above. The processor 52 may also be referred to as a CPU (Central Processing Unit). The processor 52 may be an integrated circuit chip having signal processing capabilities. The processor 52 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 52 may be implemented jointly by integrated circuit chips.
Referring to fig. 6, fig. 6 is a schematic diagram of a framework of an embodiment of a computer readable storage medium provided in the present application.
The computer readable storage medium 60 stores program instructions 61 for implementing the steps of any of the method embodiments described above when the program instructions 61 are executed by a processor.
The computer readable storage medium 60 may be a medium that can store a computer program, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, or it may be a server storing the computer program, which can send the stored computer program to another device for execution or run the stored computer program itself.
If the technical solution of the application involves personal information, a product applying the technical solution clearly informs of the personal information processing rules and obtains the individual's independent consent before processing the personal information. If the technical solution involves sensitive personal information, the product obtains the individual's separate consent before processing the sensitive personal information and at the same time meets the requirement of "explicit consent". For example, a clear and conspicuous sign is placed at a personal information collection device such as a camera to inform that the personal information collection range has been entered and that personal information will be collected; if an individual voluntarily enters the collection range, this is deemed consent to the collection of his or her personal information. Alternatively, on a device that processes personal information, where the personal information processing rules are communicated by means of obvious signs or notices, personal authorization is obtained through a pop-up message, by requesting the individual to upload personal information, or the like. The personal information processing rules may include information such as the personal information processor, the purpose and method of the personal information processing, and the types of personal information to be processed.
The foregoing description is only of embodiments of the present application, and is not intended to limit the scope of the patent application, and all equivalent structures or equivalent processes using the descriptions and the contents of the present application or other related technical fields are included in the scope of the patent application.

Claims (10)

1. A weld identification method, comprising:
acquiring a color image and a reference image of a workpiece to be welded; wherein the color image and the reference image characterize different modality information;
respectively extracting features of the color image and the reference image by utilizing a weld joint identification network to obtain initial features and reference features of the workpiece to be welded;
performing feature fusion based on the initial feature and the reference feature by using the weld joint recognition network to obtain target features of the workpiece to be welded;
and obtaining the category of the welding seam in the workpiece to be welded and the spatial information of the welding seam based on the target characteristics by utilizing the welding seam identification network.
2. The method according to claim 1, wherein the feature extraction of the color image and the reference image to obtain the initial feature and the reference feature of the workpiece to be welded includes:
performing a plurality of levels of feature extraction on the color image and the reference image respectively; the input features extracted from the current-stage features are output features extracted from the previous-stage features;
and obtaining the initial characteristic and the reference characteristic by utilizing the output characteristic obtained by the final-stage characteristic extraction.
3. The method according to claim 2, wherein the feature fusion based on the initial feature and the reference feature, to obtain the target feature of the workpiece to be welded, comprises:
obtaining a target feature corresponding to the current-stage feature extraction based on the output feature obtained by the current-stage feature extraction and the fusion feature of the current stage; the fusion feature of the current stage is the target feature corresponding to the extraction of the next stage feature of the current stage, and the fusion feature of the last stage is the attention feature obtained by performing attention processing on the initial feature and the reference feature.
4. The method according to claim 3, wherein the obtaining the target feature corresponding to the current level feature extraction based on the output feature obtained by the current level feature extraction and the fusion feature of the current level includes:
performing feature extraction based on a first output feature obtained by performing current-stage feature extraction on the color image and the fusion feature of the current stage to obtain a first feature and a first confidence;
performing feature extraction based on a second output feature obtained by performing current-stage feature extraction on the reference image and the fusion feature of the current stage to obtain a second feature and a second confidence;
obtaining a first weight and a second weight based on the first confidence and the second confidence;
and respectively weighting the first feature and the second feature by using the first weight and the second weight to obtain the corresponding target feature extracted from the current-stage feature.
5. The method of claim 1, wherein the obtaining the category of the weld in the workpiece to be welded and the spatial information of the weld based on the target feature comprises:
performing weld recognition based on the target features to obtain the type of the weld and mask data;
and obtaining the spatial information of the welding seam based on the mask data and the point cloud data of the workpiece to be welded.
6. The method of claim 5, wherein the mask data includes a number of mask sub-data characterizing different weld areas; the weld space information comprises at least one of endpoint information of the weld, size of the weld and angle of the weld;
the obtaining spatial information of the welding seam based on the mask data and the point cloud data of the workpiece to be welded includes:
for each weld joint region, fitting to obtain a minimum circumscribed rectangle of the weld joint region based on the mask sub-data corresponding to the weld joint region;
determining at least one of a two-dimensional coordinate of an end point of the weld belonging to the weld region in the color image and a size of the weld using a position of the minimum bounding rectangle;
and determining the endpoint information of the welding seam and the angle of the welding seam based on the position of the minimum circumscribed rectangle, the two-dimensional coordinates of the endpoint of the welding seam and the point cloud data.
7. The method of claim 1, wherein the reference image comprises at least one of a depth image, a thermal image, and an X-ray image.
8. A weld identification network training method, comprising:
acquiring a sample color image and a sample reference image of a workpiece to be welded; wherein the sample color image and the sample reference image characterize different modality information;
respectively extracting features of the sample color image and the sample reference image by utilizing a weld joint identification network to obtain sample initial features and sample reference features of the workpiece to be welded;
performing feature fusion by utilizing the weld joint recognition network based on the sample initial feature and the sample reference feature to obtain a sample target feature;
respectively carrying out semantic segmentation on the sample target features by using a weld recognition network to obtain sample types of the weld in the workpiece to be welded; performing instance segmentation on the sample target features to obtain sample mask data;
calculating a first loss based on the sample type and the annotation type, and calculating a second loss based on the sample mask data and the annotation mask data;
based on the first loss and the second loss, network parameters of the weld identification network are adjusted.
9. An electronic device comprising a memory and a processor coupled to each other,
the memory stores program instructions;
the processor is configured to execute program instructions stored in the memory to implement the method of any one of claims 1-8.
10. A computer readable storage medium for storing program instructions executable to implement the method of any one of claims 1-8.
CN202311458720.8A 2023-11-02 2023-11-02 Weld joint identification method, weld joint identification network training method, equipment and storage medium Pending CN117636109A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311458720.8A CN117636109A (en) 2023-11-02 2023-11-02 Weld joint identification method, weld joint identification network training method, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117636109A (en) 2024-03-01

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination