CN111932539A - Method for collaborative prediction of weld reinforcement and penetration based on molten pool images and a deep residual network - Google Patents

Method for collaborative prediction of weld reinforcement and penetration based on molten pool images and a deep residual network

Info

Publication number
CN111932539A
Authority
CN
China
Prior art keywords
network
molten pool
image
depth
height
Prior art date
Legal status
Granted
Application number
CN202011090573.XA
Other languages
Chinese (zh)
Other versions
CN111932539B (en)
Inventor
赵壮
韩静
陆骏
张毅
Current Assignee
Nanjing Zhipu Photoelectric Technology Co ltd
Original Assignee
Nanjing Zhipu Photoelectric Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Zhipu Photoelectric Technology Co ltd filed Critical Nanjing Zhipu Photoelectric Technology Co ltd
Priority to CN202011090573.XA priority Critical patent/CN111932539B/en
Publication of CN111932539A publication Critical patent/CN111932539A/en
Application granted granted Critical
Publication of CN111932539B publication Critical patent/CN111932539B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/181Segmentation; Edge detection involving edge growing; involving edge linking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • G06T2207/10061Microscopic image from scanning electron microscope
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30152Solder

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for collaboratively predicting weld reinforcement and penetration based on molten pool images and a deep residual network, and belongs to the technical field of image analysis. The method accurately predicts the future trend of weld penetration and reinforcement and improves welding quality, and comprises the following steps: 1. reconstructing the variation of reinforcement and penetration along the weld length; 2. building the image-processing framework; 3. determining the base network; 4. determining the network input; 5. determining the network output; 6. evaluating the network's learning ability; 7. predicting reinforcement and penetration with the network. By collaboratively predicting penetration and reinforcement from molten pool images with a deep residual network, the method can clearly observe the edge and internal details of the molten pool and monitor the changes of penetration and reinforcement in real time during weld formation, and can also accurately predict their future trends; it is applicable to different welding parameters and different weldments and allows welding quality to be regulated in real time.

Description

Method for collaborative prediction of weld reinforcement and penetration based on molten pool images and a deep residual network
Technical Field
The invention relates to a method for collaboratively predicting weld reinforcement and penetration based on molten pool images and a deep residual network, and belongs to the technical field of image analysis.
Background
The image characteristics of the molten pool and the variation of weld penetration and reinforcement largely determine welding quality. On the weld cross-section, the depth to which the base metal is melted is called the penetration; it directly determines the bonding strength between the weld and the base metal and, to a great extent, the load-bearing capacity of the weld. The maximum height by which the weld protrudes above the workpiece surface is called the reinforcement (residual height), i.e. the distance from the weld crown to the line connecting the two weld toes. The reinforcement increases the weld cross-section and thereby improves the static load-bearing capacity; however, it also causes stress concentration at the weld toes and reduces the ability of the weld to withstand dynamic loads. For weldments subject to dynamic or fatigue loading, the reinforcement of butt welds should therefore be ground flush and that of fillet welds ground to a concave profile.
The profile characteristics of the molten pool therefore strongly influence welding quality, and the penetration and reinforcement of the weld play an essential role in its quality control. Correctly identifying and analysing penetration and reinforcement, and correctly predicting them, is of great value for guiding welding quality.
Disclosure of Invention
To solve the above technical problems, the invention provides a method for collaboratively predicting weld reinforcement and penetration based on molten pool images and a deep residual network. The specific technical scheme is as follows:
the height and penetration collaborative prediction method based on the molten pool image and the depth residual error network comprises the following steps:
Step one: reconstructing the variation of reinforcement and penetration along the weld length: the weld is cut into two equal halves along its length, one half is ground and etched, the penetration and reinforcement are observed under an electron microscope, and their values are identified;
step two: building the image-processing framework: deep features of the molten pool image are learned by a network to establish the relationship between the molten pool image and the reinforcement and penetration;
step three: determining the base network: to predict the changes of reinforcement and penetration during welding more accurately, the network input, the network output, the number of network layers and the generalization ability of the network are evaluated;
step four: determining the network input: either the original molten pool image or its Fourier transform is used as the input; the regression performance with the spatial-domain image and with the frequency-domain image as input is compared, and the frequency-domain image of the molten pool is selected as the network input;
step five: determining the network output: since there are two output values, reinforcement and penetration, the output end of the network framework can take a single-output form (penetration or reinforcement alone) or a dual-output form containing both; after the network input has been determined, the regression results of the two forms are compared and the dual-output form containing both reinforcement and penetration is selected as the network output;
step six: evaluating the network's learning ability: generalization ability is an important index of network performance; molten pool images from several welds are used as the test set and the training set respectively, and the generalization ability is evaluated from the regression results on the test set;
step seven: predicting reinforcement and penetration with the network: a test workpiece is manufactured and the weld reinforcement is measured from the arc-starting end to the arc-ending end; data from the two ends of the weld are used as the training sets and data from the middle of the weld as the test sets, and the reinforcement trend is predicted by examining the mean error of the regression results on the test set.
Further, the image-processing framework adopts a deep residual network structure, which uses residual blocks as its basic building units.
Further, the frequency-domain map of the rear part of the molten pool image is used as the network input.
Further, ResNet-34 is used as the base network of the feature-extraction part for the molten pool image.
The invention has the beneficial effects that:
the method is based on the cooperative prediction of the fusion depth and the weld reinforcement of the molten pool image and the depth residual error network, can not only clearly observe the edge and the internal details of the molten pool, monitor the changes of the fusion depth and the weld reinforcement in the forming process of the weld joint in real time, but also accurately predict the trend of the fusion depth and the weld reinforcement of the future development of the weld joint, is suitable for different welding parameters and different welding parts, and regulates and controls the welding quality in real time.
Drawings
FIG. 1 is a schematic flow diagram of the present invention,
FIG. 2 is a schematic view of the reinforcement and penetration monitoring apparatus of the present invention,
FIG. 3 shows the variation of the molten pool of the present invention within a single CMT cycle,
FIG. 4 is a metallograph of a weld of the present invention,
FIG. 5 is a weld profile of the present invention,
FIG. 6 shows molten pool shapes corresponding to variations of reinforcement and penetration in the present invention,
FIG. 7 is a diagram of the network feature maps of the present invention,
FIG. 8 shows crops of different ROI regions of the molten pool in the present invention,
FIG. 9 shows the reinforcement regression results and errors with the original molten pool image in the present invention,
FIG. 10 shows the reinforcement regression results and errors with the rear part of the molten pool image in the present invention,
FIG. 11 shows the reinforcement regression results and errors with the front part of the molten pool image in the present invention,
FIG. 12 is a schematic view of the shape of a workpiece of the present invention,
FIG. 13 shows the weld reinforcement variation at 120 A in the present invention,
FIG. 14 shows the reinforcement regression results and errors at 120 A in the present invention,
FIG. 15 shows the weld reinforcement variation at 130 A in the present invention,
FIG. 16 shows the reinforcement regression results and errors at 130 A in the present invention,
in the figures: 1-base plate, 2-color camera, 3-welding torch, 4-laser, 5-monochrome camera, 6-worktable, 7-molten pool, 8-weld seam.
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings. These drawings are simplified schematic drawings and illustrate the present invention only in a schematic manner, and thus show only the constitution related to the present invention.
As shown in FIG. 1, the method for collaborative prediction of weld reinforcement and penetration based on molten pool images and a deep residual network comprises the following steps:
Step one: reconstructing the variation of reinforcement and penetration along the weld length: the weld is cut into two equal halves along its length, one half is ground and etched, the penetration and reinforcement are observed under an electron microscope, and their values are identified;
step two: building the image-processing framework: deep features of the molten pool image are learned by a network to establish the relationship between the molten pool image and the reinforcement and penetration;
step three: determining the base network: to predict the changes of reinforcement and penetration during welding more accurately, the network input, the network output, the number of network layers and the generalization ability of the network are evaluated;
step four: determining the network input: either the original molten pool image or its Fourier transform is used as the input; the regression performance with the spatial-domain image and with the frequency-domain image as input is compared, and the frequency-domain image of the molten pool is selected as the network input;
step five: determining the network output: since there are two output values, reinforcement and penetration, the output end of the network framework can take a single-output form (penetration or reinforcement alone) or a dual-output form containing both; after the network input has been determined, the regression results of the two forms are compared and the dual-output form containing both reinforcement and penetration is selected as the network output;
step six: evaluating the network's learning ability: generalization ability is an important index of network performance; molten pool images from several welds are used as the test set and the training set respectively, and the generalization ability is evaluated from the regression results on the test set;
step seven: predicting reinforcement and penetration with the network: a test workpiece is manufactured and the weld reinforcement is measured from the arc-starting end to the arc-ending end; data from the two ends of the weld are used as the training sets and data from the middle of the weld as the test sets, and the reinforcement trend is predicted by examining the mean error of the regression results on the test set.
The first embodiment is as follows:
the invention discloses an experimental data acquisition device for excess height and fusion depth, which is a CMT welding experimental platform. The CMT welding experiment platform mainly comprises a welding power supply, a mobile robot and a vision sensor system. The vision system sensing system is arranged on the flat surface of a worktable 6, and the worktable 6 is used for placing the mother board 1 to be welded. The vision sensor system comprises a welding gun 3 fixed on the robot, the welding gun 3 is opposite to the mother board 1, and the robot is also provided with a CCD color camera 2 with the model number of Basler acA640-750 uc. In order to make the collected image of the molten pool 7 correspond to the actual position of the welding seam 8, the laser 4 is used for auxiliary positioning, the laser 4 with the central wavelength of 450nm is used for irradiating the upper edge part of the welding wire, and a CCD black-and-white camera 5 with the model number of Basler ace A1920-155um is arranged for capturing a laser point, as shown in figure 2. In the image acquisition process of the molten pool 7, the FPGA system sends two paths of synchronous signals to control the two cameras to acquire images at the same time. CMT is a special MIG/MAG welding, and the heating mode of a heat source is hot-cold-hot alternation, so that the heating mode effectively reduces the welding heat input and can realize the additive manufacturing and forming of ultrathin parts. The forming size and quality of the single layer and single channel directly determine the forming quality of the additive part, so the method monitors the change of the penetration and the residual height in the forming process of the welding seam 8 by taking the forming control of the stainless steel single channel and single channel based on CMT as the background. It is generally considered that one CMT cycle lasts for 14ms, and fig. 3 shows 14 images of the molten pool 7 acquired by the camera at an acquisition frequency of 1000Hz within one CMT cycle. Starting at 10ms, the acquired image of the weld puddle 7 is completely unaffected by the electric arc, allowing clear edge and internal detail of the weld puddle 7 to be observed. Therefore, in order to reduce the influence of interference, the 10 ms-th image of the melt pool 7 is used uniformly.
To monitor reinforcement and penetration in real time, their variation along the weld length must first be reconstructed accurately. The weld is therefore cut into two equal halves along its length; one half is ground, etched and otherwise prepared, and the penetration and reinforcement are observed under an electron microscope. FIG. 5 shows the extracted weld profile. Using the scale bar in FIG. 4 and a reference line at the bottom of the base material, the pixel position of the base-material surface in the image is calculated and taken as the datum: the distance from the datum up to the upper profile edge is the weld reinforcement, and the distance from the datum down to the lower profile edge is the weld penetration. Formulas describing reinforcement and penetration as functions of weld length can then be fitted and, combined with the experimental equipment, the reinforcement and penetration corresponding to every acquired molten pool image can be determined. In this embodiment the welding speed is set to 5 mm/s, the shielding gas is an argon-oxygen mixture with a flow rate of 25 L/min, the welding wire grade is ER316L and the base material is 304 stainless steel; the remaining welding process parameters are listed in Table 1.
TABLE 1
(The welding process parameters of Table 1 are reproduced as an image in the original publication.)
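A minimal sketch of the reconstruction step described in the paragraph before Table 1: pixel measurements of the etched cross-section are converted to millimetres with the scale bar, reinforcement and penetration are fitted as functions of weld length, and the label for each acquired pool image is looked up from its position (welding speed multiplied by time). The sample values, the polynomial degree and the NumPy-based workflow are illustrative assumptions.

```python
import numpy as np

# Illustrative (assumed) measurements from the etched half-weld:
# positions along the weld in mm, reinforcement and penetration in pixels.
pos_mm   = np.array([15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70], dtype=float)
reinf_px = np.array([60, 58, 57, 59, 56, 55, 57, 54, 55, 53, 54, 52], dtype=float)
penet_px = np.array([33, 34, 32, 35, 33, 34, 32, 33, 31, 32, 33, 31], dtype=float)

MM_PER_PX = 0.02  # taken from the scale bar of the metallograph (assumed value)

# Fit reinforcement and penetration (in mm) as polynomials of weld length.
reinf_fit = np.polyfit(pos_mm, reinf_px * MM_PER_PX, deg=3)
penet_fit = np.polyfit(pos_mm, penet_px * MM_PER_PX, deg=3)

def labels_for_frame(t_s, welding_speed_mm_s=5.0):
    """Reinforcement/penetration label for the pool image captured at time t_s (s)."""
    x = welding_speed_mm_s * t_s                      # position along the weld, mm
    return np.polyval(reinf_fit, x), np.polyval(penet_fit, x)
```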
FIG. 6(a) shows typical curves of reinforcement and penetration versus weld length obtained by the above procedure. Compared with the stable middle section of the weld, the reinforcement is higher at the arc-starting end and lower at the arc-ending end; since the arc-starting and arc-ending regions are generally not considered in subsequent studies of joint strength, molten pool images taken 15-70 mm from the arc-starting end of the weld are used for analysis in this embodiment. FIG. 6(b) shows three typical molten pool shapes at different distances from the arc-starting point: the left image is the pool just after leaving the arc-starting region, the right image is the pool during the rapid approach to arc ending, and the middle image is the pool during the stable welding stage. Immediately after the arc-starting stage (left image) the heat-dissipation conditions are best, so the spreading ability of the pool is weakest and the width of the pool front is smallest. As welding proceeds, heat dissipation becomes progressively worse; this not only promotes spreading of the pool, giving a larger front width, but also changes the wire stick-out, which in turn shifts the arc position and hence the pool position. It is precisely these changes in welding characteristics that make regression of penetration and reinforcement possible.
An image-processing framework. Deep features of the molten pool image are learned by a network to establish the relationship between the molten pool image and the reinforcement and penetration. Conventional neural networks and fully connected networks lose information as it is passed from layer to layer, and gradient vanishing or gradient explosion also appears as the network depth increases. A residual network structure is therefore used, with the residual block as the basic building unit. For a stack of layers with input $x$, let the feature to be learned be $H(x)$. The residual structure instead learns the residual
$$F(x) = H(x) - x,$$
so that the original feature to be learned becomes
$$H(x) = F(x) + x.$$
As the formulas show, learning the original feature directly is harder than learning the residual. With the residual structure, the input is passed directly to the output through a shortcut, so the network fits the residual mapping rather than the original mapping, which simplifies the learning task. When the residual F(x) = 0, the stacked layers simply perform an identity mapping, which largely resolves the degradation problem of convolutional networks as the depth increases; in practice F(x) is not zero, so the network always learns new features on top of the input features and therefore performs better. With the residual block as the basic unit for image feature extraction, a deep-residual-network-based method for monitoring the reinforcement of the deposited layer is constructed. After cropping and alignment the molten pool image is 640 × 350 pixels; considering both training speed and accuracy, the original image is scaled to 300 × 210. Table 2 below lists the details of the network: each basic block contains two convolutional layers with 3 × 3 kernels, and ResNet-34 is used as the base network of the feature-extraction part for the molten pool image. After pooling, the output feature map is passed to the first fully connected layer, which integrates the discriminative local information of the preceding pooling layer. To improve network performance, the ReLU function is used as the activation of every neuron of the fully connected layers, and the output of the last fully connected layer is the reinforcement value. FIG. 7 shows how the feature maps shrink through convolution and average pooling.
TABLE 2
(The network structure details of Table 2 are reproduced as an image in the original publication.)
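A minimal sketch of the network summarized in Table 2, assuming a PyTorch/torchvision implementation: a ResNet-34 backbone for feature extraction followed by fully connected layers with ReLU activation and a two-value output (reinforcement and penetration). The single-channel input adaptation, the hidden-layer width and the torchvision API are assumptions for illustration, not details taken from the patent.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class PoolRegressionNet(nn.Module):
    """ResNet-34 feature extractor plus a fully connected head with two outputs
    (reinforcement, penetration). A sketch, not the patented network itself."""

    def __init__(self, hidden=128):
        super().__init__()
        backbone = models.resnet34(weights=None)
        # The frequency-domain map is single-channel; adapt the first conv (assumption).
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        backbone.fc = nn.Identity()            # keep the 512-d pooled feature vector
        self.backbone = backbone
        self.head = nn.Sequential(             # fully connected layers with ReLU
            nn.Linear(512, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 2),              # [reinforcement, penetration]
        )

    def forward(self, x):                      # x: (N, 1, 210, 300) frequency-domain maps
        return self.head(self.backbone(x))

# Example: a batch of four 300 x 210 frequency-domain maps.
net = PoolRegressionNet()
out = net(torch.randn(4, 1, 210, 300))         # out.shape == (4, 2)
```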
Loss function and evaluation functions. The loss function measures the difference between the predicted and true values; based on this difference the network continually adjusts its parameters during training to find the optimal model parameters and minimize the loss. For regression models, which predict concrete values, the mean square error is generally used, calculated as in (1):
$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2, \qquad (1)$$
where $\hat{y}_i$ is the predicted value, $y_i$ is the true value, $n$ is the number of samples and MSE is the mean square error. The regression results of the network are then evaluated with the mean absolute error and the error rate, calculated as in (2):
$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i - y_i\right|, \qquad \mathrm{RE} = \frac{1}{n}\sum_{i=1}^{n}\frac{\left|\hat{y}_i - y_i\right|}{y_i}, \qquad (2)$$
where $\hat{y}_i$ is the predicted value, $y_i$ is the true value, $n$ is the number of samples, MAE is the mean absolute error and RE is the error rate.
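A short sketch of the loss of equation (1) and the evaluation metrics of equation (2), assuming a PyTorch setup; the function names and the exact form of the error rate are illustrative assumptions.

```python
import torch

mse_loss = torch.nn.MSELoss()  # equation (1), used as the training loss

def evaluate(pred, target, eps=1e-8):
    """Mean absolute error (MAE) and error rate (RE) of equation (2).
    `pred` and `target` are 1-D tensors of reinforcement or penetration values."""
    abs_err = (pred - target).abs()
    mae = abs_err.mean().item()
    re = (abs_err / (target.abs() + eps)).mean().item()  # relative error (assumed form)
    return mae, re

# Example
pred   = torch.tensor([2.10, 2.35, 2.02])
target = torch.tensor([2.00, 2.30, 2.10])
print(mse_loss(pred, target).item(), *evaluate(pred, target))
```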
Determining the base network. To predict the changes of reinforcement and penetration during welding more accurately, the network input and output, the number of layers and the generalization ability must be evaluated. Several welds were therefore made, and 1512 sets of molten pool images with their corresponding reinforcement and penetration values were sampled uniformly at random along the weld length. To evaluate the generalization ability of the network, 283 samples from the middle to the tail of one weld were used as the test set and the remaining 1229 as the training set. When constructing a convolutional network, a deeper network can extract a richer hierarchy of features, so a deeper structure is generally preferred in order to obtain higher-level features; however, deep structures still suffer from gradient vanishing, gradient explosion and network degradation, so choosing the number of layers is important. The mean absolute errors and error rates are shown in Table 3:
TABLE 3
(The mean absolute errors and error rates of Table 3 are reproduced as an image in the original publication.)
Compared with VGG-11 and VGG-16, ResNet-34 adds skip connections that pass the input directly to the output, which preserves the integrity of the information, simplifies the learning target and difficulty, and ensures that the deep network performs no worse than a shallow one. Because the molten pool image consists only of the pool contour and its internal details, the scene is not complex; ResNet-50 is too deep for such simple contour features and is even prone to over-fitting, so its regression result for the reinforcement is the worst. ResNet-34 is therefore selected as the network structure.
Determining the network input. To avoid losing molten pool detail as far as possible, two input strategies are considered: first, the original molten pool image; second, the Fourier transform of the original image. With the spatial-domain map and the frequency-domain map of the molten pool each used as the input to the ResNet-34 structure, the results show that the mean absolute errors of the reinforcement and penetration regressions are lowest when the frequency-domain map is the input. In the spatial domain the pool contour surrounds the internal details of the pool, and during convolution the contour information is mixed with the nearby internal-detail information, so the spatial-domain regression is poor. After the image is transformed to the frequency domain, the contour information becomes high-frequency content and the internal details become low-frequency content; the two kinds of information occupy fixed positions in the frequency-domain map and are not mixed during convolution, so the frequency-domain regression is better. The frequency-domain map of the molten pool image is therefore selected as the input. Because heat accumulation and other factors change the welding environment, the droplet shape and the wire stick-out in the 10 ms image of the CMT cycle also change, which may affect the regression results. To verify this, three crops are compared, shown from left to right in FIG. 8: the original pool image, the rear part of the pool and the front part of the pool, all at 300 × 210 resolution. With all other conditions unchanged, the test-set regression with the original image as input is the worst; of the other two inputs, whose penetration-regression indices are similar, the rear-part input gives the lower reinforcement-regression error, because the front of the pool lies at the centre of the pool oscillation and its internal details change violently, whereas the shape of the rear of the pool is comparatively stable. The frequency-domain map of the rear part of the molten pool image is therefore used as the input of the network.
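A minimal sketch of the input preparation described above, under assumed details: crop the rear part of the pool image, resize it to 300 × 210 and use the centred log-magnitude spectrum of its 2-D Fourier transform as the network input. The crop coordinates, the log-magnitude representation and the OpenCV/NumPy workflow are illustrative assumptions.

```python
import cv2
import numpy as np

def rear_pool_frequency_map(gray_image, rear_box=(0, 0, 320, 350), size=(300, 210)):
    """gray_image: single-channel molten pool image (e.g. 640 x 350 pixels).
    rear_box: (x, y, w, h) crop covering the rear part of the pool (assumed values)."""
    x, y, w, h = rear_box
    roi = gray_image[y:y + h, x:x + w]
    roi = cv2.resize(roi, size)                    # size is (width, height) = (300, 210)
    spectrum = np.fft.fftshift(np.fft.fft2(roi))   # centred 2-D spectrum
    mag = np.log1p(np.abs(spectrum))               # log magnitude (assumed representation)
    mag = (mag - mag.min()) / (mag.max() - mag.min() + 1e-8)
    return mag.astype(np.float32)                  # network input, shape (210, 300)
```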
Determining the network output. Once the input is fixed, the output must also be determined. Since there are two target values, reinforcement and penetration, the output end of the framework can be either a single output (penetration alone or reinforcement alone) or a dual output containing both. With the input fixed, the regression results of the two modes are compared. For the reinforcement, the mean absolute error of the dual-output mode is lower than that of the single output, showing that the penetration imposes a useful constraint on the reinforcement regression. For the penetration, the dual-output error is higher than the single-output error, showing that the constraint of the reinforcement on the penetration is not obvious, because the penetration is affected in a more complicated way by the surrounding heat-dissipation conditions, the base material and other factors; nevertheless, the trend of the dual-output regression agrees better with the actual penetration fluctuation. The mode that outputs reinforcement and penetration simultaneously is therefore adopted as the network output.
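One straightforward way to train the dual-output form described above is to sum the mean-square-error losses of the two outputs; the equal weighting, the data-loader format and the optimizer handling below are assumptions, not values given in the patent.

```python
import torch

def train_one_epoch(net, loader, optimizer, device="cpu"):
    """`net` is a dual-output network and `loader` yields
    (freq_map, reinforcement, penetration) batches (assumed format)."""
    net.train()
    mse = torch.nn.MSELoss()
    for freq_map, reinf, penet in loader:
        freq_map = freq_map.to(device)
        target = torch.stack([reinf, penet], dim=1).float().to(device)
        pred = net(freq_map)                        # (N, 2): [reinforcement, penetration]
        loss = mse(pred[:, 0], target[:, 0]) + mse(pred[:, 1], target[:, 1])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```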
Evaluating the network's learning ability. Once the network architecture is determined, the generalization ability of the network becomes one of the most important indices of its performance. Twenty-three welds were made with welding-parameter set No. 2 of Table 1. Molten pool images from the 15-70 mm region of 18 welds were randomly selected as the training set (12582 samples), and the remaining 5 welds were used as the test set (2916 samples). The test-set regression results with the original pool image are shown in FIG. 9, with the rear part of the pool in FIG. 10 and with the front part of the pool in FIG. 11. The regression results confirm that using the frequency-domain map of the rear part of the pool image as the input improves the regression. Judging from the predicted reinforcement trends, the predicted fluctuation of the reinforcement of the fourth weld is similar to the actual fluctuation, and the reinforcement trends of parts of the other welds are also predicted, so the network has a certain generalization ability. As shown in FIG. 10, the network fails to predict some of the reinforcement fluctuations, and the regression appears to fluctuate around an average value. To prove that the reinforcement predicted by the network is not simply the average reinforcement of a section of weld, the workpiece shown in FIG. 12 was manufactured, its thickness tapering gradually from 5 mm to 0 mm. The welding parameters are set No. 3 of Table 1; because of how the steel plate was clamped, the arc actually started at the 4.5 mm-thick position and ended at the 2 mm-thick position. Since the plate thickness decreases gradually, the weld reinforcement also decreases gradually, so in theory the reinforcement predicted by the network should show a gradually decreasing trend rather than fluctuating around an average value. As shown in FIG. 13, the data at the two ends of the weld are used as the training set (1105 samples) and the data in the middle of the weld as the test set (145 samples); the regression results on the test set are shown in FIG. 14. The results show not only a small mean error but also that the decreasing reinforcement trend is predicted. To further verify the generalization ability of the network, only the welding current was changed, to 130 A, with all other conditions unchanged; the resulting change of the weld reinforcement is shown in FIG. 15. With the same training and testing strategy as for the 120 A tapered plate (797 training samples and 300 test samples), the test-set regression results in FIG. 16 show a low mean absolute error, and the decreasing reinforcement trend is again predicted.
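A sketch of the split used for the tapered-plate experiment above: samples from the two ends of the weld form the training set and those from the middle form the test set. The 45-55 % middle band is an illustrative choice; the patent only states the resulting sample counts (1105 training, 145 test).

```python
import numpy as np

def split_by_position(positions_mm, low_frac=0.45, high_frac=0.55):
    """Train on the two ends of the weld, test on the middle section.
    `positions_mm`: position of each pool image along the weld."""
    positions_mm = np.asarray(positions_mm, dtype=float)
    span = positions_mm.max() - positions_mm.min()
    lo = positions_mm.min() + low_frac * span
    hi = positions_mm.min() + high_frac * span
    test_mask = (positions_mm >= lo) & (positions_mm <= hi)
    return np.flatnonzero(~test_mask), np.flatnonzero(test_mask)   # train_idx, test_idx
```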
By collaboratively predicting penetration and reinforcement from molten pool images with a deep residual network, the method can clearly observe the edge and internal details of the molten pool, monitor the changes of penetration and reinforcement in real time during weld formation, and accurately predict their future trends. The experimental results show that the collaborative prediction network reaches a prediction accuracy of 0.13 mm for the reinforcement and 0.09 mm for the penetration. Regression results under different welding parameters and on different weldments verify the accuracy of the network, which can be used for real-time regulation of welding quality.
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.

Claims (4)

1. A method for collaborative prediction of weld reinforcement and penetration based on molten pool images and a deep residual network, characterized in that the method comprises the following steps:
Step one: reconstructing the variation of reinforcement and penetration along the weld length: the weld is cut into two equal halves along its length, one half is ground and etched, the penetration and reinforcement are observed under an electron microscope, and their values are identified;
step two: building the image-processing framework: deep features of the molten pool image are learned by a network to establish the relationship between the molten pool image and the reinforcement and penetration;
step three: determining the base network: to predict the changes of reinforcement and penetration during welding more accurately, the network input, the network output, the number of network layers and the generalization ability of the network are evaluated;
step four: determining the network input: either the original molten pool image or its Fourier transform is used as the input; the regression performance with the spatial-domain image and with the frequency-domain image as input is compared, and the frequency-domain image of the molten pool is selected as the network input;
step five: determining the network output: since there are two output values, reinforcement and penetration, the output end of the network framework can take a single-output form (penetration or reinforcement alone) or a dual-output form containing both; after the network input has been determined, the regression results of the two forms are compared and the dual-output form containing both reinforcement and penetration is selected as the network output;
step six: evaluating the network's learning ability: generalization ability is an important index of network performance; molten pool images from several welds are used as the test set and the training set respectively, and the generalization ability is evaluated from the regression results on the test set;
step seven: predicting reinforcement and penetration with the network: a test workpiece is manufactured and the weld reinforcement is measured from the arc-starting end to the arc-ending end; data from the two ends of the weld are used as the training sets and data from the middle of the weld as the test sets, and the reinforcement trend is predicted by examining the mean error of the regression results on the test set.
2. The method for collaborative prediction of weld reinforcement and penetration based on molten pool images and a deep residual network according to claim 1, characterized in that: the image-processing framework adopts a deep residual network structure, which uses residual blocks as its basic building units.
3. The method for collaborative prediction of weld reinforcement and penetration based on molten pool images and a deep residual network according to claim 1, characterized in that: the frequency-domain map of the rear part of the molten pool image is used as the network input.
4. The method for collaborative prediction of weld reinforcement and penetration based on molten pool images and a deep residual network according to claim 1, characterized in that: ResNet-34 is used as the base network of the feature-extraction part for the molten pool image.
CN202011090573.XA 2020-10-13 2020-10-13 Method for collaborative prediction of weld reinforcement and penetration based on molten pool images and a deep residual network Active CN111932539B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011090573.XA CN111932539B (en) Method for collaborative prediction of weld reinforcement and penetration based on molten pool images and a deep residual network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011090573.XA CN111932539B (en) Method for collaborative prediction of weld reinforcement and penetration based on molten pool images and a deep residual network

Publications (2)

Publication Number Publication Date
CN111932539A true CN111932539A (en) 2020-11-13
CN111932539B CN111932539B (en) 2021-02-02

Family

ID=73334437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011090573.XA Active CN111932539B (en) Method for collaborative prediction of weld reinforcement and penetration based on molten pool images and a deep residual network

Country Status (1)

Country Link
CN (1) CN111932539B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112733889A (en) * 2020-12-28 2021-04-30 山东大学 Perforation plasma arc welding perforation and fusion penetration state identification method based on improved VGG16 deep neural network
CN113290302A (en) * 2021-03-15 2021-08-24 南京理工大学 Quantitative prediction method for surplus height of electric arc welding additive manufacturing
CN113441815A (en) * 2021-08-31 2021-09-28 南京南暄励和信息技术研发有限公司 Electric arc additive manufacturing layer width and residual height cooperative control method based on deep learning
CN113706485A (en) * 2021-03-16 2021-11-26 南京理工大学 Advanced quantitative prediction method for weld reinforcement
CN115294105A (en) * 2022-09-28 2022-11-04 南京理工大学 Multilayer multi-pass welding remaining height prediction method
CN115609110A (en) * 2022-11-22 2023-01-17 南京理工大学 Electric arc composite additive melting depth prediction method based on multimode fusion

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101334807A (en) * 2008-07-28 2008-12-31 中国航空工业第一集团公司北京航空制造工程研究所 Electro-beam welding joint melting-coagulation area shape factor modeling and solving method
US20170175388A1 (en) * 2013-05-02 2017-06-22 360º BALLISTICS, LLC Process to Add Bullet Resistance to An Existing Wall
CN108500498A (en) * 2018-03-26 2018-09-07 华中科技大学 A kind of appearance of weld quality monitoring method
CN110276721A (en) * 2019-04-28 2019-09-24 天津大学 Image super-resolution rebuilding method based on cascade residual error convolutional neural networks
CN110325929A (en) * 2016-12-07 2019-10-11 阿瑞路资讯安全科技股份有限公司 System and method for detecting the waveform analysis of cable network variation
CN110363781A (en) * 2019-06-29 2019-10-22 南京理工大学 Molten bath profile testing method based on deep neural network
CN110472698A (en) * 2019-08-22 2019-11-19 四川大学 Increase material based on the metal of depth and transfer learning and shapes fusion penetration real-time predicting method
CN111177976A (en) * 2019-12-25 2020-05-19 广东省焊接技术研究所(广东省中乌研究院) Arc welding seam forming accurate prediction method based on deep learning
CN111739019A (en) * 2020-07-29 2020-10-02 南京知谱光电科技有限公司 Material increase residual height prediction method based on long-range prediction of molten pool image

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101334807A (en) * 2008-07-28 2008-12-31 中国航空工业第一集团公司北京航空制造工程研究所 Electro-beam welding joint melting-coagulation area shape factor modeling and solving method
US20170175388A1 (en) * 2013-05-02 2017-06-22 360º BALLISTICS, LLC Process to Add Bullet Resistance to An Existing Wall
CN110325929A (en) * 2016-12-07 2019-10-11 阿瑞路资讯安全科技股份有限公司 System and method for detecting the waveform analysis of cable network variation
CN108500498A (en) * 2018-03-26 2018-09-07 华中科技大学 A kind of appearance of weld quality monitoring method
CN110276721A (en) * 2019-04-28 2019-09-24 天津大学 Image super-resolution rebuilding method based on cascade residual error convolutional neural networks
CN110363781A (en) * 2019-06-29 2019-10-22 南京理工大学 Molten bath profile testing method based on deep neural network
CN110472698A (en) * 2019-08-22 2019-11-19 四川大学 Increase material based on the metal of depth and transfer learning and shapes fusion penetration real-time predicting method
CN111177976A (en) * 2019-12-25 2020-05-19 广东省焊接技术研究所(广东省中乌研究院) Arc welding seam forming accurate prediction method based on deep learning
CN111739019A (en) * 2020-07-29 2020-10-02 南京知谱光电科技有限公司 Material increase residual height prediction method based on long-range prediction of molten pool image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HU JIE等: "Squeeze-and-Excitation Networks", 《PROCEEDINGS OF THE IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
董昌文: "Research on high-speed pulsed MIG welding of stainless steel based on a gas-jet weld-pool compensation method", China Doctoral Dissertations Full-text Database, Engineering Science and Technology I *
韩静: "Research on protection of electric propulsion systems based on fault diagnosis of rectifier generators", China Masters' Theses Full-text Database, Engineering Science and Technology II *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112733889A (en) * 2020-12-28 2021-04-30 山东大学 Perforation plasma arc welding perforation and fusion penetration state identification method based on improved VGG16 deep neural network
CN113290302A (en) * 2021-03-15 2021-08-24 南京理工大学 Quantitative prediction method for surplus height of electric arc welding additive manufacturing
CN113706485A (en) * 2021-03-16 2021-11-26 南京理工大学 Advanced quantitative prediction method for weld reinforcement
CN113441815A (en) * 2021-08-31 2021-09-28 南京南暄励和信息技术研发有限公司 Electric arc additive manufacturing layer width and residual height cooperative control method based on deep learning
CN113441815B (en) * 2021-08-31 2021-11-16 南京南暄励和信息技术研发有限公司 Electric arc additive manufacturing layer width and residual height cooperative control method based on deep learning
CN115294105A (en) * 2022-09-28 2022-11-04 南京理工大学 Multilayer multi-pass welding remaining height prediction method
CN115609110A (en) * 2022-11-22 2023-01-17 南京理工大学 Electric arc composite additive melting depth prediction method based on multimode fusion
CN115609110B (en) * 2022-11-22 2023-12-15 南京理工大学 Electric arc composite additive penetration prediction method based on multimode fusion

Also Published As

Publication number Publication date
CN111932539B (en) 2021-02-02

Similar Documents

Publication Publication Date Title
CN111932539B (en) Method for collaborative prediction of weld reinforcement and penetration based on molten pool images and a deep residual network
CN105855743B (en) A kind of Weld pool dynamic process on-line monitoring system and method
Lei et al. Real-time weld geometry prediction based on multi-information using neural network optimized by PCA and GA during thin-plate laser welding
CN113290302A (en) Quantitative prediction method for surplus height of electric arc welding additive manufacturing
CN108568596B (en) Laser processing device and machine learning device
US20150056585A1 (en) System and method monitoring and characterizing manual welding operations
JP6920557B2 (en) A method for automatically determining the optimum welding parameters for performing work piece welding
Chen et al. Prediction of weld bead geometry of MAG welding based on XGBoost algorithm
CN112017186A (en) Material increase and residual height prediction method based on molten pool image and depth residual error network
US20230191540A1 (en) Method, device and computer program for determining the performance of a welding method via digital processing of an image of the welded workpiece
CN112894138A (en) Soft package battery tab welding method and system
Wang et al. Weld reinforcement analysis based on long-term prediction of molten pool image in additive manufacturing
CN111738369A (en) Weld penetration state and penetration depth real-time prediction method based on visual characteristics of molten pool
Yu et al. Deep learning based real-time and in-situ monitoring of weld penetration: Where we are and what are needed revolutionary solutions?
CN116597391A (en) Synchronous on-line monitoring method for weld surface morphology and penetration state
CN115775249A (en) Additive manufacturing part forming quality monitoring method and system and storage medium
KR102467093B1 (en) Welding condition adjusting apparatus
CN1502970A (en) Characteristic value calculation for welding detection
CN115294105B (en) Multilayer multi-pass welding remaining height prediction method
JP2023095820A (en) Generation method of learning model, control device, learning model, welding system and program
CN116664508A (en) Weld surface quality detection method and computer readable storage medium
Guo et al. Construction of welding quality intelligent judgment system
KR20230118341A (en) Method and apparatus for generating arc image-based welding quality inspection model using deep learning and arc image-based welding quality inspecting apparatus using the same
SE526624C2 (en) Evaluation of welding quality
CN111178450A (en) Method and device for evaluating state of welding seam

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant