CN117428291A - Weld bead fusion width quantification method based on sonogram characteristic analysis - Google Patents

Info

Publication number
CN117428291A
CN117428291A (application CN202311736148.7A)
Authority
CN
China
Prior art keywords
welding
layer
spectrogram
time
width
Prior art date
Legal status
Pending
Application number
CN202311736148.7A
Other languages
Chinese (zh)
Inventor
韩静
苏晓璁
陆骏
赵壮
高鹏
吴梓剑
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority claimed from application CN202311736148.7A
Publication of CN117428291A
Legal status: Pending

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B23: MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23K: SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
    • B23K9/00: Arc welding or cutting
    • B23K9/095: Monitoring or automatic control of welding parameters
    • B23K9/0953: Monitoring or automatic control of welding parameters using computing means

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Plasma & Fusion (AREA)
  • Mechanical Engineering (AREA)
  • Theoretical Computer Science (AREA)
  • Investigating Or Analyzing Materials By The Use Of Ultrasonic Waves (AREA)

Abstract

The invention discloses a weld bead fusion width quantification method based on spectrogram characteristic analysis, comprising the following steps: determine the window length and step length of a short-time Fourier transform according to welding rhythm parameters and generate a spectrogram by the short-time Fourier transform; determine the time-sequence interval from parameters such as the welding speed and molten pool solidification speed, and convert the spectrogram into a spectrogram sequence; then construct a convolutional neural network model for real-time prediction of the welding state, and use the model together with the spectrogram sequence to quantitatively predict the back-side fusion width in real time. In the invention, the short-time Fourier transform converts one-dimensional sound data into two dimensions so that a deep convolutional network architecture can be introduced, and the time sequence captures long-range time-varying information. The method thereby realizes real-time quantitative prediction of the welding state from sound data alone, with good accuracy and generalization capability.

Description

Weld bead fusion width quantification method based on sonogram characteristic analysis
Technical Field
The invention relates to a weld bead fusion width quantification method based on sonogram characteristic analysis, and belongs to the technical field of weld bead fusion width prediction.
Background
In modern manufacturing, welding plays a critical role as a key process. However, labor costs for welders continue to rise, which presents a significant challenge for conventional manual welding. In this context, welding automation has become urgent and necessary. Automatic welding not only helps reduce labor cost but also improves production efficiency and the consistency of product quality. By introducing an automated welding system, instability caused by human factors can be eliminated, reducing the risk of welding defects. In addition, automatic welding improves work safety in high-temperature, high-risk environments by reducing personnel exposure to hazardous conditions. Industry 4.0 demands full automation of production, which in turn creates the need for automatic control of the weld penetration state. However, current offline inspection methods clearly cannot meet real-time control requirements, giving rise to the need for online prediction of the weld penetration state before the weld seam solidifies.
The welding process involves interactions among many physical quantities, and this coupling presents considerable difficulty for the physical interpretation of the welding system. Welding is a highly dynamic and complex process with interdependencies among temperature, current, voltage and other parameters, which makes understanding welding phenomena complex and tricky. Meanwhile, the molten pool environment at the welding site is extremely severe: high temperature, intense light, metal spatter and other factors can degrade sensor performance or even cause irreparable sensor damage, so traditional direct sensing methods fail in this environment. Therefore, to understand the welding process in depth and achieve effective monitoring, more advanced non-contact sensing techniques must be explored to accommodate the complexity and harsh environment of welding, thereby achieving accurate control and optimization of the welding process.
Neural networks are an emerging feature extraction technique that has attracted considerable attention in the field of computer vision. With the development of efficient architectures, using molten pool images as neural network inputs has become a common method of predicting penetration. However, much key information is difficult to detect from images alone, which limits the performance of penetration prediction systems to some extent. Under interference from intense arc light, spatter, metal vapor and harsh illumination conditions, the surface conditions of the weldment and the molten pool are difficult to display clearly. Special imaging methods are required to capture the oscillation of the molten pool and the arc morphology, and information inside the molten pool is even harder to obtain.
In automatic welding, online prediction of weld penetration is vitally important, and acoustic signals are a sensing modality with significant advantages for this task. Compared with imaging methods, the sound signal is not affected by occlusion, so it can overcome many of the interference factors encountered by image-based methods. Sound signals also feature small data volume and short sample acquisition cycles, which gives them an important role in welding monitoring. Using sound signals as input helps exclude many interference factors and is highly sensitive to both immediate and latent feature variations compared with widely used image-based solutions. However, a gap remains in the quantitative prediction of weld penetration using acoustic signals.
Therefore, a new weld bead fusion width quantification method based on sonogram characteristic analysis is needed to solve the above problems.
Disclosure of Invention
The invention aims to provide a weld fusion width quantification method based on spectrogram characteristic analysis that solves the problems described in the background art.
The welding seam fusion width quantification method based on the sonogram characteristic analysis is characterized by comprising the following steps of:
1. the method comprises the steps that an acoustic signal generated in the welding process is collected by using an acoustic collection system, wherein the acoustic collection system comprises a microphone and a data collection card, and the microphone is fixedly connected with a welding gun and moves synchronously with the welding gun;
2. determining window length and step length of short-time Fourier transform according to welding rhythm parameters, and generating a spectrogram of a sound signal through a short-time Fourier transform method;
3. determining a time sequence interval through the welding speed and the solidification speed of a molten pool, and converting the spectrogram into a spectrogram sequence;
4. constructing a convolutional neural network model for real-time prediction of the fusion welding state, and predicting the back-side fusion width using this model and the spectrogram sequence, wherein the model consists of an input downsampling layer, a dense connection module, a transmission module, a time-domain information extraction layer and a fully connected output module connected in sequence.
Furthermore, in the fourth step, the input downsampling layer is composed of a batch normalization layer, a hole convolution layer, an activation function layer and a maximum pooling layer, and is used for extracting low-level texture information and outputting the low-level texture information as a low-level texture map.
Furthermore, in the fourth step, the dense connection module is formed by connecting a plurality of convolution modules in series, and the convolution modules comprise a first batch of normalization layers, a first nonlinear activation layer, convolution layers with the length and width of a convolution kernel being 1, a second batch of normalization layers, a second nonlinear activation layer and convolution layers with the length and width of the convolution kernel being 3, which are sequentially connected.
Further, the input of each convolution module is formed by a channel concatenation operation on the overall input of the dense connection module and the output of the preceding convolution module; the output is k feature maps, and the overall input is the output of the input downsampling layer.
Further, k is 32.
Furthermore, in the fourth step, the transmission module is composed of a convolution layer with the length and width of the convolution kernel being 1 and an average pooling layer, and the length and width dimensions and the step length of the average pooling layer are both 2.
Further, in the fourth step, a transmission module is connected to each of the densely connected modules.
Further, in the fourth step, the time-domain information extraction layer preserves the time-sequence dimension before the fully connected output module; it is a fully connected layer whose input size is the time-sequence length and whose output size is 1, used to extract features along the time-sequence dimension.
Furthermore, in the fourth step, the fully connected output module includes a plurality of fully connected layers that progressively shrink the feature dimension until the number of output heads is finally 1.
Further, the frequency response of the microphone in the sound collection system extends beyond 20 kHz, and the sampling rate of the data acquisition card is higher than 40 kHz.
The beneficial effects are that: the welding seam melting width quantitative method based on the sound spectrum characteristic analysis can realize automatic quantitative prediction of the welding melting width, combines short-time Fourier transform and a convolutional neural network, captures dynamic change and time sequence information of sound signals in the welding process, and improves the accuracy of the melting width prediction. The invention collects and processes the sound signal in real time, can realize on-line monitoring of welding quality, does not need to wait for detection after welding, and can control in real time in the production process so as to reduce the generation of welding defects.
Drawings
FIG. 1 is a picture of a sound collection system;
FIG. 2 is a back view of a weld on a groove graded substrate;
FIG. 3 is a photograph of a laser line scanning system;
FIG. 4 is a fusion width curve smoothing result of a weld fusion width quantification method based on spectrogram characteristic analysis;
FIG. 5 is a diagram of a sonogram processing network structure according to the present invention;
FIG. 6 is a graph of predicted results of a weld bead fusion width quantification method based on sonogram characterization.
Description of the embodiments
The present invention is further illustrated by the accompanying drawings and the following detailed description, which should be understood as merely illustrating the invention and not limiting its scope. Modifications of equivalent form that occur to those skilled in the art upon reading this disclosure fall within the scope defined by the appended claims.
Referring to fig. 5, the welding seam fusion width quantifying method based on spectrogram characteristic analysis of the present invention includes the following steps:
1. Collect the sound signal generated during welding with a sound collection system consisting of a microphone and a data acquisition card, where the microphone is rigidly attached to the welding gun and moves synchronously with it. The acoustic signal contains the complex sound-wave patterns generated during welding and thus carries information about the welding process. The frequency response of the microphone should extend beyond 20 kHz, and the sampling rate of the data acquisition card should be higher than 40 kHz.
2. Determining window length and step length of short-time Fourier transform according to welding rhythm parameters, and generating a spectrogram of a sound signal through a short-time Fourier transform method; this step uses short-time fourier transform (STFT) techniques to decompose the sound signal in time and frequency to obtain its components at different times and frequencies. The generation of the spectrogram enables the time domain characteristics of the sound signal to be converted into frequency domain characteristics, and a basis is provided for subsequent deep learning processing. The window length is 128 points, and the step length is 32 points.
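As a concrete illustration, this STFT step can be sketched with NumPy alone. The 128-point window and 32-point step follow the values given here; the Hann window and log-power scaling are assumptions for illustration, since the text does not name them:

```python
import numpy as np

def stft_spectrogram(signal, win_len=128, hop=32):
    """Log-power spectrogram by short-time Fourier transform.

    Window/hop lengths follow the 128-/32-point values in the text; the
    Hann window and log-power scaling are illustrative assumptions.
    """
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i * hop: i * hop + win_len] * window
                       for i in range(n_frames)])
    spectrum = np.fft.rfft(frames, axis=1)   # one-sided FFT per frame
    power = np.abs(spectrum) ** 2            # power spectrogram
    return 10.0 * np.log10(power + 1e-12)    # shape (n_frames, win_len // 2 + 1)

# one second of a 5 kHz test tone at the experiment's 51200 Hz sampling rate
fs = 51200
t = np.arange(fs) / fs
spec = stft_spectrogram(np.sin(2 * np.pi * 5000 * t))
# spec.shape == (1597, 65); the energy peak sits near bin 5000 / (fs / 128) = 12.5
```

Slicing such a spectrogram into 64-frame windows then yields the per-cycle 64×64 spectrograms described later in the embodiment.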
3. Determining a time sequence interval through the welding speed and the solidification speed of the molten pool, and converting a spectrogram into a spectrogram sequence; the generated spectrogram is further processed, converting it into a spectrogram sequence. Such processing aids in capturing rhythmic information during welding, as welding is typically accompanied by a particular sound cadence. The construction of the spectrogram sequence enables these rhythmic variations in the welding process to be captured efficiently in subsequent analysis.
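A minimal sketch of the sequence construction follows. The sequence length of 4 and the 64-frame window are illustrative assumptions; in the method the interval `step` is derived from the welding speed and molten pool solidification speed as described above:

```python
import numpy as np

def spectrogram_sequence(spec, end_idx, step, seq_len=4, win=64):
    """Stack seq_len windows of `win` STFT frames, each ending `step` frames
    earlier than the previous one, into a (seq_len, win, n_freq) tensor."""
    windows = []
    for k in range(seq_len):
        stop = end_idx - k * step
        windows.append(spec[stop - win:stop])
    return np.stack(windows[::-1])  # oldest window first

# spec: full spectrogram of one weld seam, shape (n_frames, n_freq)
spec = np.random.rand(1000, 64)
seq = spectrogram_sequence(spec, end_idx=1000, step=10)
# seq.shape == (4, 64, 64): a spectrogram sequence for one prediction instant
```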
4. Construct a convolutional neural network model for real-time prediction of the fusion welding state, and predict the back-side fusion width using this model and the spectrogram sequence. The model consists of an input downsampling layer, dense connection modules, transmission modules, a time-domain information extraction layer and a fully connected output layer connected in sequence, and is a convolutional neural network architecture based on DenseNet.
The model is proposed on the basis of the DenseNet structure, with time-sequence information fused on top of it; the network structure is designed to process spectrogram sequences. Through repeated feature extraction and reuse across the convolutional layers in a dense connection module, each layer is connected to the outputs of all previous layers. Features related to the weld width are gradually extracted from the spectrogram sequence through serial processing across the modules. In the fully connected layers, the network combines time-sequence information over a longer time span, so that dynamically changing information in the welding process can be fully exploited. From the features processed by the network, the final output head of the fully connected layers produces the fusion width prediction.
Preferably, the input downsampling layer consists of a batch normalization layer, a dilated (hole) convolution layer, an activation function layer and a maximum pooling layer, and is used to extract low-level texture information. The dense connection module is formed by connecting several convolution modules in series; each convolution module comprises, in sequence, a first batch normalization layer, a first nonlinear activation layer, a convolution layer with a 1×1 kernel, a second batch normalization layer, a second nonlinear activation layer and a convolution layer with a 3×3 kernel. The input of each convolution module is the channel-wise concatenation of the overall input of the dense connection module with the outputs of the preceding convolution modules, and its output is k feature maps; in the present invention k is 32, and each dense connection module consists of 4 convolution modules. The transmission module consists of a convolution layer with a 1×1 kernel and an average pooling layer whose window size and stride are both 2, and each dense connection module is followed by a transmission module. Before the fully connected layers, the time-domain information extraction layer preserves the time-sequence dimension: it is a fully connected layer whose input size is the sequence length and whose output size is 1, extracting features along the time-sequence dimension. The fully connected output layer comprises several fully connected layers that shrink the feature dimension until the final output head has size 1.
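The described architecture can be sketched in PyTorch roughly as follows. Only the module order (downsampling layer, dense blocks with growth rate k = 32, 1×1-conv plus 2×2 average-pool transitions, a time-direction fully connected layer, and a shrinking fully connected head) follows the text; the channel counts, the dilation rate, the DenseNet-style 4k bottleneck, the number of dense blocks, and the hidden sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ConvModule(nn.Module):
    """BN -> ReLU -> 1x1 conv -> BN -> ReLU -> 3x3 conv, emitting k feature maps.
    The 4*k bottleneck width is a DenseNet convention, assumed here."""
    def __init__(self, in_ch, k=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, 4 * k, kernel_size=1, bias=False),
            nn.BatchNorm2d(4 * k), nn.ReLU(inplace=True),
            nn.Conv2d(4 * k, k, kernel_size=3, padding=1, bias=False),
        )
    def forward(self, x):
        return torch.cat([x, self.body(x)], dim=1)  # channel concatenation (feature reuse)

class DenseBlock(nn.Sequential):
    def __init__(self, in_ch, n=4, k=32):
        super().__init__(*[ConvModule(in_ch + i * k, k) for i in range(n)])

class Transition(nn.Sequential):
    """1x1 conv followed by 2x2 average pooling with stride 2."""
    def __init__(self, in_ch, out_ch):
        super().__init__(nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
                         nn.AvgPool2d(kernel_size=2, stride=2))

class FusionWidthNet(nn.Module):
    def __init__(self, seq_len=4, k=32):
        super().__init__()
        self.down = nn.Sequential(                  # input downsampling layer
            nn.BatchNorm2d(1),
            nn.Conv2d(1, 64, kernel_size=3, dilation=2, padding=2, bias=False),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.block1 = DenseBlock(64, 4, k)          # 64 + 4*32 = 192 channels out
        self.trans1 = Transition(192, 96)
        self.block2 = DenseBlock(96, 4, k)          # 96 + 4*32 = 224 channels out
        self.trans2 = Transition(224, 112)
        self.pool = nn.AdaptiveAvgPool2d(1)         # global average pooling
        self.temporal = nn.Linear(seq_len, 1)       # time-domain information extraction
        self.head = nn.Sequential(nn.Linear(112, 32), nn.ReLU(inplace=True),
                                  nn.Linear(32, 1)) # fully connected output module
    def forward(self, x):                           # x: (batch, seq, H, W) spectrograms
        b, s, h, w = x.shape
        f = self.down(x.reshape(b * s, 1, h, w))
        f = self.trans2(self.block2(self.trans1(self.block1(f))))
        f = self.pool(f).reshape(b, s, -1)          # (batch, seq, channels)
        f = self.temporal(f.transpose(1, 2)).squeeze(-1)  # collapse time dimension
        return self.head(f).squeeze(-1)             # (batch,) predicted back-side width

model = FusionWidthNet()
y = model(torch.randn(2, 4, 64, 64))                # a batch of 2 spectrogram sequences
```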
Training the deep learning network requires a large dataset with known fusion width values. By comparing its predictions against the actual fusion width values, the network continuously adjusts its parameters so that the predictions become more accurate.
In summary, the method provided by the invention combines sound-signal analysis with deep learning to achieve accurate prediction of the weld fusion width. Compared with traditional methods it has clear advantages in automation, real-time performance and accuracy, has broad application prospects, and can be applied in various welding quality control scenarios.
Referring to fig. 1, the experimental system consists of a teach-and-playback automatic welding system using the cold metal transfer welding process (comprising a robotic arm, shielding gas cylinder, welding machine, welding wire, welding substrate and other components), together with an MPA201 microphone produced by BSWATECH, an ADLink-USB2405 acquisition card, and a host computer. The microphone is rigidly connected to the welding gun through a fixture; the acquisition card samples the sound signal at 51200 Hz and transmits it to the host computer for recording. Welding uses a shielding gas of 2% oxygen and 98% argon, stainless steel welding wire, a welding current of 160 A, and a welding speed held constant at 5 mm/s.
Referring to fig. 2, a beveled substrate (groove angle graded from 30° to 60°) was used in the experiment to ensure that the penetration varies sufficiently finely. Butt welding was performed and the middle 12 cm of the weld length was taken; the first and last 1 cm, where arc starting and arc extinguishing produce data inconsistent with the middle section despite the same welding speed, were discarded as abnormal, and the fusion width of the middle 10 cm was used as the final collected data.
To obtain accurate data on the back-side weld width, a laser line scanning system as shown in fig. 3 was set up. A laser line is projected onto the surface to be measured, perpendicular to the weld. After imaging by a side camera, the height of each point on the laser line is obtained by analysis; scanning along the weld reconstructs a virtual model of the weld, and the back-side weld width is extracted from the abrupt height change along each laser line. To smooth random errors in the laser measurements, the data are filtered with an averaging filter of length 5; abnormal samples with particularly large errors are deleted and replaced by the average of 20 neighboring samples. Fig. 4 shows the laser-measured back-side width before and after smoothing. The smoothed data are then fed into the prediction system.
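The post-processing of the laser measurements can be sketched as follows. The length-5 filter and the 20-neighbor replacement follow the text, but the 3-sigma criterion for detecting "particularly large errors" is an assumption:

```python
import numpy as np

def smooth_width(widths, win=5, neigh=20, outlier_sigma=3.0):
    """Clean a back-side width profile: replace gross outliers by the mean of
    `neigh` surrounding samples, then apply a length-`win` averaging filter.

    The 3-sigma outlier criterion is an illustrative assumption; the text only
    says samples with particularly large errors are replaced.
    """
    w = np.asarray(widths, dtype=float).copy()
    mu, sd = w.mean(), w.std()
    for i in np.where(np.abs(w - mu) > outlier_sigma * sd)[0]:
        lo, hi = max(0, i - neigh // 2), min(len(w), i + neigh // 2 + 1)
        neighbors = [j for j in range(lo, hi) if j != i]
        w[i] = w[neighbors].mean()                   # replace by neighborhood mean
    return np.convolve(w, np.ones(win) / win, mode="same")

widths = np.ones(200)
widths[100] = 50.0                  # injected measurement glitch
out = smooth_width(widths)          # interior values return to ~1.0 mm
```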
Because directly sampled sound data are highly redundant and their features are not obvious, it is difficult to perform end-to-end prediction of the back-side weld width from the raw signal. The sound signal is therefore preprocessed, and subsequent tasks proceed on the extracted features.
Frequency analysis of sound is common in many tasks: frequency-domain signals contain rich characteristic information, and a time-frequency power spectrogram can further express the time-varying behavior of the frequency-domain signal. The time-frequency power spectrum is obtained by short-time Fourier transform; according to the welding rhythm of the cold metal transfer welding machine, the STFT window length is set to 128 points and the step between windows to 32 points, so that each welding cycle in the time domain yields a corresponding 64×64 spectrogram.
Five weld seams were welded. The sound data of 3 seams serve as the training set, 1 seam as the validation set, and the last seam is used for the generalization experiment. The sound samples are framed according to the relation between welding gun position and sampling time during welding and paired with the scanned back-side weld width. The pairing rule ensures that the moment the welding gun is directly above a sample coincides with the moment the corresponding sound frame ends.
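This pairing rule can be sketched as below. The welding speed and sampling rate are those of the experiment; the 2048-sample frame length is a hypothetical value for illustration only:

```python
def frame_for_position(pos_mm, weld_speed_mm_s=5.0, fs=51200, frame_len=2048):
    """Return [start, end) sample indices of the sound frame whose end
    coincides with the torch being directly above pos_mm.

    frame_len is a hypothetical illustration value, not from the patent.
    """
    t_end = pos_mm / weld_speed_mm_s        # time the torch reaches pos_mm (s)
    end_sample = int(round(t_end * fs))     # matching audio sample index
    return end_sample - frame_len, end_sample

start, end = frame_for_position(10.0)       # 10 mm at 5 mm/s -> 2 s of audio
```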
Due to factors such as molten pool flow and heat conduction, the penetration state at a given position is influenced not only by the welding parameters when the gun is directly above it but also by the states before and after. The experimental goal is therefore defined as real-time monitoring of the weld penetration state based on backward-looking time-sequence information, providing a basis for future control of the welding parameters. Accordingly, the input of the proposed neural network is designed as the spectrograms generated from sound frames ending 0, 10, 20 and 30 power cycles before the moment the gun is above a given position, fed into the network as a sequence.
The convolutional neural network of the invention mainly comprises: a feature extraction part formed by serially connecting dense connection modules and transmission modules modeled on DenseNet-121, and a fully connected part that integrates the frames along the time sequence and further encodes them into the output result.
The dense connection module is formed by connecting several convolution modules in series in a specific order; each convolution module contains, connected in sequence, a convolution layer with a 1×1 kernel, a batch normalization layer, an activation function, and a convolution layer with a 3×3 kernel. The convolution modules arranged in sequence form the dense connection module; the portion marked with dark gray circles in fig. 6 is the combination of these convolution modules.
The input of each convolution module consists of the overall input of the dense connection module concatenated, along the channel dimension, with all feature maps generated before it. By splicing feature maps, each convolution module can simultaneously use the feature information extracted at previous levels, realizing feature reuse. In the experiments of the invention, each convolution module outputs 32 feature maps.
After the feature maps generated by each dense connection module are processed, they enter a transmission module and are then input to the next dense connection module. The transmission module combines a convolution layer and an average pooling layer, adjusting the dimension and size of the feature maps in preparation for the next dense connection module: the convolution kernel is 1×1, and the pooling window size and stride are both 2. This structural design of dense connection and transmission modules lets the network better extract and use feature information when processing welding sound signals, helping to improve the accuracy and stability of back-side width prediction.
The time sequence is preserved as a separate dimension of the tensor; after global average pooling following dense connection module 4, a fully connected layer along the time-sequence direction allows the network to extract the features each node exhibits in the time dimension. The linear layers were also structurally modified to suit the regression task, and detailed dimension information for the output of each module is given in Table 1.
Table 1 describes the convolutional neural network used for fusion width regression. "Fully connected (n)" denotes a fully connected layer with n output nodes; "convolution" denotes a sequence consisting of a batch normalization layer, a convolution layer with the stated parameters, and an activation function layer.
Verification and evaluation
The collected datasets (3 seams for training, 1 for validation and 1 for testing) were fed into the convolutional neural network of this architecture to train the back-side width regression model. The back-side width predictions for the training and test sets are shown in fig. 6; the smoothed results were obtained with an averaging filter of length 20. The predictions spread around the actual values, and the smoothed curve approximately describes the tendency of the back-side width to widen as the groove angle increases. The mean absolute error of the predicted data is 0.3003 mm and the mean square error is 0.1420 mm²; for the smoothed data, the mean absolute error is 0.2380 mm and the mean square error is 0.0997 mm².
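The reported error metrics and the length-20 smoothing can be computed with a sketch like the following (the toy arrays are illustrative, not the patent's data):

```python
import numpy as np

def evaluate(pred, truth, smooth_win=20):
    """Mean absolute error, mean square error, and the MAE after a
    length-`smooth_win` moving-average smoothing of the predictions."""
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    mae = float(np.mean(np.abs(pred - truth)))
    mse = float(np.mean((pred - truth) ** 2))
    smoothed = np.convolve(pred, np.ones(smooth_win) / smooth_win, mode="same")
    mae_smooth = float(np.mean(np.abs(smoothed - truth)))
    return mae, mse, mae_smooth

# toy check on made-up widths (mm)
mae, mse, _ = evaluate([1.0, 1.2, 0.8, 1.1], [1.0, 1.0, 1.0, 1.0], smooth_win=2)
# mae == 0.125 mm, mse == 0.0225 mm^2
```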
Unlike existing methods using sound signals as input, this method introduces an advanced DenseNet framework to address the low accuracy of penetration state prediction. DenseNet has shown strong fitting ability in conventional visual tasks, which brings new ideas to our approach. Adopting a CNN method also creates advantages for future integration with other advanced techniques using signals of different modalities.
Furthermore, the welding process is strongly affected by accumulated heat, which makes direct detection by sound signals alone difficult. The invention addresses this by exploiting the time-varying nature of the arc. In the task of predicting the back-side weld width, leading results among comparable methods are obtained: the mean square error reaches 0.1420 mm², further reduced to 0.0997 mm² after curve smoothing. This result brings new possibilities for performance improvement in the welding field.
Through the above steps, the method combines the short-time Fourier transform and a convolutional neural network to capture the dynamic changes and time-sequence information of the sound signal during welding, improving the accuracy of back-side weld width prediction. With real-time collection and processing of the sound signal and no manual intervention, the back-side weld width can be monitored in real time during welding, realizing real-time prediction of welding quality before solidification.

Claims (10)

1. The welding seam fusion width quantification method based on the sonogram characteristic analysis is characterized by comprising the following steps of:
1. the method comprises the steps that an acoustic signal generated in the welding process is collected by using an acoustic collection system, wherein the acoustic collection system comprises a microphone and a data collection card, and the microphone is fixedly connected with a welding gun and moves synchronously with the welding gun;
2. determining window length and step length of short-time Fourier transform according to welding rhythm parameters, and generating a spectrogram of a sound signal through a short-time Fourier transform method;
3. determining a time sequence interval through the welding speed and the solidification speed of a molten pool, and converting the spectrogram into a spectrogram sequence;
4. constructing a convolutional neural network model for real-time prediction of the fusion welding state, and predicting the back-side fusion width using this model and the spectrogram sequence, wherein the model consists of an input downsampling layer, a dense connection module, a transmission module, a time-domain information extraction layer and a fully connected output module connected in sequence.
2. The weld bead fusion width quantification method based on spectrogram characteristic analysis according to claim 1, wherein in the fourth step the input downsampling layer consists of a batch normalization layer, a dilated convolution layer, an activation function layer and a max pooling layer, and extracts low-level texture information, outputting it as a low-level texture map.
3. The weld bead fusion width quantification method based on spectrogram characteristic analysis according to claim 1, wherein in the fourth step the dense connection module is formed by connecting a plurality of convolution modules in series, each convolution module comprising, connected in sequence, a first normalization layer, a first nonlinear activation layer, a convolution layer with a 1×1 kernel, a second normalization layer, a second nonlinear activation layer and a convolution layer with a 3×3 kernel.
4. The weld bead fusion width quantification method based on spectrogram characteristic analysis according to claim 3, wherein the input of each convolution module is the channel-wise concatenation of the overall input of the dense connection module and the output of the preceding convolution module, the output is k feature maps, and the overall input is the output of the input downsampling layer.
5. The method for quantifying weld fusion width based on spectrogram property analysis of claim 4, wherein k is 32.
6. The weld bead fusion width quantification method based on spectrogram characteristic analysis according to claim 1, wherein in the fourth step the transmission module consists of a convolution layer with a 1×1 kernel and an average pooling layer whose kernel size and stride are both 2.
7. The weld bead fusion width quantification method based on spectrogram characteristic analysis according to claim 1, wherein in the fourth step each dense connection module is followed by a transmission module.
8. The weld bead fusion width quantification method based on spectrogram characteristic analysis according to claim 1, wherein in the fourth step the time-domain information extraction layer preserves the time-sequence dimension ahead of the fully connected output module: a fully connected layer whose input size equals the sequence length and whose output size is 1 extracts features along the time-sequence dimension.
9. The weld bead fusion width quantification method based on spectrogram characteristic analysis according to claim 1, wherein in the fourth step the fully connected output module comprises a plurality of fully connected layers that progressively shrink the feature dimension until a single output head remains.
10. The weld bead fusion width quantification method based on spectrogram characteristic analysis according to claim 1, wherein the frequency response of the microphone in the sound collection system extends beyond 20 kHz, and the sampling rate of the data acquisition card is higher than 40 kHz.
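As a back-of-envelope sketch of the channel arithmetic implied by the dense connection and transmission modules above (growth rate k = 32, one transmission module after each dense connection module), the following is a minimal illustration. The initial channel count, the block depths, and the 0.5 channel compression in the transmission module are DenseNet-style assumptions, not values stated in the claims.

```python
def dense_block_channels(c_in, n_modules, k=32):
    # Each convolution module receives the concatenation of earlier feature maps
    # with the block input and emits k new feature maps, so channels grow by k
    # per module.
    return c_in + n_modules * k

def transmission_channels(c_in, compression=0.5):
    # A 1x1 convolution followed by 2x2 average pooling (stride 2); the 0.5
    # channel compression is an assumed default, not stated in the claims.
    return int(c_in * compression)

c = 64                     # assumed output width of the input downsampling layer
for depth in (6, 12, 24):  # hypothetical dense-block depths
    c = dense_block_channels(c, depth)  # dense connection module
    c = transmission_channels(c)        # transmission module after each block
print(c)  # 512
```

This kind of bookkeeping is useful when sizing the fully connected output module, since its first layer must match the flattened channel count that survives the final transmission module.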
CN202311736148.7A 2023-12-18 2023-12-18 Weld bead fusion width quantification method based on sonogram characteristic analysis Pending CN117428291A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311736148.7A CN117428291A (en) 2023-12-18 2023-12-18 Weld bead fusion width quantification method based on sonogram characteristic analysis


Publications (1)

Publication Number Publication Date
CN117428291A true CN117428291A (en) 2024-01-23

Family

ID=89546425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311736148.7A Pending CN117428291A (en) 2023-12-18 2023-12-18 Weld bead fusion width quantification method based on sonogram characteristic analysis

Country Status (1)

Country Link
CN (1) CN117428291A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118002888A (en) * 2024-04-10 2024-05-10 南京理工大学 Robust real-time weld joint tracking method based on time sequence information fusion

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379740A (en) * 2021-08-02 2021-09-10 上海工程技术大学 VPPAW fusion in-situ real-time monitoring system based on perforation molten pool image and deep learning
CN113838050A (en) * 2021-11-23 2021-12-24 昆山宝锦激光拼焊有限公司 Weld forming prediction method based on complementary two-channel convolution neural network
CN114088817A (en) * 2021-10-28 2022-02-25 扬州大学 Deep learning flat ceramic membrane ultrasonic defect detection method based on deep features
CN114309895A (en) * 2021-12-17 2022-04-12 山东大学 Deep learning method for predicting weld morphology by welding pool image
CN114905116A (en) * 2022-06-02 2022-08-16 南京理工大学 Groove weld penetration monitoring method based on feature learning
CN115685881A (en) * 2022-11-07 2023-02-03 北京科技大学 Low-stress high-precision electric arc additive process control method based on computational intelligence
CN115889975A (en) * 2023-01-31 2023-04-04 广东工业大学 Laser welding process monitoring system and method
CN116604151A (en) * 2023-05-19 2023-08-18 上海交通大学宁波人工智能研究院 System and method for monitoring MIG welding seam state based on audio-visual dual mode


Similar Documents

Publication Publication Date Title
US20210318673A1 (en) In-Situ Inspection Method Based on Digital Data Model of Weld
CN106984813B (en) A kind of melt-processed process coaxial monitoring method and device in selective laser
CN117428291A (en) Weld bead fusion width quantification method based on sonogram characteristic analysis
CN108340088A (en) Laser precision machining visual on-line monitoring method and system
CN108067714B (en) Online monitoring and defect positioning system and method for end connection quality of thin-wall circular seam
CN108931535A (en) A kind of laser gain material manufacture gas hole defect on-line monitoring method
CN207205270U (en) A kind of 3D printing successively detects reverse part model and positioning defect device
CN111203639B (en) Double-laser-beam bilateral synchronous welding filler wire molten drop transition monitoring system and method based on high-speed camera shooting
CN111061231B (en) Weld assembly gap and misalignment feed-forward molten pool monitoring system and penetration monitoring method
CN206567687U (en) Detect the penetration control device of frequency of oscillation in a kind of pulse laser exciting TIG molten baths
CN111761819B (en) Online monitoring method for defects of laser powder bed melting forming part
CN112157368A (en) Laser non-penetration welding seam penetration nondestructive testing method
TWI632968B (en) Prediction method of electrical discharge machining accuracy
Liu et al. Real-time defect detection of laser additive manufacturing based on support vector machine
Gao et al. Feature extraction of laser welding pool image and application in welding quality identification
CN113554587A (en) Molten pool image geometric feature extraction method and system based on deep learning
Tang et al. A new method to assess fiber laser welding quality of stainless steel 304 based on machine vision and hidden Markov models
CN115266951A (en) Method and system for monitoring internal defects in selective laser melting process in real time on line
Otieno et al. Imaging and wear analysis of micro-tools using machine vision
CN103543157B (en) Off-line strip surface image simulation dynamic collecting method and device
Omlor et al. Inline process monitoring of hairpin welding using optical and acoustic quality metrics
Hong et al. AF-FTTSnet: An end-to-end two-stream convolutional neural network for online quality monitoring of robotic welding
Ding et al. Machine-vision-based defect detection using circular Hough transform in laser welding
Wang et al. On-line defect recognition of MIG lap welding for stainless steel sheet based on weld image and CMT voltage: Feature fusion and attention weights visualization
CN110147818B (en) Sparse representation-based laser welding forming defect prediction classification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination