CN116758394A - Prediction method for forming section of rivet-free riveting joint based on deep learning - Google Patents


Info

Publication number
CN116758394A
Authority
CN
China
Prior art keywords
image
layer
deep learning
convolution
rivet
Prior art date
Legal status
Pending
Application number
CN202310647377.5A
Other languages
Chinese (zh)
Inventor
刘洋
吴庆军
代祎琳
Current Assignee
Qingdao University of Technology
Original Assignee
Qingdao University of Technology
Priority date
Filing date
Publication date
Application filed by Qingdao University of Technology
Priority to CN202310647377.5A
Publication of CN116758394A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0475 Generative networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/094 Adversarial learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Abstract

The application discloses a deep-learning-based method for predicting the formed section of a rivetless riveted joint, comprising the following steps: step one, collecting condition information; step two, establishing a prediction model based on a conditional generative adversarial network (cGAN) built from CNNs with residual blocks, the model comprising a generator and a discriminator, wherein layers 1, 3, 5, 7 and 9 of the generator are convolution residual blocks, layers 2, 4, 6 and 8 are transposed residual blocks, and layer 10 is an activation function layer; step three, training the prediction model with a training data set to obtain a deep learning network model; and step four, inputting random noise and condition information into the deep learning network model to obtain the formed section image of the rivetless riveted joint. The application greatly shortens prediction time, improves accuracy, and eliminates mesh division and time stepping.

Description

Prediction method for forming section of rivet-free riveting joint based on deep learning
Technical Field
The application relates to the technical field of image processing, in particular to a method for predicting a formed section of a rivetless riveting joint based on deep learning.
Background
As lightweight materials are increasingly used in vehicle body structures, multi-material hybrid structures pose challenges to joining technology. Against this background, rivetless riveting has emerged: it requires no additional fastener, forms a self-locking structure through plastic deformation of the base plates, and, by controlling the joining process parameters, yields joints with high strength and fatigue performance. Rivetless riveting can join dissimilar and hard-to-weld materials and reduces the weight and manufacturing cost of the joined component, so it is widely applied in the automotive and household-appliance industries.
However, the forming quality of the rivetless riveted joint is affected by parameters such as substrate thickness, punch shape, lower-die shape and punch displacement; if the joining process parameters are chosen improperly, the joint's self-locking amount and neck thickness tend to be small, and neck fracture may even occur. For specific materials to be joined, the optimal riveting parameters are at present mainly obtained through a large number of experiments, which are costly and time-consuming and lengthen the product development cycle. Establishing a forming simulation model of the joint and predicting its formed section shape by numerical simulation, so as to optimize the die shape and the riveting process parameters, is one effective way to shorten the vehicle development cycle. However, rivetless riveting simulation involves material damage, complex multi-body contact and large mesh deformation; smaller mesh sizes are often adopted to increase prediction accuracy, so traditional finite element simulation analysis requires considerable time.
Disclosure of Invention
The application aims to design and develop a deep-learning-based method for predicting the formed section of a rivetless riveted joint; by predicting the formed section through a cGAN network, the prediction time is greatly shortened and the accuracy is improved.
The technical scheme provided by the application is as follows:
A method for predicting the formed section of a rivetless riveted joint based on deep learning comprises the following steps:
step one, collecting condition information: punch displacement, upper plate thickness, lower die diameter and groove depth;
step two, establishing a prediction model based on a conditional generative adversarial network (cGAN) built from CNNs with residual blocks;
wherein the prediction model comprises: a generator and a discriminator;
layers 1, 3, 5, 7 and 9 of the generator are convolution residual blocks, layers 2, 4, 6 and 8 are transposed residual blocks, and layer 10 is an activation function layer; the objective function of the generator is:
V(G) = E_{c,z}[(D(G(c, z), c) - 1)²] + λ·F(G);
where V(G) is the least-squares loss of the generator G, E_{c,z} denotes the expectation when the condition information is c and the random noise is z, D(G(c, z), c) is the probability given by the discriminator D that the generated data G(c, z) under condition information c is real data, λ is the regularization coefficient, and F(G) is the regularization term;
step three, training the prediction model with a training data set to obtain a deep learning network model;
and step four, inputting random noise and condition information into the deep learning network model to obtain the formed section image of the rivetless riveted joint.
Preferably, the convolution residual block comprises, connected in sequence, a convolution layer, a batch normalization layer, a rectification layer, a convolution layer, and a batch normalization layer.
Preferably, the transposed residual block comprises a transposed-convolution adjustment channel arranged in parallel with a sequentially connected transposed convolution layer, batch normalization layer and rectification layer;
the convolution kernel size of the transposed-convolution adjustment channel is 1×1, and that of the transposed convolution layer is 3×3.
Preferably, layers 1 to 4 of the discriminator each consist of a convolution layer, a batch normalization layer and a rectification layer, and layer 5 is a convolution layer; the objective function of the discriminator is:
V(D) = E_{x,c}[(D(x, c) - 1)²] + E_{c,z}[D(G(c, z), c)²];
where V(D) is the least-squares loss of the discriminator D, E_{x,c} denotes the expectation when the real data is x and the condition information is c, and D(x, c) is the probability given by the discriminator D that the real data x under condition information c is real data.
Preferably, the rectification layer in the transposed residual block is a linear rectification function.
Preferably, the activation function layer is a normalized exponential function.
Preferably, the training data set consists of label images and enhanced images;
the label image acquisition process comprises the following steps:
step 1, obtaining section images of rivetless riveted joints as sample images through experiments and multiple simulation models;
step 2, labeling the sample images: after segmenting each sample image into upper and lower plates, retaining only the geometric shape and position information of the materials to obtain the label image corresponding to the sample image;
the enhanced image is obtained from the label image by random color adjustment, brightness adjustment, translation, blurring, cropping, sharpening, flipping, scaling, or rotation by a random angle, with a maximum rotation angle of 12°.
Preferably, the enhanced images include: all label images after brightness adjustment, all label images after color adjustment, all label images after rotation by a random angle of at most 12°, one half of the label images after left-right flipping, and one half of the label images after cropping to 90% of the original image.
Preferably, the training parameters of the prediction model include:
training optimizers, learning rate, regularization coefficients, batch size, and loss function.
Preferably, the rectification layer in the discriminator is a leaky rectified linear unit (LeakyReLU).
The beneficial effects of the application are as follows:
(1) In the deep-learning-based method for predicting the formed section of the rivetless riveted joint designed and developed by the application, a cGAN deep neural network from the deep learning field is adapted, constructed, and introduced into the engineering application of rivetless-riveting formed-section images. This effectively shortens the time for measuring the geometric parameters of the rivetless-riveting section, avoids interference from the measurer's subjective factors, improves measurement precision and efficiency, reduces the complexity and cost of experiments, and shortens the experimental period;
(2) Because the cGAN deep neural network is trained on experimental data and makes no modeling assumptions (it is data-driven), the predicted result is closer to the experimental result. The cGAN deep neural network can handle complex nonlinear relations and better capture the latent structures and characteristics in the data, and the generator can generate high-quality geometric-parameter measurement data of rivetless-riveting sections, enlarging the training data set and improving the generalization capability of the model;
(3) Compared with traditional FEM simulation, once training is finished the prediction time is extremely short (within a few seconds); in addition, the problems of mesh division and time stepping are eliminated, so the method is more efficient in computational complexity and resource consumption, can promote the technical development of the field, and provides a more accurate, efficient and reliable method for future rivetless-riveting engineering applications.
Drawings
Fig. 1 is a schematic flow chart of a method for predicting a formed section of a rivetless riveted joint based on deep learning.
Fig. 2 is a schematic structural diagram of the CNN-based conditional generative adversarial network prediction model with residual blocks according to the present application.
Fig. 3 is a schematic structural diagram of a convolution residual block according to the present application.
Fig. 4 is a schematic structural diagram of a transposed residual block according to the present application.
Fig. 5 is a schematic structural diagram of the discriminator according to the application.
Fig. 6 is a schematic structural diagram of a lower die according to the embodiment of the application.
Fig. 7 is a schematic diagram of a label image in an embodiment of the application.
Fig. 8 is a schematic view of an enhanced image according to an embodiment of the present application.
FIG. 9 is a schematic view of the predicted formed-section image of the rivetless riveted joint in the embodiment of the present application.
Fig. 10 is a schematic diagram of a practical cross section obtained by finite element simulation in the embodiment of the present application.
Detailed Description
The present application is described in further detail below so that those skilled in the art can practice it with reference to the specification.
As shown in FIG. 1, the method for predicting the formed section of the rivetless riveting joint based on deep learning provided by the application specifically comprises the following steps:
step one, collecting condition information: punch displacement, upper plate thickness, lower die diameter and groove depth;
step two, establishing the prediction model (cGAN) based on a conditional generative adversarial network built from CNNs with residual blocks;
wherein, as shown in fig. 2, the cGAN is a generative model consisting of two neural networks, a generator and a discriminator: the generator generates images from random noise, and the discriminator attempts to distinguish real images from generated ones; the two networks play against each other until the generator can produce sufficiently realistic images. In the generator:
layers 1, 3, 5, 7 and 9 are convolution residual blocks, layers 2, 4, 6 and 8 are transposed residual blocks, and layer 10 is an activation function layer that produces the output image; the objective function of the generator is:
V(G) = E_{c,z}[(D(G(c, z), c) - 1)²] + λ·F(G);
where V(G) is the least-squares loss of the generator G, E_{c,z} denotes the expectation when the condition information is c and the random noise is z, D(G(c, z), c) is the probability given by the discriminator D that the generated data G(c, z) under condition information c is real data, λ is the regularization coefficient, and F(G) is the regularization term;
the regularization term satisfies:
F(G) = F_L1(G) = E_{x,c,z}[||x - G(c, z)||₁];
where F_L1(G) is the regularization term using the L1 norm, E_{x,c,z} denotes the expectation when the real data is x, the condition information is c and the random noise is z, and ||x - G(c, z)||₁ is the L1 norm between the real data x and the generated data G(c, z), i.e. the sum of the absolute values of their differences.
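Assuming PyTorch (one of the frameworks the description later names), the generator objective above can be sketched as follows; the function and tensor names are illustrative, and the mean absolute error stands in for the L1 norm, a common implementation choice:

```python
import torch

def generator_loss(d_fake, fake_images, real_images, lam=1e-6):
    """Least-squares GAN generator loss with an L1 regularization term.

    d_fake:      discriminator scores D(G(c, z), c) for generated images
    fake_images: generated data G(c, z)
    real_images: real section images x
    lam:         regularization coefficient lambda
    """
    ls_loss = torch.mean((d_fake - 1.0) ** 2)                   # (D(G(c,z),c) - 1)^2
    l1_term = torch.mean(torch.abs(real_images - fake_images))  # |x - G(c,z)| averaged
    return ls_loss + lam * l1_term
```

When the discriminator is fully fooled (d_fake ≈ 1) and the generated image matches the real one, this loss approaches zero.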
A residual block is a structure that can skip one or more network layers, which helps avoid the vanishing-gradient problem. As shown in fig. 3, the convolution residual block comprises, connected in sequence, a convolution layer, a batch normalization layer, a rectification layer, a convolution layer and a batch normalization layer, and is used for feature extraction; the convolution kernel size of the convolution layers is 3×3;
the rectification layer is a ReLU activation function and satisfies:
f₁(x₁) = max(0, x₁);
where f₁(x₁) is the output value of the ReLU activation function and x₁ is its input value.
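A minimal PyTorch sketch of the convolution residual block described above; the stride, padding and the 1×1 skip-path convolution used to match channel counts are illustrative assumptions:

```python
import torch
from torch import nn

class ConvResidualBlock(nn.Module):
    """Conv -> BN -> ReLU -> Conv -> BN, plus a skip connection (3x3 kernels)."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 convolution on the skip path when channel counts differ (assumption)
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, kernel_size=1))

    def forward(self, x):
        return self.body(x) + self.skip(x)
```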
As shown in fig. 4, the transposed residual block comprises a transposed-convolution adjustment channel in parallel with a sequentially connected transposed convolution layer, batch normalization layer and rectification layer. The convolution kernel size of the adjustment channel is 1×1 and it is used to adjust the number of image channels of the transposed residual block; the convolution kernel size of the transposed convolution layer is 3×3. If the number of channels of the image output by the preceding convolution residual block equals the channel number of the transposed convolution layer, the output passes directly through the sequentially connected transposed convolution layer, batch normalization layer and rectification layer; otherwise the channel number is first adjusted through the transposed-convolution adjustment channel. The rectification layer is a ReLU activation function.
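A PyTorch sketch of the transposed residual block described above. The strides, padding, 2× upsampling, and the way the parallel 1×1 adjustment channel is combined with the main path are illustrative assumptions:

```python
import torch
from torch import nn

class TransposedResidualBlock(nn.Module):
    """Transposed-conv residual block (a sketch).

    Main path: 3x3 transposed convolution -> BN -> ReLU (upsamples 2x).
    Parallel path: 1x1 transposed convolution adjusting the channel count
    so that the two branches can be summed.
    """

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.main = nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # 1x1 transposed convolution on the parallel adjustment channel
        self.adjust = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=1,
                                         stride=2, output_padding=1)

    def forward(self, x):
        return self.main(x) + self.adjust(x)
```

Both branches double the spatial resolution, so their outputs have identical shapes and can be added element-wise.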
The activation function layer that outputs the generated image may use either a normalized exponential function (Softmax) or a tanh function. The Softmax function maps the inputs to a probability distribution in (0, 1) whose outputs sum to 1, while the tanh function maps the inputs to values in (-1, 1) with an average around 0. Because Softmax better distinguishes the probabilities of different classes (it is mainly used for multi-class problems), whereas tanh lets values fluctuate between -1 and 1, the activation function layer of the present application selects the Softmax function.
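The two mapping properties cited above can be checked directly in PyTorch (the input values here are arbitrary):

```python
import torch

x = torch.tensor([[2.0, -1.0, 0.5]])

p = torch.softmax(x, dim=1)  # values in (0, 1); each row sums to 1
t = torch.tanh(x)            # values in (-1, 1)

# Softmax outputs form a probability distribution over the classes
assert torch.allclose(p.sum(dim=1), torch.ones(1))
# tanh outputs stay strictly inside (-1, 1)
assert torch.all((t > -1) & (t < 1))
```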
As shown in FIG. 5, layers 1 to 4 of the discriminator each consist of a convolution layer, a batch normalization layer and a rectification layer, and layer 5 is a convolution layer. Compared with other combinations such as GAN Loss + CEE, the result obtained with GAN Loss + L1 Loss is more accurate than with the original loss function. The objective function of the discriminator is:
V(D) = E_{x,c}[(D(x, c) - 1)²] + E_{c,z}[D(G(c, z), c)²];
where V(D) is the least-squares loss of the discriminator D, E_{x,c} denotes the expectation when the real data is x and the condition information is c, and D(x, c) is the probability given by the discriminator D that the real data x under condition information c is real data;
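A PyTorch sketch of the discriminator objective above; the function name and argument layout are illustrative:

```python
import torch

def discriminator_loss(d_real, d_fake):
    """Least-squares GAN discriminator loss (a sketch).

    d_real: scores D(x, c) for real section images under condition c
    d_fake: scores D(G(c, z), c) for generated images
    Drives D(x, c) toward 1 and D(G(c, z), c) toward 0.
    """
    return torch.mean((d_real - 1.0) ** 2) + torch.mean(d_fake ** 2)
```

The loss is zero exactly when the discriminator outputs 1 for every real sample and 0 for every generated sample.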
the rectification layer is a LeakyReLU function and satisfies:
f₂(x₂) = x₂ if x₂ ≥ 0; f₂(x₂) = α·x₂ if x₂ < 0;
where f₂(x₂) is the output value of the LeakyReLU function, x₂ is its input value, and α is a small positive constant;
in this embodiment α = 0.3, which is the slope of the negative half-axis: when the input is positive the output equals the input, and when the input is negative the output equals the input multiplied by 0.3.
In the present embodiment, the cGAN model is built with one of the deep learning frameworks TensorFlow, Keras, PyTorch, Caffe, Theano, PaddlePaddle, MXNet, CNTK, Chainer or Deeplearning4j.
Step three, training the prediction model with a training data set to obtain the deep learning network model; the procedure is as follows:
1. constructing an initial data set:
this data set should contain enough samples: a large number of rivetless-joint images obtained through multiple simulation models, together with OM (optical microscope) pictures acquired in experiments, serve as the sample images of the application, so that the cGAN model can learn from them;
2. data preprocessing:
labeling the sample images in the initial data set with a labeling tool: the outlines of the upper and lower plates are annotated and the other areas are treated as background, removing unnecessary information contained in the experimentally obtained OM images, such as surface details, material texture, image contrast and noise, and retaining only basic information such as the geometric shapes and positions of the materials, to obtain the label images corresponding to the sample images;
3. augmenting an initial data set using data augmentation:
randomly performing one or more of color adjustment, brightness adjustment, translation, blurring, scaling, cropping, sharpening, or rotation by a random angle (maximum 12°) on the label images, thereby generating enough sample images for the initial training set and obtaining the enhanced-image training set;
in this embodiment, the augmentation tool may be one or a combination of OpenCV, imgaug, skimage, PIL, Augmentor or Albumentations.
The order of data enhancement and data preprocessing is not fixed; data enhancement may also be performed first, followed by data preprocessing;
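A minimal augmentation sketch using PIL (one of the tools listed above); the per-operation probabilities and enhancement-factor ranges are illustrative assumptions, while the 12° rotation limit follows the description:

```python
import random
from PIL import Image, ImageEnhance

def augment(label_image: Image.Image) -> Image.Image:
    """Randomly apply some of the augmentations described above to a label image."""
    img = label_image
    if random.random() < 0.5:  # brightness adjustment
        img = ImageEnhance.Brightness(img).enhance(random.uniform(0.8, 1.2))
    if random.random() < 0.5:  # color adjustment
        img = ImageEnhance.Color(img).enhance(random.uniform(0.8, 1.2))
    if random.random() < 0.5:  # left-right flip
        img = img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
    if random.random() < 0.5:  # rotation by at most 12 degrees
        img = img.rotate(random.uniform(-12, 12))
    return img
```

Each call produces a different variant of the same label image, which is how the enhanced-image training set is grown from a small initial data set.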
4. training the cGAN model:
wherein the training parameters include: training an optimizer, a learning rate, a regularization coefficient, a batch size and a loss function;
in the training process, firstly, parameters of a generator and a discriminator are initialized, super parameters such as a learning rate, an optimizer and a loss function are set, then a batch of real data x and corresponding condition information c are randomly extracted from a training data set formed by an enhanced image training set and a label image, random noise z and the condition information c are input into the generator to obtain a batch of generated data G (c, z), the generated data G (c, z) and the condition information c are input into the discriminator to obtain a discrimination result D (G, z), the real data x and the corresponding condition information c are input into the discriminator to obtain the discrimination result D (x, c), the loss function of the discriminator is calculated, the parameters of the discriminator are updated according to the loss function, so that D (x, c) is close to 1, D (G, z), c) is close to 0, then the training is carried out again, the loss function of the generator is calculated, and the parameters of the generator are updated through feedback of the discriminator, so that D (G, c) is close to 1 until convergence is achieved.
In other words, condition information and a random noise vector are input into the generator, while the discriminator receives three inputs: condition information, generated pictures and real pictures. Through python code, the condition information and the image are adjusted to the same width and height and concatenated along the channel dimension; this guides the generator to produce a section image matching the condition and helps the discriminator distinguish real from generated section images. The generator produces a segmented image of the joint section and passes it to the discriminator, which also receives an external real image and the condition information. The generator and the discriminator are trained continuously so that the generator produces increasingly realistic data, and these steps are repeated until the generator can produce sufficiently realistic images with high accuracy.
In this embodiment, convergence is judged by observing whether the loss-function curves of the generator and the discriminator become stable or change periodically, or by observing whether the samples generated by the generator are clear and as desired.
In this embodiment, the optimizer of the whole training process may be one or a combination of Adam, Adamax, Nadam, BGD, SGD, MBGD, Momentum, Adagrad, Adadelta or RMSprop.
And step four, inputting random noise and condition information into the deep learning network model to obtain a section image formed by the rivetless riveting joint.
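Step four is a single forward pass. A minimal sketch assuming PyTorch, with a stand-in linear module in place of the trained generator; `z_dim`, the condition-vector layout (punch displacement, upper-plate thickness, lower-die diameter, groove depth) and the output resolution are illustrative:

```python
import torch
from torch import nn

z_dim = 100
# Stand-in for the trained generator G (hypothetical; a real run would
# load the trained cGAN generator instead).
generator = nn.Linear(z_dim + 4, 64 * 64)

# Condition information: punch displacement, upper-plate thickness,
# lower-die diameter, groove depth (values from the embodiment, in mm).
cond = torch.tensor([[5.0, 1.5, 9.0, 1.5]])
z = torch.randn(1, z_dim)  # random noise

with torch.no_grad():  # inference only, no gradients needed
    section = generator(torch.cat([z, cond], dim=1)).reshape(1, 64, 64)
```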
Examples
Step one, collecting condition information: punch displacement, upper plate thickness, lower die diameter and groove depth;
as shown in fig. 6, a schematic diagram of a lower die is shown, that is, the condition information includes: the diameter of the lower die is 9mm, the depth of the groove is 1.5mm, the displacement of the punch is 5mm, the upper plate is made of aluminum alloy, the thickness of the upper plate is 1.5mm, the lower plate is made of steel, and the thickness of the lower plate is 1.5mm;
step two, establishing the prediction model (cGAN) based on a conditional generative adversarial network built from CNNs with residual blocks;
step three, training the prediction model with the training data set to obtain the deep learning network model;
the initial data set is 30 rivetless-riveting images; as shown in fig. 7, label images are obtained after data preprocessing, and data enhancement is then performed on the label images, specifically: brightness adjustment and color adjustment are applied to all label images, rotation by a random angle of at most 12° is applied to all label images, one half of the label images are flipped left-right, and one half of the label images are cropped to a scale of 0.9; the enhanced images are shown in fig. 8;
the loss function is defined as the conventional loss function plus an L1 loss, and the Adam optimizer is adopted for the whole training process. Bayesian optimization is a sequential design strategy for global optimization that can find the global optimum faster without assuming any functional form; through Bayesian optimization, the learning rate is set to 3.218×10⁻⁷ and the regularization coefficient λ to 10⁻⁶. The batch size is 16, and 3 dropout layers are used in the discriminator to reduce the interdependence among features, prevent overfitting and improve the generalization capability of the model. 545 epochs are trained in total, achieving a good effect;
step four, random noise and condition information are input into the deep learning network model, and a section image formed by the rivet-free riveting joint is obtained;
as shown in FIG. 9, the deep learning network model generates the predicted formed-section image of the rivetless riveted joint; fig. 10 shows the actual section obtained by finite element simulation of the joint, computed in abaqus software under the above condition information. Comparing the joint self-locking value, neck thickness and bottom thickness measured on the two images, the geometric-parameter accuracy of the section image predicted by the model disclosed by the application reaches 98%, which shows that the established deep learning network model can accurately predict the formed section shape of the rivetless riveted joint.
According to the deep-learning-based method for predicting the formed section of the rivetless riveted joint, the punch displacement, plate thickness, specific lower-die diameter and groove depth are input into the trained deep learning model, and the formed section image of the rivetless riveted connector can be predicted directly. This resolves the prior-art problems of complex, costly and lengthy experiments, and of numerical simulation analysis of complex models requiring accurate parameters and consuming much time and effort. It greatly shortens the time for measuring the rivetless-riveting section shape, improves analysis efficiency, overcomes the shortcomings of manual measurement and marking of section images in prior experiments, effectively alleviates the limitations and costs caused by improper selection of process parameters, supports subsequent automatic measurement of section geometric parameters, and reduces experimental error. Because the network is trained on experimental data, the predicted result is closer to the actual value, the analysis process is simple and low-cost, and strong support is provided for the application of lightweight materials in vehicle body structures.
Although embodiments of the present application have been disclosed above, the application is not limited to the details and embodiments shown; it is well suited to the various fields of use for which it is intended, and further modifications may readily be made by those skilled in the art. The application is therefore not limited to the particular details and examples shown and described herein, provided such modifications do not depart from the general concepts defined by the claims and their equivalents.

Claims (10)

1. A method for predicting the formed cross-section of a rivet-free riveted joint based on deep learning, characterized by comprising the following steps:
step one, collecting condition information: punch displacement, upper plate thickness, lower die diameter and groove depth;
step two, establishing a prediction model of a conditional generative adversarial network based on a CNN with residual blocks;
wherein the prediction model of the conditional generative adversarial network based on a CNN with residual blocks comprises a generator and a discriminator;
the 1 st, 3 rd, 5 th, 7 th and 9 th layers of the generator are all convolution residual blocks, the 2 nd, 4 th, 6 th and 8 th layers are all transposed residual blocks, the 10 th layer is an activation function layer, and the objective function of the generator is as follows:
wherein V(G) is the least-squares loss of the generator G, E_{c,z} is the expectation when the condition information is c and the random noise is z, D(G(c,z), c) is the probability assigned by the discriminator D that the generated data G(c,z) under condition information c is real data, λ is the regularization coefficient, and F(G) is the regularization term;
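The generator's objective function appears as an image in the original publication and is missing from this text-only record. Based on the term definitions in the claim and the standard least-squares (LSGAN) conditional-GAN form, a plausible reconstruction is the following; the exact constant factors (e.g. a leading 1/2) cannot be recovered from this record:

```latex
\min_G V(G) \;=\; \mathbb{E}_{c,z}\!\left[\left(D\!\left(G(c,z),\,c\right) - 1\right)^{2}\right] \;+\; \lambda\, F(G)
```

Here the first term drives the generator to produce samples the discriminator scores as real (score 1), and λF(G) is the regularization term named in the claim.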
step three, training the prediction model of the conditional generative adversarial network based on a CNN with residual blocks by using a training data set, to obtain a deep learning network model;
and step four, inputting random noise and condition information into the deep learning network model to obtain a cross-section image of the formed rivet-free riveted joint.
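As a rough illustration only (not the patented implementation; the block contents are merely labeled here), the generator layer schedule stated in claim 1 — convolution residual blocks at layers 1, 3, 5, 7 and 9, transposed residual blocks at layers 2, 4, 6 and 8, and an activation function layer at layer 10 — can be sketched in plain Python:

```python
def generator_layer_schedule(num_layers=10):
    """Return the layer type at each position of the generator
    described in claim 1: odd layers 1,3,5,7,9 are convolution
    residual blocks, even layers 2,4,6,8 are transposed residual
    blocks, and the final layer 10 is the activation function layer."""
    schedule = []
    for i in range(1, num_layers + 1):
        if i == num_layers:
            schedule.append("activation")
        elif i % 2 == 1:
            schedule.append("conv_residual")
        else:
            schedule.append("transposed_residual")
    return schedule

layers = generator_layer_schedule()
print(layers)
```

The alternation downsamples-then-upsamples feature maps in pairs, which is consistent with the claim's mix of convolution and transposed-convolution blocks.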
2. The method for predicting a rivet-free riveted joint forming section based on deep learning of claim 1, wherein the convolution residual block comprises a convolution layer, a batch normalization layer, a rectification layer, a convolution layer and a batch normalization layer connected in sequence.
3. The method for predicting a rivet-free riveted joint forming cross-section based on deep learning according to claim 2, wherein the transposed residual block comprises a transposed-convolution channel-adjustment branch arranged in parallel with a transposed convolution layer, a batch normalization layer and a rectification layer connected in sequence;
the convolution kernel size of the transposed-convolution channel-adjustment branch is 1×1, and the convolution kernel size of the transposed convolution layer is 3×3.
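As a hedged aside (the stride, padding and output-padding of the patent's transposed convolutions are not stated in this record; the values below are typical upsampling choices), the usual size arithmetic for a transposed convolution shows why the parallel 1×1 branch must be configured to match the main path's output shape for the residual addition:

```python
def transposed_conv_output_size(n_in, kernel, stride=2, padding=1, output_padding=1):
    """Standard output-size formula for one spatial dimension of a
    transposed convolution.  The stride/padding values are typical
    upsampling defaults, not taken from the patent."""
    return (n_in - 1) * stride - 2 * padding + kernel + output_padding

# With a 3x3 kernel and stride 2, an 8-pixel feature map grows to 16;
# the parallel 1x1 transposed-convolution branch (padding 0 here) also
# yields 16, so the residual addition is shape-compatible.
main_path = transposed_conv_output_size(8, kernel=3)
skip_path = transposed_conv_output_size(8, kernel=1, padding=0)
print(main_path, skip_path)
```

The 1×1 branch changes the channel count (and spatial size) without mixing neighboring pixels, which is why it is described as a channel-adjustment branch.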
4. The method for predicting a rivet-free riveted joint forming cross-section based on deep learning according to claim 1, wherein layers 1 to 4 of the discriminator each consist of a convolution layer, a batch normalization layer and a rectification layer, layer 5 is a convolution layer, and the objective function of the discriminator is as follows:
wherein V(D) is the least-squares loss of the discriminator D, E_{x,c} is the expectation when the real data is x and the condition information is c, and D(x,c) is the probability assigned by the discriminator D that the real data x under condition information c is real data.
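The discriminator's objective function is likewise an image in the original publication and is absent from this text-only record. Based on the term definitions above and the standard least-squares GAN form matching the generator objective, a plausible reconstruction is the following; again, exact constant factors (e.g. leading 1/2 terms) cannot be recovered:

```latex
\min_D V(D) \;=\; \mathbb{E}_{x,c}\!\left[\left(D(x,c) - 1\right)^{2}\right] \;+\; \mathbb{E}_{c,z}\!\left[\,D\!\left(G(c,z),\,c\right)^{2}\right]
```

The first term pushes the discriminator's score toward 1 on real data and the second toward 0 on generated data.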
5. The method for predicting a rivet-free riveted joint forming cross-section based on deep learning according to claim 3, wherein the rectification layer in the transposed residual block is a linear rectification function.
6. The method for predicting a rivet-free riveted joint forming section based on deep learning of claim 4, wherein the activation function layer is a normalized exponential function.
7. The method for predicting a rivet-free riveted joint forming cross-section based on deep learning according to claim 1, wherein the training data set consists of label images and enhanced images;
the label image acquisition process comprises the following steps:
step 1, obtaining cross-section images of rivetless riveted joints as sample images through experiments and multiple simulation models;
step 2, labeling the sample images: after segmenting each sample image into the upper plate and the lower plate, retaining the geometric shape and position information of the material to obtain a label image corresponding to the sample image;
the enhanced images are obtained from the label images by random color adjustment, brightness adjustment, translation, blurring, cropping, sharpening, flipping, scaling, or rotation by a random angle, the maximum rotation angle being 12°.
8. The method for predicting a rivet-free riveted joint forming cross-section based on deep learning according to claim 7, wherein the enhanced images comprise: images of the label images after brightness adjustment, images after color adjustment, images after rotation by a random angle of at most 12°, images of one half of the label images after left-right flipping, and images of one half of the label images after cropping to 90% of the original image.
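As an illustrative sketch only (the patent does not disclose its augmentation code, and rotation, blurring and sharpening would normally use an imaging library), the flip and the 90% crop from claims 7 and 8 can be expressed directly on NumPy image arrays:

```python
import numpy as np

def flip_left_right(img):
    """Horizontal (left-right) flip of an H x W or H x W x C image array."""
    return img[:, ::-1]

def center_crop_ratio(img, ratio=0.9):
    """Crop the central region whose sides are `ratio` times the
    original, mirroring the '90% of the original image' crop."""
    h, w = img.shape[:2]
    ch, cw = int(h * ratio), int(w * ratio)
    top, left = (h - ch) // 2, (w - cw) // 2
    return img[top:top + ch, left:left + cw]

img = np.arange(100).reshape(10, 10)
flipped = flip_left_right(img)
cropped = center_crop_ratio(img)
print(flipped.shape, cropped.shape)
```

Applying such transforms to a fraction of the label images multiplies the effective training-set size without new experiments, which is the purpose of the enhanced images in claim 7.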
9. The method for predicting a rivet-free riveted joint forming cross-section based on deep learning according to claim 8, wherein the training parameters of the prediction model of the conditional generative adversarial network based on a CNN with residual blocks comprise:
a training optimizer, learning rate, regularization coefficient, batch size and loss function.
10. The method for predicting a rivet-free riveted joint forming cross-section based on deep learning according to claim 9, wherein the rectification layer in the discriminator is a leaky linear rectification function.
CN202310647377.5A 2023-06-02 2023-06-02 Prediction method for forming section of rivet-free riveting joint based on deep learning Pending CN116758394A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310647377.5A CN116758394A (en) 2023-06-02 2023-06-02 Prediction method for forming section of rivet-free riveting joint based on deep learning


Publications (1)

Publication Number Publication Date
CN116758394A true CN116758394A (en) 2023-09-15

Family

ID=87960000


Country Status (1)

Country Link
CN (1) CN116758394A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419171A (en) * 2020-10-28 2021-02-26 云南电网有限责任公司昆明供电局 Image restoration method for multi-residual-block conditional generation countermeasure network
US20220208355A1 (en) * 2020-12-30 2022-06-30 London Health Sciences Centre Research Inc. Contrast-agent-free medical diagnostic imaging
CN115019120A (en) * 2022-05-17 2022-09-06 长三角先进材料研究院 Connecting piece profile prediction method for generating countermeasure network based on conditions
CN115601621A (en) * 2022-10-17 2023-01-13 湖北工业大学(Cn) Strong scattering medium active single-pixel imaging method based on condition generation countermeasure network



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination