CN113935989A - Metal material fracture fatigue strip identification and segmentation method based on deep learning - Google Patents


Info

Publication number
CN113935989A
Authority
CN
China
Legal status
Pending
Application number
CN202111392841.8A
Other languages
Chinese (zh)
Inventor
张啸尘
李峻州
张天
孟维迎
龙彦泽
李颂华
周鹏
石怀涛
丁兆洋
张宇
邹德芳
李翰文
范才子
金兰茹
刁梦楠
Current Assignee
Shenyang Jianzhu University
Original Assignee
Shenyang Jianzhu University
Priority date
Filing date
Publication date
Application filed by Shenyang Jianzhu University filed Critical Shenyang Jianzhu University
Priority to CN202111392841.8A
Publication of CN113935989A
Status: Pending

Classifications

    • G06T7/0004 Industrial image inspection
    • G06F18/2415 Classification techniques based on parametric or probabilistic models
    • G06F30/27 Design optimisation, verification or simulation using machine learning
    • G06N3/04 Neural networks; architecture, e.g. interconnection topology
    • G06N3/08 Neural networks; learning methods
    • G06T7/11 Region-based segmentation
    • G06F2119/02 Reliability analysis or reliability optimisation; failure analysis
    • G06F2119/14 Force analysis or force optimisation
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; machine component

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Quality & Reliability (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a metal material fracture fatigue strip identification and segmentation method based on deep learning, relating to the technical field of metal fractures. First, a mathematical model of the fracture fatigue strip is established. The extension trajectory of the fatigue strip is then simulated and, combined with the mathematical model, used to generate fatigue strip texture curve images. Fracture images are acquired and their strips annotated to obtain labeled data samples, which are combined with the strip curve images to build a fatigue strip sample data set. The data set images are preprocessed; a neural-network-based fatigue strip identification and segmentation model is built and trained on the labeled fatigue strip samples. Finally, the image to be identified and segmented is fed into the trained fatigue strip identification model, which performs strip identification and segmentation on the fracture fatigue strip image. The method can segment fatigue strip feature regions from complex fatigue fractures with high accuracy.

Description

Metal material fracture fatigue strip identification and segmentation method based on deep learning
Technical Field
The invention relates to the technical field of metal fractures, in particular to a metal material fracture fatigue strip identification and segmentation method based on deep learning.
Background
In engineering structures and mechanical equipment, components that serve under alternating stress for long periods are prone to fatigue fracture, which causes serious economic losses, so research effort on fatigue fracture has grown continuously. Studying the metal fracture surface is an important step in analyzing fatigue failure, and determining the fatigue strip spacing in order to quantitatively analyze fatigue life and fatigue stress is a key index and link in that analysis. However, because the alternating stress borne by a component during fracture is complex, real component fractures take diverse mixed forms, so the fatigue fracture images acquired by an electron microscope show complex multi-feature morphologies that are difficult to observe and analyze. Accurately identifying and isolating strips within such complex morphologies is therefore meaningful.
Early identification methods relied mainly on the experience of researchers in materials science and related fields, using manual visual inspection. Although expert inspection achieves high accuracy, it is inefficient, time-consuming, and labor-intensive, and is now rarely applied. With the continuing development of computer technology, applying computer image processing and pattern recognition to fracture morphology analysis, identification, and segmentation has become a hot topic in fracture research.
In fracture research, the gray level co-occurrence matrix (GLCM) is a classic second-order texture statistic, long regarded as an effective early method for identifying and classifying fatigue fracture morphology: it reflects both the distribution of gray levels and the spatial distribution of pixels with the same or similar gray levels. When GLCM is used for texture analysis, a set of statistics computed from it serves as the texture-recognition features. These statistics describe different aspects of the image texture, but some of them are correlated, so information is expressed redundantly and the computational cost grows. By computing the correlation-coefficient matrix of the texture feature parameters and removing parameters with large correlation coefficients, a set of mutually independent statistics can be obtained. To a certain extent this makes fracture image segmentation more accurate, greatly reduces the computational load, and improves image-processing efficiency.
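To make the GLCM approach concrete, the following standard-library Python sketch (an illustration, not the patent's code) computes a co-occurrence matrix for one offset and two of the classic statistics, contrast and energy; the toy image and the choice of offset are assumptions for demonstration only.

```python
# Illustrative GLCM sketch: count co-occurring gray-level pairs at a fixed
# pixel offset, normalize, then derive two common texture statistics.

def glcm(image, dx, dy, levels):
    """Count co-occurrences of gray-level pairs at offset (dx, dy)."""
    h, w = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                m[image[y][x]][image[ny][nx]] += 1
    return m

def normalize(m):
    total = sum(sum(row) for row in m)
    return [[v / total for v in row] for row in m]

def contrast(p):
    # sum of (i-j)^2 * p(i,j): large for rapidly varying texture
    n = len(p)
    return sum((i - j) ** 2 * p[i][j] for i in range(n) for j in range(n))

def energy(p):
    # sum of p(i,j)^2: large for uniform texture
    return sum(v * v for row in p for v in row)

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
p = normalize(glcm(img, 1, 0, levels=4))  # horizontal neighbor offset
print(contrast(p), energy(p))
```

Computing several such statistics over many offsets is what produces the correlated feature set described above.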
However, when this method is applied to the specific task of texture segmentation, its performance falls short of requirements. In strip segmentation, the complexity of the fracture process means the actual fracture often exhibits complex multi-feature morphology; a single feature can rarely segment the fatigue strips accurately, and the low accuracy makes the subsequent fracture morphology analysis difficult.
Fracture image identification and segmentation based on the wavelet packet transform is another typical method. The wavelet packet transform extends and improves the wavelet transform: building on the multi-resolution theory of wavelet analysis, it further subdivides the wavelet subspaces in a binary fashion, improving the frequency-domain resolution. In essence, it approximates the original signal with a linear combination of orthogonal wavelet packet basis functions. It is a more refined signal-analysis method: frequency bands can be selected freely according to the characteristics of the analyzed signal so that they match the signal spectrum, improving the time-domain resolution and allowing the corresponding features to be extracted more effectively.
However, the wavelet packet transform, as a time-frequency analysis tool, mainly handles the mid- and mid-high-frequency bands of image texture detail, while the detail regions of a fracture image also contain very rich stripe texture information. A single time-frequency analysis struggles to express these detail features, so a single extracted texture feature cannot accurately segment the fatigue strips from the complex texture morphology.
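The binary band splitting described above can be illustrated with a one-dimensional Haar wavelet packet decomposition; the Haar basis and the toy signal are assumptions for demonstration, not the patent's choice of wavelet.

```python
import math

def haar_step(signal):
    """One Haar level: orthonormal low-pass (sum) and high-pass
    (difference) coefficients over adjacent sample pairs."""
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def wavelet_packet(signal, depth):
    """Full binary decomposition: unlike the plain wavelet transform,
    the detail band is split again at every level."""
    nodes = [signal]
    for _ in range(depth):
        nxt = []
        for node in nodes:
            a, d = haar_step(node)
            nxt.extend([a, d])
        nodes = nxt
    return nodes

bands = wavelet_packet([4.0, 2.0, 6.0, 8.0], depth=2)
print(bands)  # four sub-bands covering the whole spectrum
```

Because every band is split, the mid- and high-frequency detail bands get the same resolution as the low-frequency band, which is the property the text above relies on.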
With the continued advance of science and technology, traditional machine learning and hand-crafted feature recognition techniques are gradually being replaced by deep learning. First, deep-learning-based image recognition clearly outperforms conventional image processing when segmenting complex, redundant samples. Because a deep neural network can be viewed as a high-dimensional feature processor, when facing a complex multi-feature fracture image it can identify and segment the fracture's features with high accuracy and efficiency. In addition, deep learning offers strong learning capability, wide coverage, strong adaptability, and good portability, making it well suited to fracture images with complex features.
Deep-learning-based image recognition has already made remarkable contributions in fields such as face recognition. For highly complex image segmentation, the key considerations are how to improve segmentation and recognition accuracy, how to improve segmentation efficiency, and how to keep the method simple. A new, accurate, and efficient fatigue strip identification and segmentation method is therefore urgently needed to meet the needs of materials researchers and effectively improve the precision with which strips are identified in fracture images.
Disclosure of Invention
The invention aims to solve the above technical problems of the prior art by providing a deep-learning-based method for identifying and segmenting fatigue strips in metal material fractures, capable of accurately identifying and segmenting the fatigue strips in fracture images.
In order to solve the above technical problems, the invention adopts the following technical scheme: a metal material fracture fatigue strip identification and segmentation method based on deep learning, comprising the following steps:
step 1: establishing a fracture fatigue strip mathematical model according to a fracture fatigue mechanism of a metal material;
(1) determining the relation between the fatigue stress amplitude and the fatigue strip interval according to the morphology mechanism of the fatigue strip;
let a be the fatigue crack propagation length after N stress cycles; the fatigue crack propagation per load cycle, μ, is then given by:
μ = ΔS = da/dN
wherein, Δ S is the fatigue strip interval;
from the Paris formula:
da/dN = C(ΔK)^m, with ΔK = YΔσ√(πa)
the relation between the fatigue stress amplitude and the fatigue strip interval is obtained, as shown in the following formula:
ΔS = C(YΔσ√(πa))^m
wherein C and m are metal material parameters, Y is a shape factor, and Δσ is the fatigue stress amplitude;
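The relation above can be sketched numerically as follows; the material constants C, m, the shape factor Y, and the crack length used here are hypothetical placeholder values chosen only for illustration, not data from the patent.

```python
import math

def striation_spacing(delta_sigma, a, C, m, Y):
    """Fatigue strip interval ΔS = da/dN from the Paris law,
    da/dN = C * (ΔK)^m with ΔK = Y * Δσ * sqrt(pi * a)."""
    delta_K = Y * delta_sigma * math.sqrt(math.pi * a)
    return C * delta_K ** m

# Hypothetical, roughly steel-like orders of magnitude:
# Δσ in MPa, a in mm, C in (mm/cycle)/(MPa·sqrt(mm))^m.
spacing = striation_spacing(delta_sigma=120.0, a=2.0, C=1e-11, m=3.0, Y=1.12)
print(f"strip interval ≈ {spacing:.3e} mm/cycle")
```

As the text notes, sweeping Δσ over a range of set values yields the corresponding strip intervals used later for trajectory generation.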
(2) calculating the arc direction of the fatigue strip by a square difference accumulation method;
obtaining the arc direction of the fatigue strip according to the minimum value or the maximum value of the image gray difference of the fatigue strip in each direction;
setting the image gray differences of the fatigue strip along the four directions 0°, 45°, 90° and 135° as d0°, d45°, d90°, d135°, where:
d0°(x,y) = [I(x-1,y) − I(x+1,y)]²
d45°(x,y) = [I(x-1,y+1) − I(x+1,y-1)]² × 0.5
d90°(x,y) = [I(x,y-1) − I(x,y+1)]²
d135°(x,y) = [I(x-1,y-1) − I(x+1,y+1)]² × 0.5
wherein I(x, y) is the image gray level at point (x, y) of the fatigue strip image;
further, from the image gray differences of the fatigue strip along the four directions 0°, 45°, 90° and 135°, the accumulated mean square gray difference along each direction is computed; the direction with the extreme accumulated value gives the arc direction of the fatigue strip;
step 2: simulating the extension track of the fatigue strip by adopting an improved RRT algorithm, and obtaining a texture curve image of the fatigue strip by combining a mathematical model of the fracture fatigue strip;
step 2.1: defining the starting point, the end point and the number of sampling points of the fatigue strip arc, and setting the step length between each sampling point as t; defining the starting point and the end point of the fatigue strip arc line as a root node and a target node of a random tree, and defining obstacle position coordinates;
step 2.2: the distance and radian of the fatigue strips calculated by the mathematical model of the fatigue strips are used as constraints and set as obstacles of the RRT algorithm, and the extension direction of the next random point of the RRT algorithm is adjusted;
step 2.3: taking the generated new random point as a parent point, inputting a constraint parameter, and generating a downward sub-branch;
step 2.4: adding all new random points into a random tree set, and connecting all subtree branches into a trajectory line to obtain a final fatigue stripe texture curve;
step 3: acquiring a fracture image with a scanning electron microscope, marking the strips of the fracture image with a labeling tool to obtain labeled data samples, establishing a fatigue strip sample data set by combining the fatigue strip texture curve images obtained with the improved RRT algorithm, and enhancing the data set;
step 4: carrying out denoising and geometric-transformation preprocessing on the fatigue strip sample data set images;
step 4.1: removing random noise of the images in the data set by using Gaussian filtering to remove noise, and reducing noise interference in the images;
step 4.2: geometric transformation of the image is carried out by using a nearest neighbor interpolation method, and self errors of the instrument and random errors of the instrument position during fracture image acquisition are corrected;
step 5: building a fatigue strip recognition and segmentation model based on a neural network, and performing model training, parameter adjustment and optimization with the labeled fatigue strip sample data;
the fatigue strip recognition model adopts a U-net network structure. The overall U-net structure is an encoder-decoder comprising a down-sampling module, an up-sampling module, and skip links, enabling end-to-end image segmentation. The down-sampling module comprises four double-convolution sub-blocks; incoming data passes through each double-convolution sub-block and is then down-sampled to extract features, the goal being to extract a feature map by progressively compressing the input image. The up-sampling module is structurally symmetric to the down-sampling module: it also comprises four double-convolution sub-blocks, with a deconvolution added after each to restore the feature map size. At each stage, the skip-link part fuses the features obtained after convolution, just before down-sampling, with the features obtained by deconvolution in the up-sampling module, concatenating them along the channel dimension;
the U-net structure forms a contracting path from the multi-layer double-convolution structure and repeated down-sampling, continuously extracting features and expanding the feature channels to obtain the feature map; after the feature map is obtained, the multi-layer double-convolution structure is paired with repeated up-sampling to restore the feature map size;
the double-convolution structure consists of two convolution kernels, each followed by a linear rectification (ReLU) function; a max-pooling layer is added after the double-convolution structure for down-sampling, preserving feature-extraction accuracy; the feature map is up-sampled by deconvolution to enlarge the image information;
the U-net structure adds skip connections between the first three down-sampling layers and the corresponding up-sampling layers, combining the deep information of the down-sampling path with the shallow information of the up-sampling path, compensating for missing image information and restoring image pixel detail;
finally, a softmax function is added to the U-net structure to classify the features during the recognition process;
the fatigue strip identification model selects a cross entropy function as a loss function;
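The encoder-decoder arithmetic described above can be sketched as follows. This assumes "same"-padded 3×3 convolutions, so only pooling and deconvolution change the spatial size, and a hypothetical 256-pixel input; neither detail is specified by the patent.

```python
# Shape-arithmetic sketch (not the patent's code) of the U-net above:
# four double-conv blocks with 2x2 max-pooling on the way down, four
# deconvolutions with channel-wise skip concatenation on the way up.

def unet_shapes(size=256, base_channels=64, depth=4):
    encoder = []           # (spatial size, channels) after each double conv
    s, ch = size, base_channels
    for _ in range(depth):
        encoder.append((s, ch))
        s //= 2            # 2x2 max pool halves the feature map
        ch *= 2            # next double conv doubles the channel count
    bottleneck = (s, ch)
    decoder = []
    for spatial, channels in reversed(encoder):
        s *= 2             # deconvolution restores the spatial size;
        # the skip link concatenates the matching encoder features, then
        # the double conv reduces the channels back down
        decoder.append((s, channels))
    return encoder, bottleneck, decoder

enc, mid, dec = unet_shapes()
print(enc)  # [(256, 64), (128, 128), (64, 256), (32, 512)]
print(mid)  # (16, 1024)
print(dec)  # [(32, 512), (64, 256), (128, 128), (256, 64)]
```

The decoder ends at the input resolution, which is what makes the end-to-end, per-pixel softmax classification possible.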
step 6: inputting the fatigue stripe image to be identified and segmented into the trained fatigue stripe identification model, and performing stripe identification and segmentation on the fracture fatigue stripe image to finally obtain an end-to-end semantic segmentation result.
The beneficial effects of the above technical scheme are as follows: in the deep-learning-based metal material fracture fatigue strip identification and segmentation method, the data set is enhanced through the improved RRT algorithm, which effectively increases the number of samples and alleviates the problem that too little network-learnable data in the segmentation task leads to an unsatisfactory learning effect. With the network model for metal fracture fatigue strip feature segmentation, fatigue strip feature regions can be segmented from complex fatigue fractures with high accuracy.
Drawings
Fig. 1 is a flowchart of a metal material fracture fatigue strip identification and segmentation method based on deep learning according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the improved RRT algorithm for generating a strip trace according to an embodiment of the present invention;
FIG. 3 is a real fracture image collected by a scanning electron microscope and a camera according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a U-net network structure according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a dual convolution structure according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a deconvolution method provided by an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
In this embodiment, the method for identifying and segmenting the metal material fracture fatigue strip based on deep learning, as shown in fig. 1, includes the following steps:
step 1: establishing a fracture fatigue strip mathematical model according to a fracture fatigue mechanism of a metal material;
(1) determining the relation between the fatigue stress amplitude and the fatigue strip interval according to the morphology mechanism of the fatigue strip;
let a be the fatigue crack propagation length after N stress cycles; the fatigue crack propagation per load cycle, μ, is then given by:
μ = ΔS = da/dN
wherein, Δ S is the fatigue strip interval;
from the Paris formula:
da/dN = C(ΔK)^m, with ΔK = YΔσ√(πa)
the relation between the fatigue stress amplitude and the fatigue strip interval is obtained, as shown in the following formula:
ΔS = C(YΔσ√(πa))^m
wherein C and m are metal material parameters, Y is a shape factor, and Δσ is the fatigue stress amplitude;
according to this formula, setting different stress amplitudes yields the corresponding fatigue strip intervals, and as the set stress amplitude changes, the corresponding strip interval can be used in the subsequent trajectory-generation step.
(2) Calculating the arc direction of the fatigue strip (i.e., its approximate direction) by the squared-difference accumulation method;
the basic idea is as follows: in the fatigue strip image, the image data vary least along the tangential direction of the strip and most along its normal direction. The arc direction of the fatigue strip is therefore obtained from the minimum or maximum of the image gray differences along each direction;
setting the image gray differences of the fatigue strip along the four directions 0°, 45°, 90° and 135° as d0°, d45°, d90°, d135°, where:
d0°(x,y) = [I(x-1,y) − I(x+1,y)]²
d45°(x,y) = [I(x-1,y+1) − I(x+1,y-1)]² × 0.5
d90°(x,y) = [I(x,y-1) − I(x,y+1)]²
d135°(x,y) = [I(x-1,y-1) − I(x+1,y+1)]² × 0.5
wherein I(x, y) is the image gray level at point (x, y) of the fatigue strip image;
further, from the image gray differences of the fatigue strip along the four directions 0°, 45°, 90° and 135°, the accumulated mean square gray difference along each direction is computed; the direction with the extreme accumulated value gives the arc direction of the fatigue strip;
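The squared-difference accumulation above can be rendered in a few lines of standard-library Python; the synthetic test image below is an assumption for demonstration (gray varies only with the row index, so the tangential 0° direction has zero difference).

```python
# Illustrative accumulation of the four directional squared gray
# differences; the direction of minimum accumulated difference
# approximates the strip's tangential (arc) direction.

def direction_scores(img):
    h, w = len(img), len(img[0])
    # two neighbor offsets (dx, dy) and a weight per direction,
    # matching d0, d45, d90, d135 in the text
    offsets = {
        0:   ((-1, 0), (1, 0), 1.0),
        45:  ((-1, 1), (1, -1), 0.5),
        90:  ((0, -1), (0, 1), 1.0),
        135: ((-1, -1), (1, 1), 0.5),
    }
    scores = {}
    for angle, ((dx1, dy1), (dx2, dy2), wgt) in offsets.items():
        total = 0.0
        for y in range(1, h - 1):          # interior pixels only
            for x in range(1, w - 1):
                d = img[y + dy1][x + dx1] - img[y + dy2][x + dx2]
                total += wgt * d * d
        scores[angle] = total
    return scores

stripes = [[y * 10] * 5 for y in range(5)]  # gray varies only with rows
scores = direction_scores(stripes)
print(min(scores, key=scores.get))  # 0 (degrees): horizontal strips
```

On this image the 0° difference is zero while the 90° (normal) difference is largest, which is exactly the min/max criterion the text uses.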
step 2: simulating the extension trajectory of the fatigue strip with an improved RRT algorithm, and obtaining the fatigue strip texture curve image by combining the mathematical model of the fracture fatigue strip;
the RRT is a traditional path planning mode, has a simple principle, can be used regardless of the degree of freedom and the complexity of constraint, and is one of the main reasons for the popularity of the RRT in the field of robots. The basic steps of RRT are:
1. the starting point is used as a seed, and branches grow from the beginning;
2. generating a random point in space;
3. finding the point closest to the tree and marking as new;
4. growing towards new, adding the grown branches and end points to the tree if no obstacle is encountered.
Random points are generally uniformly distributed, so when there are no obstacles the branches grow roughly uniformly in all directions and the space is explored rapidly. If information about the region most likely to contain the path is available, sampling can instead be concentrated in that region, in which case the distribution is no longer uniform.
Given this series of desirable properties, the RRT algorithm can be improved and used for the corresponding data set enhancement. Because RRT planning is effective but random, directional guidance must be supplied during tree growth so that the tree proceeds quickly and accurately from the start point to the end point and the algorithm converges. Improvements to RRT should jointly consider three aspects: the random sampling scheme, the definition of the nearest point, and the tree expansion scheme. An improved RRT algorithm is proposed here. Establishing the fatigue strip mathematical model addresses these issues: once the parameters of the model are obtained, they can be set as the constraints and obstacles of the strip-extension process, i.e., of the RRT advance, and the path extension direction can be derived from the constrained positions. A corresponding random point is then determined according to the step length set by the algorithm, and new branches are generated continuously. By imposing the constraints over many steps, a series of random points conforming to the fatigue strip mechanism is obtained, and finally the whole fatigue strip path, i.e., the fatigue strip texture curve image. The specific implementation process is as follows:
step 2.1: initializing a RRT algorithm working environment, defining a starting point, an end point and a sampling point number of a fatigue strip arc, and setting a step length between each sampling point as t; defining the starting point and the end point of the fatigue strip arc line as a root node and a target node of a random tree, and defining obstacle position coordinates;
step 2.2: the distance and radian of the fatigue strips calculated by the mathematical model of the fatigue strips are used as constraints and set as obstacles of the RRT algorithm, and the extension direction of the next random point of the RRT algorithm is adjusted;
step 2.3: taking the generated new random point as a parent point, inputting a constraint parameter, and generating a downward sub-branch;
step 2.4: adding all new random points into a random tree set, and connecting all subtree branches into a trajectory line to obtain a final fatigue stripe texture curve; the fatigue stripe texture curve obtained by improving the RRT algorithm in the embodiment is shown in FIG. 2.
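Steps 2.1-2.4 can be sketched as follows; this is a much-simplified, non-authoritative illustration in which a straight corridor stands in for the patent's strip-spacing and arc constraints, and the step length, corridor half-width, and perturbation range are made-up parameters.

```python
import math, random

def y_on_line(p, q, x):
    """y-coordinate of the straight start-goal line at abscissa x
    (assumes the goal lies to the right of the start)."""
    (x0, y0), (x1, y1) = p, q
    t = (x - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

def grow_striation_path(start, goal, step=1.0, corridor=0.5, seed=0):
    """Grow a tree branch from start toward goal in fixed steps,
    rejecting random samples that violate the corridor constraint."""
    rng = random.Random(seed)
    tree = [start]
    x, y = start
    gx, gy = goal
    while math.hypot(gx - x, gy - y) > step:
        # heading toward the goal, perturbed like an RRT random sample
        heading = math.atan2(gy - y, gx - x) + rng.uniform(-0.5, 0.5)
        nx, ny = x + step * math.cos(heading), y + step * math.sin(heading)
        # "obstacle" check: stay inside the corridor, standing in for the
        # spacing/arc constraints from the fatigue strip model
        if abs(ny - y_on_line(start, goal, nx)) > corridor:
            continue  # rejected sample, draw again
        x, y = nx, ny
        tree.append((x, y))
    tree.append(goal)
    return tree

path = grow_striation_path((0.0, 0.0), (10.0, 0.0))
print(len(path), path[-1])
```

Connecting the accepted points in order corresponds to step 2.4's joining of the sub-branches into one trajectory line.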
step 3: acquiring a fracture image with a scanning electron microscope, marking the strips of the fracture image with a labeling tool to obtain labeled data samples, establishing a fatigue strip sample data set by combining the fatigue strip curve images obtained with the improved RRT algorithm, and enhancing the data set;
in this embodiment, the specific method for acquiring fracture images with a scanning electron microscope is as follows: prepare a suitable fatigue tensile specimen, perform a tensile test on it with a fatigue testing machine to obtain a fractured specimen, and then acquire fracture images with the scanning electron microscope;
the data set enhancement also needs the support of fracture pictures, so that the real sample and the artificial data sample are combined to form a data set, the effect is better when the neural network is trained, and the fracture pictures are necessary to be acquired. The scanning electron microscope for collecting the fracture pictures in the embodiment is very suitable for the research on the fractures because the scanning electron microscope has the characteristics of high resolution, large depth of field, capability of directly observing a sample and the like. The scanning electron microscope has the other important characteristic that the change range of the magnification factor is wide, and fracture pictures with different magnification factors can be collected. The picture of fatigue fracture obtained in this example is shown in fig. 3.
For the preparation of fatigue fracture specimens, the international three-point bending specimen preparation method is generally chosen. When the fracture forms after the test piece breaks, any lubricating oil left on the fracture surface, as well as oxide layers and corrosion products, will interfere with the image, so treatment is necessary. To reduce the influence of lubricating oil, the specimen can be ultrasonically cleaned in an organic solvent and then repeatedly stripped clean. If an oxide layer that is difficult to remove is encountered, a chemical rust-removal treatment can be applied. After the fracture specimen is prepared, it is scanned with the scanning electron microscope and images are collected to obtain the fracture pictures.
Since neural network training needs prominent features to achieve a good training effect, the fracture images must be preprocessed by labeling, delineating the strip morphology in the fracture through annotation. In this embodiment, VIA is used to label the images: the fracture images to be labeled are imported, the strip regions are annotated on the basis of an understanding of the fatigue strip mechanism, and a series of polylines along the strip edge regions marks the strip features apart from the other features, yielding accurately labeled fracture images. The labeled fracture images are then merged with the fatigue strip curve images obtained by the improved RRT method, and the fusion of the two produces the enhanced data set.
Step 4: carrying out denoising and geometric-transformation preprocessing on the pictures of the fatigue stripe sample data set;
step 4.1: remove random noise from the images in the data set with Gaussian filtering, reducing noise interference and obtaining sample images with high recognizability. During acquisition of the fatigue fracture images, noise interference is unavoidable, so the fracture image is preprocessed first, mainly by using the Gaussian filtering method to eliminate random noise and make the image easier to recognize.
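As a concrete illustration of step 4.1, the following sketch applies a Gaussian filter with plain NumPy; the kernel size and σ are illustrative choices, not values taken from the embodiment:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_denoise(img, size=5, sigma=1.0):
    """Suppress random noise by convolving the image with a Gaussian kernel
    (border pixels are handled with edge padding)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out

rng = np.random.default_rng(0)
clean = np.full((32, 32), 128.0)                 # a flat gray patch
noisy = clean + rng.normal(0, 20, clean.shape)   # add random noise
denoised = gaussian_denoise(noisy)
print(denoised.std() < noisy.std())              # True: noise is reduced
```

In practice a library routine (for example `scipy.ndimage.gaussian_filter` or OpenCV's `GaussianBlur`) would replace the explicit loops.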
Step 4.2: geometric transformation of an image, also called image space transformation, processes the acquired image by translation, transposition, mirroring, rotation, scaling and similar operations. Here the geometric transformation is performed with the nearest-neighbor interpolation method, correcting the instrument's own errors and the random errors of its position (imaging angle, perspective relation, even the lens itself) during fracture image acquisition;
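A minimal nearest-neighbor interpolation sketch, illustrating the resampling idea behind step 4.2 (the embodiment's actual transform pipeline is not specified, so scaling stands in for the general case):

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Scale an image with nearest-neighbor interpolation: each output
    pixel takes the value of the closest source pixel."""
    h, w = img.shape
    rows = np.minimum((np.arange(out_h) * h / out_h).astype(int), h - 1)
    cols = np.minimum((np.arange(out_w) * w / out_w).astype(int), w - 1)
    return img[rows[:, None], cols[None, :]]

img = np.arange(16, dtype=float).reshape(4, 4)
up = resize_nearest(img, 8, 8)       # 2x upscaling
print(up.shape)                      # (8, 8)
```

Rotation and mirroring follow the same scheme: compute the source coordinate for each output pixel and round it to the nearest integer.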
Step 5: building a fatigue strip recognition and segmentation model based on a neural network, and performing model training, parameter adjustment and optimization with the labeled fatigue strip sample data;
the fatigue strip recognition model adopts a U-net network structure. The overall U-net structure is an encoder-decoder comprising a down-sampling module, an up-sampling module and skip connections, enabling end-to-end image segmentation. The down-sampling module comprises four double-convolution sub-blocks; the incoming data is down-sampled after each sub-block to extract features, the aim being to extract a feature map by continuously compressing the input image. The up-sampling module is structurally symmetric to the down-sampling module and also comprises four double-convolution sub-blocks, with a deconvolution added after each sub-block to restore the size of the feature map; the purpose of the whole up-sampling module is to restore the feature map to the size of the original image. The skip connections fuse the features obtained after each convolution, just before down-sampling, with the features obtained by deconvolution in the up-sampling module, concatenating them along the channel dimension so that global and local features are better combined and segmentation accuracy is improved;
the U-net network is a modification and extension of the fully convolutional network (FCN). Compared with the FCN, the U-net improves on the conventional expanding network; its specific structure is shown in FIG. 4. Pooling operations are replaced by up-sampling operations, raising the resolution of the output. To better compensate for missing image information, the deep features obtained from the contracting path are combined with shallow features during up-sampling, so that the successive convolutional layers produce more accurate output. The U-net network has a large number of feature channels, which allow it to propagate context information to the high-resolution layers.
The U-net network structure forms the contracting path with a multilayer double-convolution structure and repeated down-sampling, continuously down-sampling to extract features, expanding the feature channels and obtaining the feature map. After the feature map is obtained, it is up-sampled several times through the multilayer double-convolution structure, restoring the size of the feature map so that the final output image has the same size as the original input. The double-convolution structure, shown in fig. 5, consists of two convolution kernels each followed by a linear rectification function; a max-pooling layer is added after it for down-sampling, ensuring the accuracy of feature extraction. The feature map is up-sampled by deconvolution to enlarge the image information;
in this embodiment, the contracting path contains two 3 × 3 convolutions, each followed by a ReLU; a max-pooling layer with stride 2 is used for down-sampling, and each down-sampling step doubles the number of feature channels. The expanding path consists of several up-sampling steps (2 × 2 deconvolutions), illustrated schematically in fig. 6, whose main role is to halve the number of feature channels while restoring the size of the image. At the output, a 1 × 1 convolution maps the 64-component feature vector to the class labels.
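The size bookkeeping of the contracting path can be traced in a few lines. This sketch assumes the unpadded 3 × 3 convolutions and 572-pixel input of the original U-net paper, which may differ from the embodiment:

```python
def unet_shape_trace(size=572, channels=64, depth=4):
    """Trace feature-map size and channel count along the U-net contracting
    path: each unpadded 3x3 convolution trims 2 pixels, each 2x2 max pooling
    halves the spatial size, and the channel count doubles per level."""
    trace = []
    for level in range(depth + 1):
        size = size - 2 - 2            # two unpadded 3x3 convolutions
        trace.append((size, channels))
        if level < depth:              # pool, then double the channels
            size //= 2
            channels *= 2
    return trace

print(unet_shape_trace())
# [(568, 64), (280, 128), (136, 256), (64, 512), (28, 1024)]
```

The expanding path mirrors this trace in reverse, halving the channel count at each deconvolution.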
The U-net network structure adds skip connections linking the feature maps of the first three down-sampling layers with the corresponding up-sampling layers, combining the deep information from down-sampling with the shallow information from up-sampling, compensating for missing image information and restoring image pixel information;
finally, in the feature-identification stage, a softmax function is added to the U-net network structure to perform feature classification;
the softmax function formula is as follows:
p_k(x) = exp(a_k(x)) / Σ_{k'=1}^{K} exp(a_{k'}(x))
wherein a_k(x) denotes the score of feature channel k at pixel x, K is the number of classes, and p_k(x) is the classification result (probability) of pixel x for class k.
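A per-pixel softmax matching this formula can be sketched in NumPy; the shapes and the class count K = 3 are illustrative:

```python
import numpy as np

def pixel_softmax(a):
    """Per-pixel softmax: a has shape (K, H, W), a[k] holding the score of
    feature channel k; returns p with p[k, i, j] = exp(a_k) / sum_k' exp(a_k')."""
    a = a - a.max(axis=0, keepdims=True)   # subtract max for numerical stability
    e = np.exp(a)
    return e / e.sum(axis=0, keepdims=True)

scores = np.random.default_rng(1).normal(size=(3, 4, 4))  # K = 3 classes
p = pixel_softmax(scores)
print(np.allclose(p.sum(axis=0), 1.0))   # True: probabilities sum to 1 per pixel
```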
The fatigue strip recognition model selects a cross entropy function as a loss function, and the definition formula is as follows:
E = Σ_{x∈Ω} ω(x) log(p_{l(x)}(x))
where E represents the loss function, ω(x) represents the weight map (empirically introduced weight scores), l(x) represents the true label of pixel x in the image, Ω is the set of pixel positions of the entire image, and p_{l(x)}(x) is the classification result of pixel x for its true class. The final aim is to maximize the segmentation precision while controlling the probability of the other class features, i.e. to minimize the loss;
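The weighted cross entropy can be sketched as follows. Note that as written above E omits the conventional minus sign, so the sketch returns −E as the quantity to minimize; shapes and values are illustrative:

```python
import numpy as np

def weighted_cross_entropy(p, labels, w):
    """-E = -sum_x w(x) * log(p_{l(x)}(x)); p: (K, H, W) softmax output,
    labels: (H, W) true class per pixel, w: (H, W) weight map. Minimizing
    this maximizes the predicted probability of each pixel's true class."""
    K, H, W = p.shape
    rows, cols = np.indices((H, W))
    true_p = p[labels, rows, cols]             # p_{l(x)}(x) at every pixel
    return -np.sum(w * np.log(true_p + 1e-12))

p = np.full((2, 2, 2), 0.5)                    # uniform prediction, K = 2
labels = np.zeros((2, 2), dtype=int)           # all pixels belong to class 0
loss = weighted_cross_entropy(p, labels, np.ones((2, 2)))
print(round(float(loss), 4))                   # 2.7726, i.e. 4 * -log(0.5)
```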
in this embodiment, the weight map of each ground-truth sample is pre-computed to compensate for the different frequencies of pixels of each class in the training data set, guiding the network to learn the separation boundaries introduced between stripe regions.
ω(x) = ω_c(x) + ω_0 · exp(−(d_1(x) + d_2(x))² / (2σ²))
Where ω(x) represents the weight map mentioned above, ω_c(x) is the weight that balances the class proportions, ω_0 represents the initial empirical weight, d_1(x) is the distance from a pixel to its nearest stripe, d_2(x) is its distance to the second-nearest stripe, and σ² denotes the variance of the two variables d_1 and d_2.
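The weight map follows directly from the formula. The defaults ω_0 = 10 and σ = 5 below come from the original U-net paper and are used only as illustrative values, and the distance maps are hand-made toy inputs:

```python
import numpy as np

def weight_map(wc, d1, d2, w0=10.0, sigma=5.0):
    """w(x) = wc(x) + w0 * exp(-(d1(x) + d2(x))^2 / (2 * sigma^2)).
    Pixels squeezed between two nearby stripes receive the largest weight."""
    return wc + w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))

wc = np.ones((3, 3))                              # uniform class-balance term
d1 = np.array([[0., 1, 2], [0, 1, 2], [0, 1, 2]])  # distance to nearest stripe
d2 = d1 + 1.0                                      # distance to second-nearest
w = weight_map(wc, d1, d2)
print(w[0, 0] > w[0, 2])   # True: pixels close to two stripes weigh more
```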
Weight initialization is important. Here the weights are initialized from a Gaussian distribution with a standard deviation of
√(2/N)
Where N denotes the number of input nodes of a neuron; for example, for a 3 × 3 convolution whose previous layer has 64 feature channels, N = 3 × 3 × 64 = 576.
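This initialization (a Gaussian with standard deviation √(2/N)) can be sketched as follows; the kernel shape is the 3 × 3, 64-channel example just given:

```python
import numpy as np

def gaussian_init(shape, rng):
    """Initialize a conv kernel from a Gaussian with std sqrt(2/N), where
    N = kernel_h * kernel_w * in_channels is the number of input nodes."""
    n = shape[0] * shape[1] * shape[2]        # e.g. 3 * 3 * 64 = 576
    return rng.normal(0.0, np.sqrt(2.0 / n), size=shape)

rng = np.random.default_rng(42)
w = gaussian_init((3, 3, 64, 64), rng)        # 3x3 conv, 64 -> 64 channels
print(round(float(w.std()), 3))               # close to sqrt(2/576) ≈ 0.059
```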
Among the many available networks, U-net is comparatively simple, easy to apply and improve in practical use, leaves considerable room for improvement, and its framework can be adapted to different requirements, so it is highly practical for image segmentation.
Step 6: inputting the fatigue stripe image to be identified and segmented into the trained fatigue stripe identification model, and performing stripe identification and segmentation on the fracture fatigue stripe image to finally obtain an end-to-end semantic segmentation result. The end-to-end semantic segmentation result is obtained, namely, the network input is a complete picture, and the output is also a complete visual picture instead of a feature map, namely, the input and the output are consistent.
After the fatigue strip recognition model is trained, the U-net network can recognize the stripes from the learned features. Specifically, some fracture pictures are selected as a test set and input into the neural network for segmentation; once the computer completes the convergence of the loss function, the U-net network can accurately identify the stripe features and segment the stripe regions.
U-net is suited to semantic segmentation tasks. Standard semantic segmentation, also known as full-pixel semantic segmentation, is the process of classifying each pixel as belonging to an object class. In the fracture image, pixel-level classification distinguishes stripe features from shapes such as holes, dimples and tearing edges, yielding a corresponding division of the different types of morphology.
Semantic segmentation is well suited to extracting target features from complex morphology. Specifically, it is analogous to processing classification labels: pixel-level one-hot encoding is applied to the predicted classes, i.e. one output channel is created for each class.
In summary, the task of semantic segmentation is to process an input image through a deep learning algorithm to obtain an output image with the same size and semantic labels.
Assuming that the input image size is W × W, the convolution kernel size is F × F, the stride is S, and P layers of padding are added around the convolutional layer, the image size N × N output after the convolutional layer is given by the following formula:
N = (W − F + 2P)/S + 1
for the pooling operation, let the input image size be W × H, the kernel size be F × F, and S be the stride; the output image size after pooling is:
W' = (W − F)/S + 1, H' = (H − F)/S + 1
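Both size formulas are one-liners; the sketch below checks them against the classic U-net numbers (572-pixel input, unpadded 3 × 3 convolution, 2 × 2 pooling), which are used purely as illustrative inputs:

```python
def conv_out_size(w, f, s, p):
    """Output size of a convolutional layer: N = (W - F + 2P) / S + 1."""
    return (w - f + 2 * p) // s + 1

def pool_out_size(w, f, s):
    """Output size of a pooling layer: (W - F) / S + 1."""
    return (w - f) // s + 1

print(conv_out_size(572, 3, 1, 0))   # 570: an unpadded 3x3 conv trims 2 pixels
print(pool_out_size(568, 2, 2))      # 284: 2x2 pooling with stride 2 halves the size
```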
finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit of the corresponding technical solutions and scope of the present invention as defined in the appended claims.

Claims (8)

1. A metal material fracture fatigue strip identification and segmentation method based on deep learning, characterized by comprising the following steps:
step 1: establishing a fracture fatigue strip mathematical model according to the fracture fatigue mechanism of the metal material;
step 2: simulating the extension track of the fatigue strip and, combining the mathematical model of the fracture fatigue strip, obtaining a texture curve image of the fatigue strip;
step 3: acquiring fracture images with a scanning electron microscope, marking the strips of the fracture images with a labeling tool to obtain labeled data samples, and establishing a fatigue strip sample data set enhanced by combining it with the fatigue strip texture curve images obtained in step 2;
step 4: carrying out denoising and geometric-transformation preprocessing on the pictures of the fatigue stripe sample data set;
step 5: building a fatigue strip recognition and segmentation model based on a neural network, and performing model training, parameter adjustment and optimization with the labeled fatigue strip sample data;
step 6: inputting the fatigue stripe image to be identified and segmented into the trained fatigue stripe recognition model, and performing stripe identification and segmentation on the fracture fatigue stripe image to finally obtain an end-to-end semantic segmentation result.
2. The metal material fracture fatigue strip identification and segmentation method based on deep learning of claim 1, wherein: the specific method of the step 1 comprises the following steps:
(1) determining the relation between the fatigue stress amplitude and the fatigue strip interval according to the morphology mechanism of the fatigue strip;
when the number of stress cycles is N and the fatigue crack propagation length is a, the fatigue crack propagation amount μ per load cycle is represented by the following formula:
μ=ΔS=da/dN
wherein Δ S is the fatigue strip spacing;
from Paris formula:
da/dN = C(ΔK)^m, where ΔK = Y·Δσ·√(πa) is the stress intensity factor range;
obtaining the relation between the fatigue stress amplitude and the fatigue strip interval, wherein the relation is shown in the following formula:
ΔS = da/dN = C(Y·Δσ·√(πa))^m
wherein C and m are metal material parameters, Y is the shape factor, and Δσ is the fatigue stress amplitude;
(2) calculating the arc direction of the fatigue strip by a square difference accumulation method;
obtaining the arc direction of the fatigue strip according to the minimum value or the maximum value of the image gray difference of the fatigue strip in each direction;
setting the image gray differences of the fatigue strip along the four directions 0°, 45°, 90° and 135° as d_0°, d_45°, d_90° and d_135°, wherein:
d_0°(x,y) = [I(x-1,y) - I(x+1,y)]²
d_45°(x,y) = [I(x-1,y+1) - I(x+1,y-1)]² × 0.5
d_90°(x,y) = [I(x,y-1) - I(x,y+1)]²
d_135°(x,y) = [I(x-1,y-1) - I(x+1,y+1)]² × 0.5
wherein, I (x, y) is the image gray scale of the point (x, y) on the fatigue strip chart;
and further, from the image gray differences along the four directions 0°, 45°, 90° and 135°, calculating the mean gray difference of the fatigue strip along each direction; the direction whose mean gray difference is extremal among the four is taken as the arc direction of the fatigue strip.
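The mathematical model of this claim can be sketched numerically. The material constants C, m and Y below are illustrative placeholders (not real material data), and the arc-direction routine picks the direction with the minimal accumulated gray difference, i.e. the along-stripe direction:

```python
import numpy as np

def stripe_spacing(delta_sigma, a, C=1e-12, m=3.0, Y=1.0):
    """Fatigue strip spacing dS = da/dN = C * (Y * delta_sigma * sqrt(pi*a))**m.
    C, m, Y are illustrative placeholders."""
    delta_K = Y * delta_sigma * np.sqrt(np.pi * a)  # stress intensity factor range
    return C * delta_K ** m

def stripe_direction(img):
    """Accumulate squared gray differences along 0/45/90/135 degrees; the
    direction with the smallest mean difference runs along the stripes."""
    I = img.astype(float)
    d0   = (I[1:-1, :-2] - I[1:-1, 2:]) ** 2
    d45  = (I[:-2, 2:]   - I[2:, :-2]) ** 2 * 0.5
    d90  = (I[:-2, 1:-1] - I[2:, 1:-1]) ** 2
    d135 = (I[:-2, :-2]  - I[2:, 2:]) ** 2 * 0.5
    means = [d.mean() for d in (d0, d45, d90, d135)]
    return [0, 45, 90, 135][int(np.argmin(means))]

s1 = stripe_spacing(delta_sigma=100.0, a=1e-3)
s2 = stripe_spacing(delta_sigma=200.0, a=1e-3)
print(s2 / s1)   # ≈ 8: with m = 3, doubling the amplitude scales spacing by 2**3

horizontal = np.tile(np.arange(16)[:, None], (1, 16)) * 10.0
print(stripe_direction(horizontal))   # 0: gray level is constant along each row
```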
3. The metal material fracture fatigue strip identification and segmentation method based on deep learning of claim 2, wherein: in the step 2, an improved-RRT algorithm is adopted to simulate the extension track of the fatigue strip, and a mathematical model of the fracture fatigue strip is combined to obtain a texture curve image of the fatigue strip, wherein the method specifically comprises the following steps:
step 2.1: defining the starting point, the end point and the number of sampling points of the fatigue strip arc, and setting the step length between each sampling point as t; defining the starting point and the end point of the fatigue strip arc line as a root node and a target node of a random tree, and defining obstacle position coordinates;
step 2.2: the distance and radian of the fatigue strips calculated by the mathematical model of the fatigue strips are used as constraints and set as obstacles of the RRT algorithm, and the extension direction of the next random point of the RRT algorithm is adjusted;
step 2.3: taking the generated new random point as a parent point, inputting a constraint parameter, and generating a downward sub-branch;
step 2.4: and adding all new random points into the random tree set, and connecting all subtree branches into a trajectory line to obtain a final fatigue stripe texture curve.
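Steps 2.1 to 2.4 can be illustrated with a toy growth loop. This is a heavily simplified, deterministic stand-in for the improved RRT algorithm: a single circular obstacle stands in for the stripe-spacing and arc-direction constraints, and the branch steps from the root node toward the target node with a fixed step length, deflecting around the obstacle:

```python
import numpy as np

def grow_track(start, goal, obstacles, step=0.5, max_nodes=200):
    """Grow a branch from start toward goal in fixed steps, deflecting new
    points onto the boundary of any circular obstacle (centre, radius) they
    would enter; the connected points form the trajectory line."""
    pts = [np.asarray(start, dtype=float)]
    goal = np.asarray(goal, dtype=float)
    for _ in range(max_nodes):
        cur = pts[-1]
        if np.linalg.norm(goal - cur) <= step:
            pts.append(goal)                     # target node reached
            break
        d = (goal - cur) / np.linalg.norm(goal - cur)
        nxt = cur + step * d                     # step toward the target
        for centre, r in obstacles:              # constraint: stay outside
            off = nxt - centre
            dist = np.linalg.norm(off)
            if dist < r:
                nxt = centre + off / (dist + 1e-9) * r
        pts.append(nxt)
    return np.array(pts)

start, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
track = grow_track(start, goal, obstacles=[(np.array([5.0, 0.3]), 1.0)])
print(np.allclose(track[-1], goal))   # True: the branch reaches the target node
```

A real improved-RRT implementation would sample random points and maintain the full random tree set; this sketch only shows the step-length/obstacle mechanics of one branch.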
4. The metal material fracture fatigue strip identification and segmentation method based on deep learning of claim 3, wherein: the specific method of the step 4 comprises the following steps:
step 4.1: removing random noise of the images in the data set by using Gaussian filtering to remove noise, and reducing noise interference in the images;
step 4.2: and (3) performing geometric transformation on the image by using a nearest neighbor interpolation method, and correcting the self error of the instrument and the random error of the position of the instrument when the fracture image is acquired.
5. The metal material fracture fatigue strip identification and segmentation method based on deep learning of claim 4, wherein: in step 5, the fatigue strip recognition model adopts a U-net network structure; the overall U-net structure is an encoder-decoder comprising a down-sampling module, an up-sampling module and skip connections, enabling end-to-end image segmentation; the down-sampling module comprises four double-convolution sub-blocks, the incoming data is down-sampled after each sub-block to extract features, the aim being to extract a feature map by continuously compressing the input image; the up-sampling module is structurally symmetric to the down-sampling module and also comprises four double-convolution sub-blocks, with a deconvolution added after each sub-block to restore the size of the feature map; and the skip connections fuse the features obtained after each convolution, just before down-sampling, with the features obtained by deconvolution in the up-sampling module, concatenating them along the channel dimension.
6. The metal material fracture fatigue strip identification and segmentation method based on deep learning of claim 5, wherein: the U-net network structure forms the contracting path with a multilayer double-convolution structure and repeated down-sampling, continuously down-sampling to extract features, expanding the feature channels and obtaining the feature map; after the feature map is obtained, it is up-sampled several times through the multilayer double-convolution structure, restoring the size of the feature map;
the U-net network structure adds skip connections linking the feature maps of the first three down-sampling layers with the corresponding up-sampling layers, combining the deep information from down-sampling with the shallow information from up-sampling, compensating for missing image information and restoring image pixel information;
and finally, in the feature-identification stage, a softmax function is added to the U-net network structure to perform feature classification.
7. The metal material fracture fatigue strip identification and segmentation method based on deep learning of claim 6, wherein: the double convolution structure is formed by adding two convolution kernels and two linear rectification functions, and a maximum pooling layer is added after the double convolution structure for down-sampling, so that the accuracy of feature extraction is ensured; and (4) performing up-sampling on the characteristic graph by using a deconvolution method to enlarge image information.
8. The metal material fracture fatigue strip identification and segmentation method based on deep learning of claim 5, wherein: the fatigue strip recognition model selects a cross entropy function as a loss function.
CN202111392841.8A 2021-11-23 2021-11-23 Metal material fracture fatigue strip identification and segmentation method based on deep learning Pending CN113935989A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111392841.8A CN113935989A (en) 2021-11-23 2021-11-23 Metal material fracture fatigue strip identification and segmentation method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111392841.8A CN113935989A (en) 2021-11-23 2021-11-23 Metal material fracture fatigue strip identification and segmentation method based on deep learning

Publications (1)

Publication Number Publication Date
CN113935989A true CN113935989A (en) 2022-01-14

Family

ID=79287421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111392841.8A Pending CN113935989A (en) 2021-11-23 2021-11-23 Metal material fracture fatigue strip identification and segmentation method based on deep learning

Country Status (1)

Country Link
CN (1) CN113935989A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116152216A (en) * 2023-03-03 2023-05-23 北京理工大学 Preparation method and equipment for fatigue sample of protective material based on neural network
CN116152216B (en) * 2023-03-03 2023-08-25 北京理工大学 Preparation method and equipment for fatigue sample of protective material based on neural network
CN116403212A (en) * 2023-05-16 2023-07-07 西安石油大学 Method for identifying small particles in pixels of metallographic image based on improved U-net network
CN116403212B (en) * 2023-05-16 2024-02-02 西安石油大学 Method for identifying small particles in pixels of metallographic image based on improved U-net network

Similar Documents

Publication Publication Date Title
Zhang et al. Intelligent acoustic-based fault diagnosis of roller bearings using a deep graph convolutional network
CN109118479B (en) Capsule network-based insulator defect identification and positioning device and method
CN111307453B (en) Transmission system fault diagnosis method based on multi-information fusion
CN108230359B (en) Object detection method and apparatus, training method, electronic device, program, and medium
CN106940816B (en) CT image pulmonary nodule detection system based on 3D full convolution neural network
Yogesh et al. Computer vision based analysis and detection of defects in fruits causes due to nutrients deficiency
CN113935989A (en) Metal material fracture fatigue strip identification and segmentation method based on deep learning
CN106096547B (en) A kind of low-resolution face image feature super resolution ratio reconstruction method towards identification
WO2018180386A1 (en) Ultrasound imaging diagnosis assistance method and system
CN112837344B (en) Target tracking method for generating twin network based on condition countermeasure
CN113076920B (en) Intelligent fault diagnosis method based on asymmetric domain confrontation self-adaptive model
CN114841961B (en) Wheat scab detection method based on image enhancement and improved YOLOv5
CN113962951B (en) Training method and device for detecting segmentation model, and target detection method and device
CN112465820A (en) Semantic segmentation based rice disease detection method integrating global context information
CN116110042A (en) Tomato detection method based on CBAM attention mechanism of YOLOv7
CN115330697A (en) Tire flaw detection domain self-adaption method based on migratable Swin transducer
CN114372962A (en) Laparoscopic surgery stage identification method and system based on double-particle time convolution
CN113962905A (en) Single image rain removing method based on multi-stage feature complementary network
CN114821174B (en) Content perception-based transmission line aerial image data cleaning method
CN114612738B (en) Training method of cell electron microscope image segmentation model and organelle interaction analysis method
CN115909077A (en) Hyperspectral image change detection method based on unsupervised spectrum unmixing neural network
Lv Scale parameter recognition of blurred moving image based on edge combination algorithm
CN115145138B (en) Rapid processing method for sparse particle hologram
CN114359417B (en) Detection method for JPEG image compression quality factor
CN113470046B (en) Drawing meaning force network segmentation method for medical image super-pixel gray texture sampling characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination