CN116228641A - Micro fatigue crack length calculation method based on U-net network - Google Patents
Micro fatigue crack length calculation method based on U-net network

- Publication number: CN116228641A
- Application number: CN202211605743.2A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0004 — Industrial image inspection
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06T7/60 — Analysis of geometric attributes
- G06V10/26 — Segmentation of patterns in the image field
- G06V10/82 — Image or video recognition using neural networks
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30108 — Industrial image inspection
- G06T2207/30164 — Workpiece; Machine component
- Y02T90/00 — Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation
Description
Technical Field
The present invention relates to the technical field of crack detection, and in particular to a method for calculating the length of fretting fatigue cracks based on a U-net network.
Background Art
Under long-term, high-frequency fretting conditions, mechanical components are prone to localized fretting fatigue cracks. As the load and the number of cycles increase, the crack growth rate rises continuously, and early micro fatigue cracks develop into significant cracks that weaken the local stiffness of the component and may even induce local failure, threatening the safety of the overall mechanical structure. Monitoring the length of fretting fatigue cracks therefore helps quantify their specific impact on equipment performance, which is of great significance for predicting the safe service life of the mechanical structure as a whole.
In the prior art, the patent document with publication number CN111445446B discloses a concrete surface crack detection method based on an improved U-net; the neural network used by that method is computationally expensive and can only detect long cracks. The patent document with publication number CN113284107A discloses a real-time concrete crack detection method based on a U-net improved with an attention mechanism, but that detection model can only locate fine cracks and cannot obtain their length information.
In general, deep learning has already been applied to crack identification. However, fretting fatigue cracks are usually very small, typically between one hundred and several hundred microns; because both their length and width are extremely small, they cannot be clearly distinguished from the specimen surface, and existing deep-learning-based crack detection techniques cannot identify them effectively. A recognition method suited to micro fatigue cracks, such as those on engine blade tenon specimens, is therefore needed to enable early identification of fretting fatigue cracks.
Summary of the Invention
In view of the above problems, the object of the present invention is to provide a method for calculating the length of fretting fatigue cracks based on a U-net network, suitable for monitoring the fretting fatigue cracks produced in components such as engine blade tenon specimens under different numbers of fretting cycles.
The technical scheme of the present invention is as follows:
A method for calculating the length of fretting fatigue cracks based on a U-net network comprises the following steps:
S1. Train the optimized U-net network with original images of one class of fretting fatigue cracks to obtain a trained U-net network model, which specifically comprises the following steps:
S11. Acquire original images of one class of fretting fatigue cracks. Since the fretting cracks produced by different equipment and different motion modes are not the same, the acquired original images must all show fretting fatigue cracks obtained from the same kind of equipment under the same motion mode: for example, all fretting fatigue cracks produced in engine blade tenon specimens under cyclic stress, all fretting fatigue cracks produced in steel wires under cyclic stress, or all fatigue cracks produced in high-speed train axles under cyclic stress.
S12. Annotate the crack region in each original image pixel by pixel so that the crack region forms a closed figure. Various annotation tools are available for this, such as LabelMe, labelImg and roLabelImg.
S13. Use an image segmentation toolkit to batch-organize the annotated original images into a complete dataset and split it into the training, test and validation sets of the optimized U-net model; train the optimized U-net model to obtain the trained U-net network model. The toolkit for this step is not limited to PaddleSeg, nor is the dataset format limited to VOC.
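The dataset organization in step S13 can be sketched as a simple shuffle-and-split over annotated (image, label) path pairs. The file names, split ratios and the space-separated "image label" list-file layout below are illustrative assumptions (PaddleSeg accepts plain-text list files of this general shape, but consult its documentation for the exact format):

```python
import random

def split_dataset(pairs, train_frac=0.8, val_frac=0.1, seed=42):
    """Shuffle (image, label) path pairs and split into train/val/test lists."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    n = len(pairs)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (pairs[:n_train],
            pairs[n_train:n_train + n_val],
            pairs[n_train + n_val:])

def write_list_file(path, pairs):
    # One "image_path label_path" line per sample: the plain-text list
    # format segmentation toolkits such as PaddleSeg commonly consume.
    with open(path, "w") as f:
        for img, lbl in pairs:
            f.write(f"{img} {lbl}\n")

# Hypothetical file names for illustration:
pairs = [(f"images/crack_{i:03d}.png", f"labels/crack_{i:03d}.png")
         for i in range(100)]
train, val, test = split_dataset(pairs)
```

An 8:1:1 split is a common default; the seed keeps the split reproducible across runs.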
S2. Acquire fretting fatigue crack images of the same class as in step S1 as experimental images; use the trained U-net network model to segment the experimental images and identify the fretting fatigue cracks in them, obtaining segmentation result maps.
S3. Measure the length of the fretting fatigue crack in the segmentation result map. Many measurement approaches are possible; the present invention provides one specific measurement method comprising the following steps:
S31. Generate the medial axis of the fretting fatigue crack in the segmentation result map using a medial axis algorithm;
S32. Compute the length of the medial axis to obtain the length of the fretting fatigue crack: count the number of pixels on the crack's medial axis and, based on the resolution of the segmentation result map, convert the total pixel count into physical units to obtain the total length of the medial axis.
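A minimal sketch of step S32 on a toy binary mask, assuming the medial axis is already a one-pixel-wide set of nonzero pixels and the mm-per-pixel calibration is known (the 0.43 mm/px value is illustrative, not part of the method):

```python
import numpy as np

# Toy segmentation result: a one-pixel-wide medial axis in a binary mask.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2, 1:7] = 1          # a horizontal 6-pixel medial axis

# Step S32: count medial-axis pixels with a nonzero-element count ...
axis_pixels = int(np.count_nonzero(mask))

# ... then convert the pixel count to physical units.
mm_per_px = 0.43          # assumed calibration for illustration
length_mm = axis_pixels * mm_per_px
```

For a curved crack the same count-and-scale step applies; only the medial-axis extraction beforehand differs.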
In the U-net architecture, downsampling reduces the probability that a pixel is labeled correctly, and the upsampling path of U-Net has limited ability to recover feature information, so crack width information becomes indistinct and fine cracks go undetected. The present invention therefore proposes an optimized U-net network that adds and fuses a max pooling layer, an upsampling layer and a small-kernel depthwise separable convolution, constructing a new module that enlarges the resolution and extends the network depth to increase the receptive field, while multi-scale connections keep the receptive field and the resolution in balance. By maintaining the maximum resolution under the maximum receptive field, fine cracks can be detected. Specifically, the optimized U-net network is structured as follows.
The optimized U-net network comprises an upsampling module, a Concatenate operation, a downsampling module, and a layer fusion module located between the bottom of the upsampling module and the bottom of the downsampling module.
The layer fusion module comprises a first convolution-pooling layer group and a fusion layer connected in sequence. The first convolution-pooling layer group consists of one depthwise separable convolution module in parallel with four first convolution-pooling layers; each first convolution-pooling layer comprises a max pooling layer with stride 2, a 1×1 depthwise separable convolution module and a 2×2 upsampling layer. The feature map output by the downsampling module is fed into the first convolution-pooling layer group; the fusion layer fuses the input image features by element-wise addition, and the feature map it outputs is fed into the upsampling module. The network applies the nonlinear activation function ReLU to the output of every layer, which improves its nonlinear expressiveness and reduces gradient vanishing during convolution.
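The pooled branch of the layer fusion module can be sketched in plain NumPy. This is a reduced stand-in rather than the patented module: only one pooled branch is shown instead of four, and the depthwise separable convolution is reduced to its pointwise (1×1) half so that the additive fusion stays shape-compatible:

```python
import numpy as np

def max_pool2(x):
    """2x2 max pooling with stride 2 on an (H, W, C) array (H, W even)."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def upsample2(x):
    """2x nearest-neighbour upsampling on an (H, W, C) array."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def pointwise_conv(x, w):
    """1x1 convolution: mixes channels only, like the pointwise half of a
    depthwise separable convolution. w has shape (C_in, C_out)."""
    return x @ w

def fusion_module(x, w_branch, w_skip):
    """One pooled branch (maxpool stride 2 -> 1x1 conv -> 2x2 upsample)
    fused by element-wise addition with a parallel pointwise path,
    followed by ReLU, as the layer fusion module describes."""
    branch = upsample2(pointwise_conv(max_pool2(x), w_branch))
    skip = pointwise_conv(x, w_skip)
    return np.maximum(branch + skip, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16, 8))
y = fusion_module(x, rng.standard_normal((8, 8)), rng.standard_normal((8, 8)))
```

Because the 2× upsample exactly undoes the stride-2 pooling, the fused output keeps the input's spatial resolution, which is the balance of receptive field and resolution the module aims at.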
The feature vectors of the U-net network have a depth of 2048. The upsampling module and the downsampling module each comprise five second convolution layer groups connected in sequence, and each second convolution layer group comprises a residual block, an attention mechanism module and two transposed convolution layers connected in sequence.
The residual block comprises a depthwise separable convolution, a transposed convolution layer and a BN layer; the input of the residual block is fused with its convolved output in an addition layer via a skip connection.
The attention mechanism module comprises an encoding block and a decoding block: the encoding block comprises a max pooling layer and a convolution layer, and the decoding block comprises an upsampling layer and a convolution layer. The feature information output by the residual block is fed into the attention mechanism. A soft attention mechanism tends to be over-parameterized during training; here, the input of the attention mechanism module is fused via a skip connection with the output classified by the Sigmoid function inside the module, and the fused result serves as a single output. This eliminates the redundant parameter computation of the soft attention mechanism and optimizes the convolutional neural network.
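The Sigmoid gating with a skip connection described above can be sketched as follows; reducing the gating convolution to a pointwise channel mix and fusing by element-wise addition are simplifying assumptions about the module's exact layout:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, w_gate):
    """Sketch of the described attention module: a gating map in (0, 1)
    comes from a (here pointwise) convolution followed by Sigmoid, and
    the module input is fused with the gated features through a skip
    connection by element-wise addition, giving a single output."""
    gate = sigmoid(x @ w_gate)     # Sigmoid-classified attention map
    return x + gate * x            # skip connection fuses input and gated output

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 8, 4))
y = attention_gate(x, rng.standard_normal((4, 4)))
```

The skip connection guarantees the ungated features always pass through, so the gate only modulates how much extra weight each position receives.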
The Concatenate operation splices the feature map produced by the upsampling module with the downsampled feature map using the np.concatenate function. In the Concatenate operation, a BN layer is added before the nonlinear activation function ReLU to keep the input distribution of the previous layer from drifting slowly toward the saturated ends of the nonlinear function: the BN layer normalizes the input data to an N(0, 1) distribution before it is fed to the ReLU activation, which produces clearer gradients during backpropagation, effectively helps the network converge and alleviates gradient dispersion.
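The effect of placing BN before ReLU can be illustrated with a plain NumPy normalization; the learnable scale and shift of a real BN layer are omitted here:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize features to approximately N(0, 1) over the batch axis,
    as the BN layer inserted before ReLU does (affine parameters omitted)."""
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def bn_relu(x):
    # BN first recenters the activations around zero, so ReLU sees inputs
    # straddling its kink instead of a distribution drifted into saturation.
    return np.maximum(batch_norm(x), 0.0)

rng = np.random.default_rng(2)
x = 5.0 + 3.0 * rng.standard_normal((64, 16))   # shifted, scaled inputs
y = bn_relu(x)
```

Without the normalization, every input here would be far into ReLU's positive region and the layer would behave almost linearly; after BN, roughly half the activations are zeroed, restoring the nonlinearity.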
Beneficial effects:
(1) The method of the present invention can accurately identify micro fatigue cracks and measure their length.
(2) In the U-net network structure of the present invention, a first convolution-pooling layer group and a fusion layer connected in sequence are arranged between the upsampling module and the downsampling module, which reduces the amount of computation and improves network speed while enlarging the feature maps to capture local and detail information, so that tiny fretting fatigue cracks can be accurately identified. In addition, splicing the channel dimension through layer fusion reduces the training time of the network model, enlarges the feature maps and increases the resolution. Placing a depthwise separable convolution module in the first convolution-pooling layer group effectively reduces the parameter computation, and setting the kernel size of this separable convolution module to 1×1 reduces the number of channels.
Brief Description of the Drawings
Fig. 1 is a block diagram of the optimized U-net network of Embodiment 1 of the present invention;
Fig. 2 is a block diagram of the layer fusion module of Embodiment 1 of the present invention;
Fig. 3 is a block diagram of the residual block of Embodiment 1 of the present invention;
Fig. 4 is a block diagram of the attention mechanism module of Embodiment 1 of the present invention;
Fig. 5 is a flow chart of crack length calculation for the experimental images of Embodiment 1 of the present invention.
Detailed Description
To provide a clearer understanding of the technical features, objects and beneficial effects of the present invention, an embodiment of the invention is further described below with reference to the accompanying drawings. The embodiment serves only to further illustrate the invention and must not be construed as limiting its protection scope; non-essential improvements and adjustments made by those skilled in the art based on the content of the invention also fall within the protection scope of the invention.
Embodiment 1
This embodiment illustrates the method of the present invention for calculating fretting fatigue crack length, taking as an example the fretting fatigue cracks produced in an engine blade tenon by cyclic vibration. The method specifically comprises the following steps:
S1. Train the optimized U-net network with original images of the fretting fatigue cracks produced in the engine blade tenon by cyclic vibration to obtain a trained U-net network model, which specifically comprises the following steps:
S11. Acquire original images of the fretting fatigue cracks produced in the engine blade tenon by cyclic vibration. To obtain clear original images, this embodiment uses a Zhongji GPS100 high-frequency fatigue testing machine, with a digital microscope placed beside the tenon specimen to monitor crack propagation on both sides of the specimen and collect the image data.
S12. Annotate the crack region in each original image pixel by pixel with the LabelMe software so that the crack region forms a closed figure.
S13. Use the image segmentation toolkit PaddleSeg to batch-organize the annotated original images into a complete dataset and split it into the training, test and validation sets of the optimized U-net model; train the optimized U-net model to obtain the trained U-net network model.
In the U-net architecture, downsampling reduces the probability that a pixel is labeled correctly, and the upsampling path of U-Net has limited ability to recover feature information, so crack width information becomes indistinct and fine cracks go undetected. The optimized U-net network proposed by the present invention adds and fuses a max pooling layer, an upsampling layer and a small-kernel depthwise separable convolution, constructing a new module that enlarges the resolution and extends the network depth to increase the receptive field, while multi-scale connections keep the receptive field and the resolution in balance. By maintaining the maximum resolution under the maximum receptive field, fine cracks can be detected. Specifically, the optimized U-net network is structured as follows.
Fig. 1 is a block diagram of the optimized U-net network of this embodiment. The optimized U-net network comprises an upsampling module, a Concatenate operation, a downsampling module, and a layer fusion module located between the bottom of the upsampling module and the bottom of the downsampling module.
Fig. 2 is a block diagram of the layer fusion module. The layer fusion module comprises a first convolution-pooling layer group and a fusion layer connected in sequence. The first convolution-pooling layer group consists of one depthwise separable convolution module in parallel with four first convolution-pooling layers; each first convolution-pooling layer comprises a max pooling layer with stride 2, a 1×1 depthwise separable convolution module and a 2×2 upsampling layer. The feature map output by the downsampling module is fed into the first convolution-pooling layer group; the fusion layer fuses the input image features by element-wise addition, and the feature map it outputs is fed into the upsampling module. The network applies the nonlinear activation function ReLU to the output of every layer, improving its nonlinear expressiveness and reducing gradient vanishing during convolution.
The feature vectors of this U-net network have a depth of 2048. Referring again to Fig. 2, the upsampling module and the downsampling module each comprise five second convolution layer groups connected in sequence, and each second convolution layer group comprises a residual block, an attention mechanism module and two transposed convolution layers connected in sequence.
Fig. 3 is a block diagram of the residual block. The residual block comprises a depthwise separable convolution, a transposed convolution layer and a BN layer; the input of the residual block is fused with its convolved output in an addition layer via a skip connection.
Fig. 4 is a block diagram of the attention mechanism module, which comprises an encoding block and a decoding block: the encoding block comprises a max pooling layer and a convolution layer, and the decoding block comprises an upsampling layer and a convolution layer. The feature information output by the residual block is fed into the attention mechanism. The attention mechanism tends to be over-parameterized during training; the input of the attention mechanism module is fused via a skip connection with the output classified by the Sigmoid function inside the module, and the fused result forms a single output. This eliminates the redundant parameter computation of the soft attention mechanism and optimizes the convolutional neural network.
The Concatenate operation splices the feature map produced by the upsampling module with the downsampled feature map using the np.concatenate function. In the Concatenate operation, a BN layer is added before the nonlinear activation function ReLU to keep the input distribution of the previous layer from drifting slowly toward the saturated ends of the nonlinear function: the BN layer normalizes the input data to an N(0, 1) distribution before it is fed to the ReLU activation, which produces clearer gradients during backpropagation, effectively helps the network converge and alleviates gradient dispersion.
In this embodiment, a first convolution-pooling layer group and a fusion layer connected in sequence are arranged between the upsampling module and the downsampling module, which reduces the amount of computation and improves network speed while enlarging the feature maps to capture local and detail information. In addition, splicing the channel dimension through layer fusion reduces the training time of the network model, enlarges the feature maps and increases the resolution. Placing a depthwise separable convolution module in the first convolution-pooling layer group effectively reduces the parameter computation, and setting its kernel size to 1×1 reduces the number of channels.
S2. Acquire fretting fatigue crack images of the same class as in step S1 as experimental images; use the trained U-net network model to segment the experimental images and identify the fretting fatigue cracks in them, obtaining segmentation result maps; obtain the medial axis in the crack segmentation result map with the medial axis algorithm; and count the number of pixels on the medial axis with a nonzero-element counting function. The specific detection and identification flow is shown in Fig. 5.
S3. Measure the length of the fretting fatigue crack in the segmentation result map, which specifically comprises the following steps:
S31. Generate the medial axis of the fretting fatigue crack in the segmentation result map using the medial axis algorithm.
S32. Compute the length of the medial axis to obtain the length of the fretting fatigue crack: count the number of pixels on the crack's medial axis and, based on the resolution of the segmentation result map, convert the total pixel count into physical units to obtain the total length of the medial axis. For example, suppose the medial axis of a crack in an image is found to contain 80 pixels, while the image diagonal contains 1000 pixels and measures 43 cm. Then 1000 px = 43 cm, i.e. 1 px = 0.43 mm, and the length of the medial axis is 80 × 0.43 mm = 34.4 mm.
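The worked numbers in step S32 can be checked directly:

```python
# Worked example from step S32: 80 medial-axis pixels; the image diagonal
# spans 1000 px and measures 43 cm (430 mm).
diag_px = 1000
diag_mm = 430.0
mm_per_px = diag_mm / diag_px      # 0.43 mm per pixel

axis_px = 80
length_mm = axis_px * mm_per_px    # 80 x 0.43 mm = 34.4 mm
```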
The relevant content of the present invention has been described above, and those of ordinary skill in the art will be able to implement the invention on the basis of these descriptions. All other embodiments obtained by those of ordinary skill in the art from the above content without creative effort shall fall within the protection scope of the present invention.
Claims (4)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211605743.2A CN116228641A (en) | 2022-12-14 | 2022-12-14 | Micro fatigue crack length calculation method based on U-net network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211605743.2A CN116228641A (en) | 2022-12-14 | 2022-12-14 | Micro fatigue crack length calculation method based on U-net network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116228641A true CN116228641A (en) | 2023-06-06 |
Family
ID=86584901
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211605743.2A Pending CN116228641A (en) | 2022-12-14 | 2022-12-14 | Micro fatigue crack length calculation method based on U-net network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116228641A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117710348A (en) * | 2023-12-21 | 2024-03-15 | 广州恒沙云科技有限公司 | Pavement crack detection method and system based on location information and attention mechanism |
CN117710348B (en) * | 2023-12-21 | 2024-06-11 | 广州恒沙云科技有限公司 | Pavement crack detection method and system based on position information and attention mechanism |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111681240B (en) | A bridge surface crack detection method based on YOLO v3 and attention mechanism | |
CN112488025B (en) | Double-temporal remote sensing image semantic change detection method based on multi-modal feature fusion | |
CN115294038A (en) | A Defect Detection Method Based on Joint Optimization and Hybrid Attention Feature Fusion | |
CN110046550B (en) | Pedestrian attribute recognition system and method based on multi-layer feature learning | |
CN110211045A (en) | Super-resolution face image method based on SRGAN network | |
CN113240671B (en) | Water turbine runner blade defect detection method based on YoloV4-Lite network | |
US11435719B2 (en) | System and method for identifying manufacturing defects | |
CN114359283A (en) | Defect detection method based on Transformer and electronic equipment | |
CN114022770A (en) | Mountain crack detection method based on improved self-attention mechanism and transfer learning | |
CN111507998B (en) | A Deep Cascade-Based Multiscale Excitation Mechanism Tunnel Surface Defect Segmentation Method | |
CN111932511A (en) | Electronic component quality detection method and system based on deep learning | |
CN109784183A (en) | Saliency object detection method based on concatenated convolutional network and light stream | |
CN113393438A (en) | Resin lens defect detection method based on convolutional neural network | |
CN117540779A (en) | A lightweight metal surface defect detection method based on dual-source knowledge distillation | |
CN111753873A (en) | An image detection method and device | |
CN113077444A (en) | CNN-based ultrasonic nondestructive detection image defect classification method | |
CN113657532A (en) | Motor magnetic shoe defect classification method | |
CN115937651A (en) | Cylindrical roller surface detection method and system based on improved yolov5s network model | |
CN115376003A (en) | Pavement Crack Segmentation Method Based on U-Net Network and CBAM Attention Mechanism | |
CN109753906B (en) | An abnormal behavior detection method in public places based on domain transfer | |
CN116228641A (en) | Micro fatigue crack length calculation method based on U-net network | |
CN113516652A (en) | Battery surface defect and adhesive detection method, device, medium and electronic equipment | |
CN116596881A (en) | Workpiece surface defect detection method based on CNN and transducer | |
CN115631186B (en) | Industrial element surface defect detection method based on double-branch neural network | |
CN116166966A (en) | Water quality degradation event detection method based on multi-mode data fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||