CN115661071A - Detection and evaluation method of surface defects in composite material processing based on deep learning - Google Patents
- Publication number
- CN115661071A (application CN202211311389.2A, filed as CN202211311389A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention belongs to the technical field of precision machining inspection and discloses a deep-learning-based method for detecting and evaluating surface defects in composite material machining, comprising: (1) capturing defect images and labeling the image data; (2) dividing the data into a training set and a test set, and applying data augmentation to the dataset; (3) feeding the training set into a deep learning model for training; (4) feeding the test set into the trained model to obtain the category, location, and region area of each defect in the image together with the model evaluation metrics; (5) forming a mapping from the evaluation indicators to an evaluation score. The invention also discloses a corresponding system. The invention enables high-accuracy, high-efficiency detection of machining surface defects in particle-reinforced composites and quantitative evaluation of the detected defects, which can effectively guide the selection of optimal process parameters; it is therefore particularly suitable for applications involving the detection and evaluation of machining surface defects in particle-reinforced composite materials.
Description
Technical Field
The invention belongs to the technical field of precision machining inspection and, more specifically, relates to a deep-learning-based method for detecting and evaluating surface defects in composite material machining.
Background Art
With the application of spacecraft in space and near space, increasing attention is being paid to the structural volume, weight, and manufacturing accuracy of the core optical components of the various onboard systems; lightweight design and ultra-precision manufacturing of high-performance optical components are the only route to aerospace equipment with excellent overall performance. A single material can rarely satisfy certain industrial requirements, so particle-reinforced composite materials capable of meeting these requirements are widely used.
Composite materials have a small thermal expansion coefficient, high strength and stiffness, and excellent wear, corrosion, and high-temperature resistance, making them ideal new lightweight materials. However, because the physical and mechanical properties of the metal matrix and the reinforcing phase differ greatly, various defects readily form during machining, commonly including furrows, cracks, particle fracture, protrusions, and interface spalling, all of which seriously degrade material performance.
In the prior art, the methods used to inspect machined surfaces of particle-reinforced materials mainly include traditional visual inspection, ultrasonic testing, X-ray testing, and high-frequency pulsed eddy-current testing. Manual inspection relies on subjective judgment and struggles to find small defects, so both its efficiency and accuracy are low. Ultrasonic testing is currently the most common and widely used inspection technology for composites, but its defect display is not intuitive, qualitative and quantitative characterization of defects is difficult, a couplant is required, and it is mainly suited to internal defects. Eddy-current testing requires the material itself to be conductive and needs professional analysis and judgment. Most of these methods rely on manual or semi-manual defect judgment, resulting in low efficiency.
Accordingly, further research and improvement are urgently needed in this field to better meet the demand for high-precision, high-efficiency detection and quantitative evaluation of machined composite surfaces.
Summary of the Invention
In view of the above defects or needs of the prior art, the object of the present invention is to provide a deep-learning-based method for detecting and evaluating surface defects in composite material machining. By fully considering the characteristics of such defects, selecting a deep learning algorithm, and designing targeted training, testing, and evaluation operations, the method further improves the model's ability to detect machining surface defects compared with the prior art, achieving high-accuracy, high-efficiency defect detection and a more comprehensive, quantitative evaluation result.
To achieve the above object, according to one aspect of the present invention, a deep-learning-based method for detecting and evaluating surface defects in composite material machining is provided, characterized in that the method comprises:
Step 1. Image acquisition and annotation
Capturing images of the machined composite surface, assembling the obtained images into an image set, and performing defect annotation, thereby forming an image dataset;
Step 2. Dataset division and data augmentation
Dividing the image dataset formed in Step 1 into a training set and a test set, used respectively to train and to test the model, while applying image data augmentation to the divided training set to enlarge its scale;
Step 3. Defect detection model training
Feeding the training set obtained in Step 2 into the deep learning model for model training;
Step 4. Defect detection model testing
Feeding the test set obtained in Step 2 into the trained deep learning model for testing, obtaining the defect category, defect location, defect depth, defect area, defect area ratio, and the length and width of the minimum bounding rectangle of the defect region in each image, thereby obtaining the corresponding defect evaluation indicator values;
Step 5. Defect evaluation
Based on the defect evaluation indicator values obtained in Step 4, combined with preset defect evaluation score criteria, forming the mapping between the evaluation indicators and the evaluation score, thereby completing the entire detection and evaluation process.
Further preferably, in Step 1, the number of images obtained is preferably not less than 800, with preferably not fewer than 200 images per defect type; the defect annotation preferably includes the defect type, the coordinates of the defect bounding box, the boundary points of each defect instance, the area of each defect instance, and the like.
Further preferably, in Step 2, the training set and test set are preferably divided in an 8:2 ratio, and methods such as rotation, scaling, shearing, Mosaic, and CutMix may be used to perform the image data augmentation of the training set.
Further preferably, the deep learning model is preferably configured as follows: its backbone uses a ResNet network for feature extraction, combined with an FPN network to output feature maps of different sizes; proposals are generated by an RPN network, and the Fast-RCNN network performs category prediction and position refinement on the proposals generated by the RPN; the Mask branch generates masks for all categories and extracts the mask corresponding to the predicted category.
Further preferably, in Step 3, the model training process is preferably designed as follows:
K-means clustering is applied to the defect annotation boxes of the training set to obtain suitable anchor sizes; feature maps at different levels are extracted by the backbone and FPN networks, proposals are obtained through the RPN network, and each proposal is mapped back onto the feature map of the corresponding level to obtain proposal feature maps, the correspondence being:
k = ⌊k0 + log2(√(w·h) / S)⌋
where k0 is the level onto which a proposal with w·h = S² is mapped, and w and h are the width and height of the proposal, respectively;
In addition, RoIAlign converts the proposal feature maps of different levels to a common size, which then passes through two fully connected layers and finally two parallel fully connected layers to predict the category of each feature map and the offsets of each proposal; during training, the inputs to the Mask branch are the proposals provided by the RPN.
Further preferably, in Step 3, the training loss preferably includes the RPN network loss, the Fast-RCNN loss, and the Mask loss, wherein
the associated RPN loss function is designed as:
L({p_i}, {t_i}) = (1/N_cls)·Σ_i L_cls(p_i, p_i*) + λ·(1/N_reg)·Σ_i p_i*·L_reg(t_i, t_i*)
where N_cls is the number of candidate boxes sampled per image for computing the loss, p_i is the predicted probability that the i-th anchor is a positive sample, p_i* is 1 for a positive sample and 0 for a negative one, N_reg is the number of anchor locations, t_i is the predicted regression parameter of the i-th anchor, and t_i* is the regression parameter of the ground-truth box (GT box) corresponding to the i-th anchor;
the associated Fast-RCNN loss function is designed as:
L(p, u, t^u, v) = L_cls(p, u) + λ[u ≥ 1]·L_loc(t^u, v)
where t^u is the predicted bounding-box regression parameter for category u and v is the bounding-box regression parameter of the ground-truth target;
the associated Mask loss function is designed as:
L(m, n) = L_BCE(m, n)
where m is the mask for the predicted category and n is the ground-truth (GT) mask.
Further preferably, in Step 4, the model testing process is preferably designed as follows:
Multiple feature maps are obtained through the backbone and FPN; the RPN generates the proposals corresponding to each feature map, and these proposals are mapped onto the corresponding feature maps to obtain proposal feature maps; then, through RoIAlign, two fully connected layers, and two parallel fully connected layers, the predicted category and the associated offsets of each proposal are obtained; the offset proposals output by the Fast-RCNN network are mapped back to the feature maps, resized by RoIAlign, and input to the Mask branch, and the mask corresponding to the target's predicted category is selected and mapped back onto the original image.
Further preferably, in Step 4, the defect area, the minimum bounding rectangle of the defect region, and the defect type can preferably be obtained by the deep-learning-based object detection algorithm, wherein the defect area can preferably be converted from the pixel area and the image scale, and the minimum bounding rectangle of the defect region can preferably be obtained with the rotating calipers algorithm.
Further preferably, in Step 5, the defect evaluation score criteria may preferably be selected from mechanical properties, physical properties, chemical properties, service life, and the like, where the mechanical properties further include yield strength, shear modulus, reduction of area, etc., the physical properties further include resistivity, thermal conductivity, refractive index, etc., and the chemical properties further include corrosion resistance, oxidation resistance, etc.; the relative contribution of these parameters is adjusted by weighting coefficients.
Further preferably, in Step 5, the mapping between the evaluation indicators and the evaluation score is preferably established by an artificial neural network, where the defect category is reflected by weighting the corresponding parameters of the different defect types.
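The weighted mapping from indicators to a score described above can be sketched as follows; the indicator names, weights, normalization to [0, 1], and the 0-100 score range are illustrative assumptions, not values fixed by this method (which may instead use an artificial neural network for the mapping):

```python
# Illustrative sketch: map normalized defect-evaluation indicators to a single
# score via weighting coefficients. Indicator names and weights are assumed.

def defect_score(indicators, weights, defect_type_weight=1.0):
    """Weighted score in [0, 100]; higher means a more severe defect.

    indicators:         dict of indicator name -> value normalized to [0, 1]
    weights:            dict of indicator name -> weighting coefficient
    defect_type_weight: per-category multiplier reflecting the defect type
    """
    total_w = sum(weights.values())
    weighted = sum(weights[k] * indicators[k] for k in weights)
    return 100.0 * defect_type_weight * weighted / total_w

# Example: a deep, large crack scores higher than a shallow furrow.
crack = {"area_ratio": 0.8, "depth": 0.6, "bbox_extent": 0.5}
furrow = {"area_ratio": 0.2, "depth": 0.1, "bbox_extent": 0.3}
w = {"area_ratio": 0.5, "depth": 0.3, "bbox_extent": 0.2}

crack_score = defect_score(crack, w, defect_type_weight=1.2)   # cracks weighted up
furrow_score = defect_score(furrow, w, defect_type_weight=0.8)
```

The defect-type multiplier stands in for the per-category weighting the claim mentions; in practice those coefficients would be tuned against the chosen performance criteria (yield strength, service life, etc.).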
According to another aspect of the present invention, a corresponding deep-learning-based system for detecting and evaluating surface defects in composite material machining is also provided, characterized in that the system comprises:
an image acquisition and annotation module, used to capture images of the machined composite surface, assemble the obtained images into an image set, and perform defect annotation, thereby forming an image dataset;
a dataset division and data augmentation module, used to divide the formed image dataset into a training set and a test set, used respectively to train and to test the model, and to apply image data augmentation to the divided training set to enlarge its scale;
a defect detection model training module, used to feed the obtained training set into the deep learning model for model training;
a defect detection model testing module, used to feed the obtained test set into the trained deep learning model for testing, obtaining the defect category, defect location, defect depth, defect area, defect area ratio, and the length and width of the minimum bounding rectangle of the defect region in each image, thereby obtaining the corresponding defect evaluation indicator values;
a defect evaluation module, used to form the mapping between the evaluation indicators and the evaluation score based on the obtained defect evaluation indicator values combined with preset defect evaluation score criteria, thereby completing the entire detection and evaluation process.
In general, compared with the prior art, the above technical solution conceived by the present invention has the following beneficial effects:
(1) Compared with traditional visual inspection, ultrasonic testing, and X-ray testing, the present invention can effectively improve the efficiency and accuracy of defect identification and avoid the subjective interference introduced by manual or semi-manual judgment in traditional methods;
(2) Through a large number of defect samples and data augmentation, the present invention can effectively improve the generalization ability of the defect detection model;
(3) By combining the feature extraction network with an FPN network, the present invention can effectively improve the ability to identify defects of different sizes and reduce the missed-detection rate;
(4) The present invention further proposes a defect evaluation system, which remedies the problem that defect assessment has relied on the subjective judgment of inspectors without a sound quantitative evaluation method; by quantifying defects to guide machining, it can effectively advance the processing of particle-reinforced composites toward intelligent manufacturing.
Brief Description of the Drawings
Fig. 1 is the overall flowchart of the deep-learning-based method for detecting and evaluating surface defects in composite material machining according to the present invention;
Fig. 2 is a schematic diagram exemplarily showing the defect detection model according to a preferred embodiment of the present invention;
Fig. 3 is a schematic diagram exemplarily showing the feature extraction network combined with an FPN according to a preferred embodiment of the present invention;
Fig. 4 is a schematic flowchart exemplarily showing the defect evaluation stage according to a preferred embodiment of the present invention;
Fig. 5 is a schematic diagram of the actual output of the defect detection model according to a preferred embodiment of the present invention.
Detailed Description of the Embodiments
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Fig. 1 is the overall flowchart of the deep-learning-based method for detecting and evaluating surface defects in composite material machining according to the present invention. The invention is explained in more detail below with reference to Fig. 1.
First is the image acquisition and annotation step.
In this step, images of the machined composite surface are captured, the obtained images are assembled into an image set, and defect annotation is performed, thereby forming an image dataset.
More specifically, images of the machined surface of a particle-reinforced composite may, for example, be taken with an electron microscope, the acquired images assembled into an image set, and the set annotated for defects with suitable data annotation software to form the image dataset.
For example, 1000 images of defects in aluminum-based silicon carbide may be acquired by SEM, with not fewer than 200 images per defect type. A COCO dataset is produced with labelme, annotating the defect type, the defect bounding-box information (top-left coordinates, height, and width), the polygon boundary points of each defect instance, and the defect area.
Next is the dataset division and data augmentation step.
In this step, the image dataset formed above is divided into a training set and a test set, used respectively to train and to test the model, while image data augmentation is applied to the divided training set to enlarge its scale.
More specifically, the image data may preferably be augmented to 3000 images by rotation, scaling, flipping, CutMix, and the like, and the images divided into training and test sets in an 8:2 ratio, i.e., 2400 training images and 600 test images.
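The 8:2 division above can be sketched as follows; the file-name pattern and random seed are assumptions for illustration:

```python
import random

def split_dataset(items, train_ratio=0.8, seed=42):
    """Shuffle and split a list of image identifiers into train/test subsets."""
    items = list(items)
    random.Random(seed).shuffle(items)           # reproducible shuffle
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]

# 3000 augmented images -> 2400 for training, 600 for testing
image_ids = [f"defect_{i:04d}.png" for i in range(3000)]
train_set, test_set = split_dataset(image_ids)
```

Shuffling before the split avoids grouping all augmented variants of one defect type on the same side of the split.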
Next is the defect detection model training step.
In this step, the obtained training set is fed into the deep learning model for model training.
More specifically, the present invention may, for example, adopt the Mask-RCNN algorithm. As shown in Fig. 2, the model in this embodiment comprises a feature extraction network (ResNet50+FPN), an RPN network, a Fast-RCNN network, and a Mask branch. As shown in Fig. 3, a feature extraction network with an FPN can effectively detect defects of different sizes. During training, K-means clustering is first applied to the defect annotation boxes of the training set to obtain suitable anchor sizes, and the backbone is pre-trained before the whole model is trained, using transfer learning to address the small data volume and slow network convergence. During training, the output of the RPN is used by both the Fast-RCNN network and the Mask branch, each of which maps the proposals back to feature maps of the corresponding level according to their size, the mapping formula being:
k = ⌊k0 + log2(√(w·h) / S)⌋
where k0 is the level onto which a proposal with w·h = S² is mapped, and w and h are the width and height of the proposal, respectively.
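As a sketch, the level-assignment formula can be written out directly; the values k0 = 4 and S = 224 follow the common FPN convention and are assumptions here, as is clamping the result to levels 2-5:

```python
import math

def fpn_level(w, h, k0=4, S=224, k_min=2, k_max=5):
    """Assign a proposal of width w and height h to an FPN level:
    k = floor(k0 + log2(sqrt(w*h) / S)), clamped to the available levels.
    A proposal with w*h == S**2 lands exactly on level k0."""
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / S))
    return max(k_min, min(k_max, k))

# Large proposals map to coarse levels, small proposals to fine ones.
level_base = fpn_level(224, 224)   # w*h == S**2 -> k0
level_half = fpn_level(112, 112)   # quarter area -> one level finer
level_tiny = fpn_level(32, 32)     # clamped to the finest level
```

This is what routes big defects (e.g. long furrows) to low-resolution feature maps and small ones (e.g. particle fracture) to high-resolution maps.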
The two branches resize the proposal feature maps with two different RoIAlign operations, whose associated weights need to be trained separately.
According to a preferred embodiment of the present invention, the total training loss includes the RPN network loss, the Fast-RCNN loss, and the Mask loss.
RPN loss function:
L({p_i}, {t_i}) = (1/N_cls)·Σ_i L_cls(p_i, p_i*) + λ·(1/N_reg)·Σ_i p_i*·L_reg(t_i, t_i*)
where N_cls is the number of candidate boxes sampled per image for computing the loss, p_i is the predicted probability that the i-th anchor is a positive sample, p_i* is 1 for a positive sample and 0 for a negative one, N_reg is the number of anchor locations, t_i is the predicted regression parameter of the i-th anchor, and t_i* is the regression parameter of the ground-truth box (GT box) corresponding to the i-th anchor.
Fast-RCNN loss function:
L(p, u, t^u, v) = L_cls(p, u) + λ[u ≥ 1]·L_loc(t^u, v)
where t^u is the predicted bounding-box regression parameter for category u and v is the bounding-box regression parameter of the ground-truth target.
Mask loss function:
L(m, n) = L_BCE(m, n)
where m is the mask for the predicted category and n is the ground-truth (GT) mask.
In addition, the model may be trained with the PyTorch framework on a GeForce RTX 3080, using SGD with momentum for gradient updates; the momentum parameter is set to 0.9, the learning rate to 0.0004, and the weight decay to 0.0001, for a total of 26 epochs, with the learning rate decayed by a factor of 0.1 at the 16th and 22nd epochs.
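The step schedule above (base learning rate 0.0004, decayed by a factor of 0.1 at the 16th and 22nd of 26 epochs) can be written as a plain function; the zero-based epoch indexing is an assumption:

```python
def learning_rate(epoch, base_lr=4e-4, milestones=(16, 22), gamma=0.1):
    """Step learning-rate schedule: multiply by gamma at each milestone epoch."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Full 26-epoch schedule: 4e-4 for epochs 0-15, 4e-5 for 16-21, 4e-6 for 22-25
schedule = [learning_rate(e) for e in range(26)]
```

In a PyTorch training loop the same effect is usually obtained with a multi-step scheduler; the function above just makes the decay points explicit.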
接着,是缺陷检测模型测试步骤。Next, is the defect detection model testing step.
在此步骤中,将前面得到的测试集输入到已训练好的深度学习模型中进行测试,得到图像中的缺陷类别、缺陷位置、缺陷深度、缺陷面积、缺陷面积占比、缺陷区域最小外接矩形长宽等信息,从而获取对应的缺陷评价指标值。In this step, input the test set obtained above into the trained deep learning model for testing, and obtain the defect category, defect position, defect depth, defect area, defect area ratio, and the smallest circumscribed rectangle of the defect area in the image. information such as length and width, so as to obtain the corresponding defect evaluation index value.
More specifically, using the weights obtained from training, the test-set images are input into the defect detection model. The test procedure is shown in Figure 3: the Fast R-CNN network first produces the final prediction boxes and predicted categories, which are then fed into the Mask branch to obtain the corresponding mask for each category; the mask is mapped back to the original image to obtain the required parameters such as defect category, position, and area. The model evaluation metric for object detection and image segmentation is mAP, the mean of the average precision over all categories; a larger mAP indicates better model performance. If the mAP on the final test set meets the requirements, the trained model can be used to detect actual defect images.
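As an illustration of the mAP metric, the per-class average precision can be computed from the confidence-sorted detections and then averaged over classes. A minimal all-point-interpolation sketch (not the authors' evaluation code; real benchmarks such as COCO additionally average over IoU thresholds):

```python
def average_precision(tp_flags, n_gt):
    # tp_flags: for each detection of one class, sorted by descending confidence,
    # True if it matched a ground-truth instance (IoU above threshold), else False
    # n_gt: total number of ground-truth instances of that class
    tp = fp = 0
    precisions, recalls = [], []
    for hit in tp_flags:
        tp += hit
        fp += not hit
        precisions.append(tp / (tp + fp))
        recalls.append(tp / n_gt)
    # make the precision envelope monotonically non-increasing, then
    # integrate precision over recall (all-point interpolation)
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_r)
        prev_r = r
    return ap

def mean_average_precision(per_class_aps):
    # mAP: mean of the per-class average precisions
    return sum(per_class_aps) / len(per_class_aps)
```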
Finally comes the defect evaluation step.
In this step, based on the obtained defect evaluation index values and combined with the preset defect evaluation score criteria, a mapping between the evaluation indices and the evaluation scores is established, completing the entire detection and evaluation process.
More specifically, defect evaluation in the present invention includes selecting defect evaluation indices and evaluation score criteria, and forming a mapping from the evaluation indices to the evaluation scores. The entire defect evaluation process is shown in Figure 4. The defect evaluation indices include defect area, defect area ratio, length and width of the minimum circumscribed rectangle of the defect region, defect type, defect depth, and so on. The defect type is obtained directly from the deep learning model; indices such as defect area, defect area ratio, and the length and width of the minimum circumscribed rectangle are obtained by post-processing the deep learning model's output; indices such as defect depth, which cannot be obtained either directly or indirectly from a two-dimensional image, must be acquired by other measurement means. For example, the defect area can be converted from the pixel area and the image scale, the minimum circumscribed rectangle of the defect region can be obtained with the rotating calipers algorithm, and the defect depth can be measured by SEM.
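The two post-processing steps named above can be sketched as follows: a pixel-to-area conversion using the image scale, and a minimum-area circumscribed rectangle via the rotating calipers idea (every candidate rectangle has one side collinear with a convex-hull edge). This is an illustrative pure-Python version; in practice a library routine such as OpenCV's `cv2.minAreaRect` would typically be used:

```python
import math

def defect_area_mm2(pixel_count, mm_per_pixel):
    # convert a mask's pixel count to physical area using the image scale
    return pixel_count * mm_per_pixel ** 2

def convex_hull(points):
    # Andrew's monotone chain; returns hull vertices in CCW order
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def min_area_rect(points):
    # rotating calipers: the minimum-area enclosing rectangle has one side
    # collinear with a hull edge, so try every edge and keep the smallest
    hull = convex_hull(points)
    best = None
    n = len(hull)
    for i in range(n):
        (x1, y1), (x2, y2) = hull[i], hull[(i + 1) % n]
        norm = math.hypot(x2 - x1, y2 - y1)
        ux, uy = (x2 - x1) / norm, (y2 - y1) / norm   # edge direction
        vx, vy = -uy, ux                               # edge normal
        us = [x * ux + y * uy for x, y in hull]
        vs = [x * vx + y * vy for x, y in hull]
        w, h = max(us) - min(us), max(vs) - min(vs)
        if best is None or w * h < best[0] * best[1]:
            best = (w, h)
    return best  # (length, width) of the minimum circumscribed rectangle
```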
The evaluation score criteria can be chosen from mechanical properties (yield strength, shear modulus, reduction of area, etc.), physical properties (resistivity, thermal conductivity, refractive index, etc.), chemical properties (corrosion resistance, oxidation resistance, etc.), service life, and so on; the parameters can be selected according to the material's application background. When multiple parameters are selected, normalization is needed because the units and numerical magnitudes of different parameters differ, and weighting coefficients are used to adjust the relative contribution of each parameter. For example, when machining an aluminum-based silicon carbide material to make a space mirror, performance parameters such as reflectivity, thermal conductivity, and yield strength must meet the usage requirements; the selected parameters are normalized to the interval [0, 1] and the weighting coefficients sum to 1, so the score formula can be expressed as:
score = w_1·a + w_2·b + w_3·c
where a, b, and c are the normalized values of reflectivity, thermal conductivity, and yield strength, and w_1, w_2, and w_3 are the weighting coefficients of the corresponding parameters, with w_1 + w_2 + w_3 = 1.
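The normalization and weighted-sum scoring above can be sketched directly (a minimal illustration; the choice of normalization range and weights is application-specific):

```python
def normalize(value, lo, hi):
    # min-max normalization of a raw property value into [0, 1],
    # given its expected range [lo, hi]
    return (value - lo) / (hi - lo)

def weighted_score(values, weights):
    # values: normalized indicator values a, b, c, ...
    # weights: the corresponding coefficients w_1, w_2, w_3, ... summing to 1
    assert abs(sum(weights) - 1.0) < 1e-9, "weighting coefficients must sum to 1"
    return sum(w * v for w, v in zip(weights, values))
```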
According to another preferred embodiment of the present invention, the mapping between the evaluation indices and the evaluation score is established by an artificial neural network; that is, the mapping weights are self-learned from a large amount of data through multiple layers of neurons and nonlinear activation functions. The defect type is reflected by weighting the parameters of each defect type separately. For example, for the particle fracture and scratch defects common in aluminum-based silicon carbide, the relevant parameters of the two defect types are fed in as separate input neurons during neural network mapping, and the weights representing the defect type are likewise self-learned by backpropagation.
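A minimal forward-pass sketch of such a mapping network is given below. The input layout (six indicator neurons, eight hidden units) and the randomly initialized weights are hypothetical; in the embodiment described, the weights would be learned from scored examples by backpropagation:

```python
import numpy as np

def mlp_score(x, w1, b1, w2, b2):
    # two-layer perceptron mapping a defect indicator vector to a score in (0, 1)
    h = np.maximum(x @ w1 + b1, 0.0)                      # hidden layer, ReLU
    return float(1.0 / (1.0 + np.exp(-(h @ w2 + b2))))    # sigmoid output

# hypothetical layout: 6 inputs, with per-type parameters fed as separate
# neurons (e.g. fracture area/depth, scratch area/length/width/depth)
rng = np.random.default_rng(0)
w1 = rng.normal(scale=0.5, size=(6, 8)); b1 = np.zeros(8)
w2 = rng.normal(scale=0.5, size=(8,));   b2 = 0.0
x = np.array([0.12, 0.03, 0.40, 0.20, 0.05, 0.30])        # normalized indicators
score = mlp_score(x, w1, b1, w2, b2)
```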
In summary, the present invention achieves high-accuracy, high-efficiency detection of machined surface defects in particle-reinforced composites and quantitative evaluation of the detected defects, and can effectively guide the selection of optimal process parameters. It is therefore particularly suited to applications in surface defect detection and evaluation of particle-reinforced composites.
Those skilled in the art will readily understand that the above describes only preferred embodiments of the present invention and is not intended to limit it; any modifications, equivalent replacements, and improvements made within the spirit and principles of the present invention shall all fall within its scope of protection.
Publication: CN115661071A (pending), published 2023-01-31.