CN112669292B - A method for detecting and classifying surface defects of aircraft skin spray paint - Google Patents
- Publication number: CN112669292B (application CN202011626199.0A)
- Authority: CN (China)
- Legal status: Active (the listed status is an assumption, not a legal conclusion)
Classifications
- Y02P90/30 - Computing systems specially adapted for manufacturing (Y02P: climate change mitigation technologies in the production or processing of goods)
Abstract
The invention discloses a method for detecting and classifying surface defects of aircraft skin spray paint, comprising the following steps: S1, collecting images of the painted surface of the aircraft skin; S2, performing binary defect detection on the images collected in step S1 based on surface smoothness; S3, performing multi-class detection of the spray-paint surface defects based on a simplified GoogLeNet convolutional neural network model; S4, outputting the multi-class results. The method fuses a traditional image-based binary defect-detection method with a deep-learning multi-class defect framework built on the GoogLeNet convolutional neural network model. Combining binary and multi-class classification not only improves the computational efficiency and robustness of the algorithm, but also shortens its execution time and raises the accuracy of aircraft-skin surface-defect detection, effectively meeting the need for intelligent detection and classification of aircraft skin spray-paint surface defects.
Description
Technical Field

The invention relates to a method for detecting and classifying surface defects of aircraft skin spray paint, and belongs to the technical field of defect detection.
Background Art

The surface-coating process of an aircraft skin is complex. Once the coating debonds from the base material or cracks develop in service, not only is the appearance affected, but the operational safety of the aircraft is seriously compromised; detecting aircraft-skin surface defects is therefore an important means of verifying the factory quality of an aircraft. These defects come in many types, the more typical of which include particles, scratches, debris, runs, orange peel, and over-thick flanging. In most cases they measure on the millimeter scale, with small and unevenly dispersed defect areas. Traditional defect detection relies mainly on manual visual inspection, which suffers from high labor cost, low efficiency, and poor real-time performance, and whose results are easily affected by subjective factors such as the inspector's eyesight and physical condition. As the automation of aircraft manufacturing increases, manual visual inspection can no longer meet the needs of development, so the field urgently needs automatic, intelligent methods for detecting aircraft-skin surface defects.

At present, the automatic detection of aircraft-skin surface defects in China is still in its infancy. Existing detection methods often suffer from heavy computation, poor accuracy, and low reliability, and struggle to meet industrial requirements. How to shorten algorithm execution time while improving the detection accuracy of aircraft-skin surface defects has therefore become an urgent technical problem.
Summary of the Invention

In view of the above problems in the prior art, the purpose of the present invention is to provide a method for detecting and classifying surface defects of aircraft skin spray paint.

To achieve the above purpose, the invention adopts the following technical scheme:

A method for detecting and classifying surface defects of aircraft skin spray paint, comprising the following steps:

S1. Collect images of the painted surface of the aircraft skin.

S2. Perform binary defect detection on the images collected in step S1 based on surface smoothness, namely: first preprocess the collected image; then perform block detection and localization on the preprocessed image to find the defective pixels; then use an information-entropy function to estimate the surface smoothness of the defective pixels against the pixels of the whole image, judge from the surface smoothness whether the image contains defects, and divide the images into defective and non-defective images, thereby realizing binary defect detection.

S3. Perform multi-class detection of the spray-paint surface defects based on a simplified GoogLeNet convolutional neural network model, namely: first construct the simplified GoogLeNet model, then use it to classify the defective images obtained by the binary classification of step S2. The GoogLeNet convolutional neural network model comprises five levels: the first and second levels are ordinary convolutional layers; the third level consists of Inception-A and Inception-B modules; the fourth level consists of Inception-A, Inception-B, Inception-C, Inception-D, and Inception-E modules; and the fifth level consists of Inception-A and Inception-B modules. The simplified GoogLeNet model is the GoogLeNet model with its Inception-B and Inception-C modules removed. In the present invention, the simplified model runs faster than the full GoogLeNet model while its accuracy is not reduced, effectively shortening algorithm execution time.

S4. Output the multi-class results.
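The S1-S4 flow above forms a two-stage cascade: a cheap binary gate (S2) filters out non-defective images, so only flagged images reach the multi-class network (S3). A minimal sketch of this control flow, with hypothetical `is_defective` and `classify_defect` stubs standing in for the real detectors:

```python
# Illustrative cascade only: the stubs below are placeholders, not the
# patented detectors.

def is_defective(image_id: str) -> bool:
    """Placeholder binary detector (step S2); the real version uses
    entropy-based smoothness estimation on the preprocessed image."""
    return image_id.startswith("bad")

def classify_defect(image_id: str) -> str:
    """Placeholder multi-class model (step S3); the real version is the
    simplified GoogLeNet network."""
    return "bubble"  # e.g. bubble, particle, debris, run, flanging

def inspect(images):
    """Steps S1-S4: route each image through the cascade and collect
    per-image labels (step S4 outputs the multi-class results)."""
    results = {}
    for img in images:
        if is_defective(img):                    # S2: binary gate
            results[img] = classify_defect(img)  # S3: multi-class
        else:
            results[img] = "no-defect"
    return results

print(inspect(["ok_001", "bad_002"]))
```

The gate keeps the expensive network off the (typically dominant) defect-free images, which is where the claimed speedup of the combined framework comes from.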
In one embodiment, in step S2, the preprocessing of the images collected in step S1 comprises Gaussian filtering (to denoise the image), histogram equalization (to enhance the image), Otsu adaptive threshold segmentation, and erosion.

In one embodiment, in step S2, blob analysis is used to perform block detection on the preprocessed image.

In a preferred scheme, the binary defect detection of step S2 proceeds as follows: first apply Gaussian filtering, histogram equalization, Otsu adaptive threshold segmentation, and erosion to the image collected in step S1; then use blob analysis on the preprocessed image to find the defective pixels; then use an information-entropy function to estimate the surface smoothness of the defective pixels against the whole image, and set a surface-smoothness threshold (the threshold represents a count of defective pixels: when the number of defective pixels reaches it, the image is judged defective) to divide the images into defective and non-defective images, realizing binary defect detection.
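As an illustration of the smoothness idea (this is the standard Shannon-entropy estimator over the gray-level histogram, not necessarily the exact function used in the patent): a smooth painted patch concentrates its histogram into few bins and yields low entropy, while a defective patch spreads intensities and yields high entropy.

```python
import numpy as np

def surface_entropy(patch: np.ndarray) -> float:
    """Shannon entropy (bits) of the gray-level histogram of an
    8-bit grayscale patch; lower entropy = smoother surface."""
    hist = np.bincount(patch.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins (0 * log 0 is taken as 0)
    return float(-(p * np.log2(p)).sum())

# Uniform patch: one intensity only -> entropy 0 (perfectly smooth).
smooth = np.full((32, 32), 128, dtype=np.uint8)
# Varied patch: all 256 intensities equally often -> entropy 8 bits.
rough = (np.arange(1024) % 256).astype(np.uint8).reshape(32, 32)

print(surface_entropy(smooth))  # 0.0
print(surface_entropy(rough))   # 8.0
```

A binary decision then reduces to comparing such an entropy-based score (or, as in the preferred scheme, a count of pixels flagged as defective) against a threshold.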
In one embodiment, step S3 specifically comprises the following operations:

S31. Data-set production: use the collected images of the aircraft skin spray-paint surface to build a data set, and divide it into a training-sample data set and a test-sample data set.

S32. Construct the GoogLeNet convolutional neural network model: the model comprises five levels, namely a first, second, third, fourth, and fifth level; the first and second levels are ordinary convolutional layers, the third level consists of Inception-A and Inception-B modules, the fourth level consists of Inception-A, Inception-B, Inception-C, Inception-D, and Inception-E modules, and the fifth level consists of Inception-A and Inception-B modules.

S33. Construct the simplified GoogLeNet convolutional neural network model: remove the Inception-B and Inception-C modules from the model of step S32.

S34. Train the simplified GoogLeNet model: feed the images of the training-sample data set into the simplified model for feature learning, and obtain the optimal simplified model.

S35. Test the simplified GoogLeNet model: feed the test-sample data set into the trained simplified model and verify its accuracy.

S36. Multi-class detection: feed the defective images produced by the binary classification of step S2 into the tested simplified GoogLeNet model, which performs the multi-class detection of the aircraft-skin spray-paint surface defects.
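The accuracy check of step S35 amounts to comparing the model's predictions on the test-sample data set against the worker-assigned labels. A minimal sketch, with illustrative label names (the label set and `preds` values are invented for the demo):

```python
def accuracy(predictions, labels):
    """Fraction of test samples whose predicted class matches its label."""
    assert len(predictions) == len(labels) and labels
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

labels = ["bubble", "particle", "run", "flanging", "debris"]
preds  = ["bubble", "particle", "run", "bubble",   "debris"]
print(accuracy(preds, labels))  # 0.8
```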
In a preferred scheme, the data set produced in step S31 contains images of the common defect types found on aircraft skin spray-paint surfaces, including but not limited to bubbles, particles, debris, runs, and flanging. During data-set production, the specific defect type of each image can be identified by professional spray-painting workers.

In a preferred scheme, the simplified GoogLeNet model constructed in step S33 comprises, in order, a first convolutional layer, a second convolutional layer, a third Inception-A module, a fourth Inception-A module, a fourth Inception-D module, a fourth Inception-E module, and a fifth Inception-A module. The operations of each convolutional layer and module are as follows:

The first convolutional layer first applies 64 7×7 convolution kernels, then performs max pooling with a 3×3 kernel, both with a stride of 2.

The second convolutional layer first applies 192 3×3 convolution kernels with a stride of 1, then performs max pooling with a 3×3 kernel and a stride of 2.

The third Inception-A module comprises four branches: 1) 64 1×1 convolution kernels with a stride of 1; 2) 96 1×1 convolution kernels followed by 128 3×3 convolution kernels, all with a stride of 1; 3) 16 1×1 convolution kernels followed by 32 5×5 convolution kernels, all with a stride of 1; 4) max pooling with a 3×3 kernel followed by 32 1×1 convolution kernels, all with a stride of 1. Finally, the outputs of the four branches are concatenated, and processing continues with the next step.

The fourth Inception-A module is identical to the third Inception-A module.

The fourth Inception-D module is identical to the third Inception-A module.

The fourth Inception-E module comprises five branches: 1) 64 1×1 convolution kernels with a stride of 1; 2) 96 1×1 convolution kernels followed by 128 3×3 convolution kernels, all with a stride of 1; 3) 16 1×1 convolution kernels followed by 32 5×5 convolution kernels, all with a stride of 1; 4) max pooling with a 3×3 kernel followed by 32 1×1 convolution kernels, all with a stride of 1; 5) average pooling with a 5×5 kernel and a stride of 3, followed by 32 1×1 convolution kernels with a stride of 1, then two fully connected layers, then a Softmax activation function for auxiliary classification. Finally, the outputs of the five branches are concatenated, and processing continues with the next step.

The fifth Inception-A module is identical to the third Inception-A module.

In every convolutional layer and module, a ReLU activation function is applied after each convolution and pooling operation.
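The layer shapes listed above can be sanity-checked with the standard convolution-size formula. The 224×224 input size (the usual GoogLeNet input) and the padding values below are assumptions; the patent specifies only kernel sizes, kernel counts, and strides.

```python
def conv_out(size: int, kernel: int, stride: int, pad: int) -> int:
    """Spatial output size of a convolution or pooling:
    floor((W - K + 2P) / S) + 1."""
    return (size - kernel + 2 * pad) // stride + 1

s = 224                     # assumed input resolution
s = conv_out(s, 7, 2, 3)    # first conv layer, 7x7 stride 2  -> 112
s = conv_out(s, 3, 2, 1)    # 3x3 max pool, stride 2          -> 56
s = conv_out(s, 3, 1, 1)    # second conv layer, 3x3 stride 1 -> 56
s = conv_out(s, 3, 2, 1)    # 3x3 max pool, stride 2          -> 28
print(s)  # 28

# Channel count after concatenating the four Inception-A branches:
# 64 (1x1) + 128 (3x3) + 32 (5x5) + 32 (pool projection).
inception_a_channels = 64 + 128 + 32 + 32
print(inception_a_channels)  # 256
```

Since the module's 1×1/3×3/5×5 branches all preserve spatial size at stride 1, concatenation along the channel axis is well defined.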
Compared with the prior art, the present invention has the following significant beneficial effects:

The method provided by the invention first performs binary defect detection on the collected images of the aircraft skin spray-paint surface based on surface smoothness, achieving accurate localization of defect regions and precisely separating defective from normal areas of the painted surface. It then performs multi-class detection of the surface defects with a simplified GoogLeNet convolutional neural network model, realizing the classification and identification of the different defect categories. The overall framework addresses the limitation of ordinary deep-learning networks that can only detect or identify defects of known categories: it tolerates unknown defects well, is flexible and adaptable, and achieves high accuracy while improving computational efficiency and effectively shortening algorithm execution time. The method overcomes the drawbacks of traditional manual visual inspection; it is of great significance for reducing raw-material waste in aircraft production, improving protective performance, prolonging the service life of the aircraft skin, cutting later maintenance costs, and ensuring safety and reliability, and it fills a gap in research on surface spray-paint defect detection for large passenger aircraft. Compared with the existing technology, it achieves significant progress and unexpected results.
Brief Description of the Drawings

FIG. 1 shows defect images detected by the binary classification method of an embodiment of the present invention.

Detailed Description

The technical scheme of the present invention is further described in detail and in full below with reference to specific embodiments.

Embodiment
A method provided by the present invention for detecting and classifying surface defects of aircraft skin spray paint comprises the following steps:

S1. Collect images of the painted surface of the aircraft skin, for example, images of the painted skin of a commercially available C919 civil aircraft.

S2. Perform binary defect detection on the images collected in step S1 based on surface smoothness, as follows:

First apply Gaussian filtering, histogram equalization, Otsu adaptive threshold segmentation, and erosion to the image collected in step S1; then use blob analysis on the preprocessed image to find the defective pixels; then use an information-entropy function to estimate the surface smoothness of the defective pixels against the whole image, and set a surface-smoothness threshold (the threshold represents a count of defective pixels: when the number of defective pixels reaches it, the image is judged defective) to divide the images into defective and non-defective images, realizing binary defect detection. Defect images detected by the binary classification are shown in FIG. 1 and cover common defects such as bubbles, particles, debris, runs, and flanging; this step only judges these images to be defective, and the specific defect category is confirmed by the subsequent multi-class classification.

One key difficulty in detecting spray-paint surface defects on aircraft skin is the accurate localization of the defect region. The present invention performs block detection on the preprocessed image with blob analysis, which accurately locates the defect region of the image.

A second key difficulty is distinguishing the defective parts of the painted surface from the normal parts. This application introduces an image surface-smoothness evaluation method based on information entropy. Information entropy measures the richness of image information; using it to evaluate the surface smoothness of an image provides a standard for measuring spray-paint surface defects, so that defective parts can be accurately separated from normal parts. In this embodiment, the binary-classification test accuracy is 98.3%, which is high, and the binary results are accurate.
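The information-entropy measure referred to here is, in its standard Shannon form over the gray-level histogram (the patent does not spell out the exact estimator, so this is the usual definition):

```latex
H(I) = -\sum_{i=0}^{255} p_i \log_2 p_i,
\qquad p_i = \frac{n_i}{\sum_{j=0}^{255} n_j},
```

where $n_i$ is the number of pixels with gray level $i$. A smooth painted surface concentrates the histogram into few bins and drives $H(I)$ toward zero, while defects spread the histogram and raise $H(I)$.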
S3. Perform multi-class detection of the spray-paint surface defects based on the simplified GoogLeNet convolutional neural network model, specifically comprising the following steps:

S31. Data-set production: use the collected images of the aircraft skin spray-paint surface to build a data set, and divide it into a training-sample data set and a test-sample data set. During data-set production, professional spray-painting workers can first sort the collected images by defect type (common types such as bubbles, particles, debris, runs, and flanging), after which the sorted images are divided into the training-sample and test-sample data sets.

S32. Construct the GoogLeNet convolutional neural network model: the model comprises five levels; the first and second levels are ordinary convolutional layers, the third level consists of Inception-A and Inception-B modules, the fourth level consists of Inception-A, Inception-B, Inception-C, Inception-D, and Inception-E modules, and the fifth level consists of Inception-A and Inception-B modules.

S33. Construct the simplified GoogLeNet convolutional neural network model: remove the Inception-B and Inception-C modules from the model of step S32.

Therefore, the simplified GoogLeNet model constructed in this embodiment comprises, in order: a first convolutional layer, a second convolutional layer, a third Inception-A module, a fourth Inception-A module, a fourth Inception-D module, a fourth Inception-E module, and a fifth Inception-A module. The operations of each convolutional layer and module are as follows:
第一卷积层首先使用7*7的卷积核,卷积核个数为64,然后使用3*3核进行最大池化,滑动步长均为2;The first convolution layer first uses 7*7 convolution kernels, the number of convolution kernels is 64, and then uses 3*3 kernels for maximum pooling, and the sliding step size is 2;
第二卷积层首先使用3*3的卷积核,卷积核个数为192,滑动步长为1,然后使用3*3核进行最大池化,滑动步长为2;The second convolution layer first uses 3*3 convolution kernels, the number of convolution kernels is 192, and the sliding step size is 1, and then uses 3*3 kernels for maximum pooling, and the sliding step size is 2;
第三Inception-A模块包括四个分支,四个分支具体为:使用1*1的卷积核,卷积核个数为64,滑动步长为1;2)首先使用1*1的卷积核,卷积核个数为96,然后使用3*3的卷积核,卷积核个数为128,滑动步长均为1;3)首先使用1*1的卷积核,卷积核个数为16,然后使用5*5的卷积核,卷积核个数为32,滑动步长均为1;4)首先使用3*3核进行最大池化,然后使用1*1的卷积核,卷积核个数为32,滑动步长均为1;最后连结四个分支的输出结果,并继续下一步;The third Inception-A module includes four branches. The four branches are: use a 1*1 convolution kernel, the number of convolution kernels is 64, and the sliding step size is 1; 2) First, use a 1*1 convolution Kernel, the number of convolution kernels is 96, and then a 3*3 convolution kernel is used, the number of convolution kernels is 128, and the sliding step size is 1; 3) First, a 1*1 convolution kernel is used, the convolution kernel The number is 16, and then 5*5 convolution kernels are used, the number of convolution kernels is 32, and the sliding step size is 1; 4) First, use 3*3 kernels for maximum pooling, and then use 1*1 volumes Product kernel, the number of convolution kernels is 32, and the sliding step size is 1; finally connect the output results of the four branches, and continue to the next step;
第四Inception-A模块与第三Inception-A模块相同;The fourth Inception-A module is the same as the third Inception-A module;
第四Inception-D模块与第三Inception-A模块相同;The fourth Inception-D module is the same as the third Inception-A module;
第四Inception-E模块包括五个分支,五个分支具体为:1)使用1*1的卷积核,卷积核个数为64,滑动步长为1;2)首先使用1*1的卷积核,卷积核个数为96,然后使用3*3的卷积核,卷积核个数为128,滑动步长均为1;3)首先使用1*1的卷积核,卷积核个数为16,然后使用5*5的卷积核,卷积核个数为32,滑动步长均为1;4)首先使用3*3核进行最大池化,然后使用1*1的卷积核,卷积核个数为32,滑动步长均为1;5)首先使用5*5核进行平均池化,滑动步长为3,然后使用1*1的卷积核,卷积核个数为32,滑动步长均为1,然后使用两次全连接层,然后使用Softmax激活函数辅助分类;最后连结前五个分支的输出结果,并继续下一步;The fourth Inception-E module includes five branches. The five branches are as follows: 1) Use a 1*1 convolution kernel, the number of convolution kernels is 64, and the sliding step size is 1; 2) First, use a 1*1 convolution kernel Convolution kernel, the number of convolution kernels is 96, and then a 3*3 convolution kernel is used, the number of convolution kernels is 128, and the sliding step size is 1; 3) First, a 1*1 convolution kernel is used, the volume The number of accumulation kernels is 16, then 5*5 convolution kernels are used, the number of convolution kernels is 32, and the sliding step size is 1; 4) First use 3*3 kernels for maximum pooling, and then use 1*1 The number of convolution kernels is 32, and the sliding step size is 1; 5) First use 5*5 kernels for average pooling, the sliding step size is 3, and then use 1*1 convolution kernels, the volume The number of product kernels is 32, the sliding step size is 1, and then the fully connected layer is used twice, and then the Softmax activation function is used to assist the classification; finally, the output results of the first five branches are connected, and the next step is continued;
The fifth Inception-A module is identical to the third Inception-A module.
In each convolutional layer and module, the ReLU activation function is applied after every convolution kernel and pooling operation.
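The ReLU rule is simple enough to state directly: every convolution or pooling output x is mapped to max(x, 0). A one-line Python sketch (the sample values are illustrative):

```python
def relu(x):
    # ReLU, applied after every convolution and pooling operation:
    # negative activations are zeroed, positive ones pass through.
    return x if x > 0.0 else 0.0

feature_values = [-1.5, 0.0, 2.3, -0.2, 4.0]
print([relu(v) for v in feature_values])  # [0.0, 0.0, 2.3, 0.0, 4.0]
```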
S34. Train the simplified GoogLeNet convolutional neural network model: the images in the training sample data set are fed into the simplified GoogLeNet convolutional neural network model for feature recognition, yielding the optimal simplified GoogLeNet convolutional neural network model.
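Step S34 follows the usual supervised-training pattern: iterate over the training samples and repeatedly update the model parameters to reduce a loss. A real run would train the simplified GoogLeNet with a deep-learning framework; the toy sketch below only illustrates that loop structure with a one-parameter least-squares model, so every name and value in it is illustrative rather than taken from the patent:

```python
def train(samples, lr=0.1, epochs=100):
    # Stochastic gradient descent on a toy model y = w * x:
    # for each (x, y) pair, step w against the gradient of (w*x - y)^2.
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # consistent with y = 2x
print(round(train(data), 3))  # 2.0
```

The "optimal" model of step S34 corresponds to the converged parameters at the end of this loop; in practice convergence is judged on a validation metric rather than a closed-form target.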
S35. Test the simplified GoogLeNet convolutional neural network model: the test sample data set is fed into the trained simplified GoogLeNet convolutional neural network model to verify its accuracy.
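The accuracy verified in step S35 is simply the fraction of test images whose predicted class matches the ground-truth label. A minimal sketch (the defect-class strings are illustrative):

```python
def accuracy(predicted, actual):
    # Fraction of test samples whose predicted class equals the label.
    assert len(predicted) == len(actual)
    correct = sum(1 for p, a in zip(predicted, actual) if p == a)
    return correct / len(actual)

preds  = ["bubble", "particle", "flow", "bubble",   "garbage"]
labels = ["bubble", "particle", "flow", "flanging", "garbage"]
print(accuracy(preds, labels))  # 0.8
```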
S36. Multi-class detection: the defect images produced by the binary classification in step S2 are fed into the tested simplified GoogLeNet convolutional neural network model, which performs multi-class detection of spray-paint surface defects on the aircraft skin. This step further determines the specific defect type of each binary-classified defect image from step S2, for example whether the image shows a bubble defect or a particle, garbage, flow, or flanging defect.
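Once the network emits a probability per defect class, step S36 reduces to picking the most probable class. A sketch assuming a fixed class ordering (the ordering below is an assumption, not given in the patent):

```python
# Hypothetical class ordering for the five defect types named in the text.
DEFECT_CLASSES = ["bubble", "flanging", "particle", "flow", "garbage"]

def classify(probabilities):
    # Pick the defect class with the highest softmax probability (argmax).
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    return DEFECT_CLASSES[best]

print(classify([0.02, 0.01, 0.90, 0.05, 0.02]))  # particle
```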
A third key difficulty in detecting spray-paint surface defects on aircraft skin is the classification and identification of the different defect categories. The present invention adopts a simplified GoogLeNet network based on Inception-V4 that not only achieves classification and identification of the different defect categories; the overall framework also addresses the limitation of ordinary deep-learning networks, which can intelligently detect or identify only defects of known categories. The framework is highly tolerant of unknown defects, offers considerable flexibility and adaptability, and achieves high accuracy, improving computational efficiency while preserving the network's learning performance and effectively shortening the algorithm execution time. In this embodiment, the multi-class test accuracy is 99.7%, which is high, and the multi-class results are accurate.
S4. Output the multi-class classification results, completing the detection and classification of defects on the spray-painted surface of the aircraft skin.
The method provided by the present invention for detecting and classifying spray-paint surface defects on aircraft skin combines a binary classification method based on traditional image defect detection with a multi-class defect classification framework based on the deep-learning simplified GoogLeNet convolutional neural network model. As shown in Figure 1, the classification results cover five defect types: bubbles, flanging, particles, flow, and garbage. Combining binary classification with multi-class classification effectively improves the computational efficiency and robustness of the algorithm, thereby shortening the algorithm execution time and improving the accuracy of aircraft skin surface defect detection. The method can effectively satisfy the requirements of intelligent detection and classification of spray-paint surface defects on aircraft skin, and is of great significance for reducing raw-material waste in aircraft production, improving protective performance, prolonging the service life of the aircraft skin, reducing later maintenance costs, and ensuring safety and reliability.
Finally, it should be pointed out that the above are only some preferred embodiments of the present invention and should not be construed as limiting its protection scope; any non-essential improvements and adjustments made by those skilled in the art based on the above content of the present invention fall within the protection scope of the present invention.
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011626199.0A CN112669292B (en) | 2020-12-31 | 2020-12-31 | A method for detecting and classifying surface defects of aircraft skin spray paint |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011626199.0A CN112669292B (en) | 2020-12-31 | 2020-12-31 | A method for detecting and classifying surface defects of aircraft skin spray paint |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112669292A CN112669292A (en) | 2021-04-16 |
CN112669292B true CN112669292B (en) | 2022-09-30 |
Family
ID=75412543
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011626199.0A Active CN112669292B (en) | 2020-12-31 | 2020-12-31 | A method for detecting and classifying surface defects of aircraft skin spray paint |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112669292B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113870199B (en) * | 2021-09-16 | 2025-02-11 | 江苏航空职业技术学院 | A recognition method for aircraft skin defect detection |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106570516A (en) * | 2016-09-06 | 2017-04-19 | 国网重庆市电力公司电力科学研究院 | Obstacle recognition method using convolution neural network |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101719273A (en) * | 2009-10-21 | 2010-06-02 | 苏州有色金属研究院有限公司 | On-line self-adaptation extraction method of metallurgy strip surface defect based on one-dimension information entropy |
CN108682003B (en) * | 2018-04-04 | 2021-10-08 | 睿视智觉(厦门)科技有限公司 | Product quality detection method |
CN109886925A (en) * | 2019-01-19 | 2019-06-14 | 天津大学 | An aluminum surface defect detection method combining active learning and deep learning |
CN111582257A (en) * | 2019-02-15 | 2020-08-25 | 波音公司 | Method, device and system for detecting object to be detected |
CN111415325B (en) * | 2019-11-11 | 2023-04-25 | 杭州电子科技大学 | A Copper Foil Substrate Defect Detection Method Based on Convolutional Neural Network |
CN111340754B (en) * | 2020-01-18 | 2023-08-25 | 中国人民解放军国防科技大学 | A Method Based on Detection and Classification of Aircraft Skin Surface Defects |
CN111798419A (en) * | 2020-06-27 | 2020-10-20 | 上海工程技术大学 | A kind of metal spray paint surface defect detection method |
2020-12-31: CN CN202011626199.0A patent/CN112669292B/en, status: Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106570516A (en) * | 2016-09-06 | 2017-04-19 | 国网重庆市电力公司电力科学研究院 | Obstacle recognition method using convolution neural network |
Also Published As
Publication number | Publication date |
---|---|
CN112669292A (en) | 2021-04-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108074231B (en) | A method for surface defect detection of magnetic sheet based on convolutional neural network | |
CN109239102B (en) | CNN-based flexible circuit board appearance defect detection method | |
CN109934811B (en) | Optical element surface defect detection method based on deep learning | |
CN111798419A (en) | A kind of metal spray paint surface defect detection method | |
CN105118044B (en) | A kind of wheel shape cast article defect automatic testing method | |
CN111402203A (en) | Fabric surface defect detection method based on convolutional neural network | |
CN104766097B (en) | Surface of aluminum plate defect classification method based on BP neural network and SVMs | |
CN111852792B (en) | Fan blade defect self-diagnosis positioning method based on machine vision | |
CN106683099A (en) | Product surface defect detection method | |
CN103778624A (en) | Fabric defect detection method based on optical threshold segmentation | |
CN110261391A (en) | A kind of LED chip appearance detection system and method | |
CN110349125A (en) | A kind of LED chip open defect detection method and system based on machine vision | |
CN102680494B (en) | Based on arcuation face, the polishing metal flaw real-time detection method of machine vision | |
CN112669292B (en) | A method for detecting and classifying surface defects of aircraft skin spray paint | |
CN112529884A (en) | Welding spot quality evaluation method based on indentation characteristic image recognition | |
CN110175614A (en) | A kind of detection method of printed circuit board via hole inner wall quality | |
CN117974629B (en) | Online defect detection method and device for copper-clad plate production line, storage medium and product | |
CN114677362A (en) | Surface defect detection method based on improved YOLOv5 | |
CN113592853A (en) | Method for detecting surface defects of protofilament fibers based on deep learning | |
CN103679183A (en) | Defect identification method for plain weave gray cloth | |
CN116777836A (en) | A multimodal data-driven quality inspection method for injection molding process products | |
CN115423741A (en) | Structural defect detection method based on deep learning | |
CN115866502A (en) | Microphone part surface defect online detection process | |
CN102313740B (en) | Solar panel crack detection method | |
CN114596296A (en) | A high-sensitivity system and method for identifying end face defects of hot-rolled steel coils |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||