WO2020119103A1 - Deep-learning-based intelligent recognition method for aeroengine borescope image damage - Google Patents

Deep-learning-based intelligent recognition method for aeroengine borescope image damage (基于深度学习的航空发动机孔探图像损伤智能识别方法)

Info

Publication number
WO2020119103A1
WO2020119103A1 · PCT/CN2019/095290 · CN2019095290W
Authority
WO
WIPO (PCT)
Prior art keywords
damage
image
neural network
convolutional neural
aeroengine
Prior art date
Application number
PCT/CN2019/095290
Other languages
English (en)
French (fr)
Inventor
程琳
Original Assignee
程琳
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 程琳
Publication of WO2020119103A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition

Definitions

  • The invention belongs to the technical field of aeroengine damage identification, and particularly relates to a deep-learning-based intelligent recognition method for aeroengine borescope image damage.
  • The engine, as a core component of an aircraft, has an important impact on flight safety.
  • Because the internal temperature and pressure of a running engine are high, damage such as cracks and burn-through often occurs in the internal structure of the engine. If such damage is not discovered in time, it poses a major threat to civil aviation flight safety. Civil aviation companies therefore use a variety of detection methods to discover hidden safety hazards in engine structures in a timely manner.
  • Engine borescope inspection is one of the important detection methods. Inspection technicians insert a borescope camera into the engine, take photos and videos of its interior, search those photos and videos for cracks, burn-through and other damage, and finally compile a borescope inspection report that guides subsequent repair and maintenance work.
  • However, borescope inspection is time-consuming and labor-intensive: inspecting one engine often takes tens of hours, and its accuracy is limited by the subjective judgment of the inspection personnel. With the development of China's economy and the acceleration of urbanization, domestic and international routes have grown rapidly in recent years. Owing to its limited efficiency and precision and its high labor cost, traditional borescope inspection cannot meet the current rising demand for engine inspection.
  • The present invention provides a deep-learning-based intelligent recognition method for aeroengine borescope image damage, which includes: obtaining the network weights of a fully convolutional neural network that meets a preset accuracy requirement on a test set, where the test set is a plurality of annotated aeroengine borescope images, an annotated aeroengine borescope image being an aeroengine borescope image in which a tester has marked a damage region and the damage category corresponding to that region; loading the network weights to initialize the fully convolutional neural network; obtaining an aeroengine borescope image; preprocessing the aeroengine borescope image to obtain a preprocessed image that meets the input requirements of the fully convolutional neural network; and using the initialized fully convolutional neural network to process the preprocessed image to obtain the damage region of the aeroengine borescope image and the damage category corresponding to that region.
  • Using the initialized fully convolutional neural network to process the preprocessed image to obtain the damage region of the aeroengine borescope image and its corresponding damage category specifically includes: using the convolution structure of the initialized fully convolutional neural network to perform feature extraction on the preprocessed image to obtain an image feature tensor; using the deconvolution structure of the initialized fully convolutional neural network to up-dimension the image feature tensor to obtain, for each pixel in the aeroengine borescope image, the probability of each damage category; obtaining the damage category of each pixel from those probabilities; and, from the damage category of each pixel, obtaining the damage region of the aeroengine borescope image and the damage category corresponding to that region.
  • Obtaining the network weights of the fully convolutional neural network that meets the preset accuracy requirement on the test set specifically includes: obtaining a plurality of annotated aeroengine borescope images; dividing them proportionally into a test set and a training set and preprocessing the annotated images in the training set; constructing and initializing a fully convolutional neural network; training the initialized fully convolutional neural network on the preprocessed training set to obtain trained network weights; and using the test set to verify whether the fully convolutional neural network updated with the trained network weights is valid; if the verification passes, the trained network weights are used as the network weights of the fully convolutional neural network that meets the preset accuracy requirement on the test set.
  • The method further includes: performing data augmentation on each annotated aeroengine borescope image to obtain an augmented borescope image; correspondingly, the images in the training set include the annotated aeroengine borescope images and the augmented images corresponding to them.
  • Constructing and initializing the fully convolutional neural network specifically includes: constructing the convolution structure of the fully convolutional neural network, the convolution structure being used to perform feature extraction on an aeroengine borescope image to obtain an image feature tensor; constructing the deconvolution structure of the fully convolutional neural network, the deconvolution structure being used to up-dimension the received image feature tensor to obtain, for each pixel in the borescope image, the probability of each damage category; initializing the convolution structure with pre-trained weights, the pre-trained weights being obtained by training the convolution structure on a public image data set; and initializing the deconvolution structure.
  • The convolution structure includes a plurality of convolution blocks, each consisting of convolution layers equipped with a first activation function and a pooling layer; the deconvolution structure includes a deconvolution layer and a convolution layer with a second activation function; the first activation function and the second activation function are different activation functions.
  • The convolution structure includes five convolution blocks, each of which is two consecutive convolution layers with ReLU activation functions followed by a pooling layer.
  • The deconvolution structure includes a deconvolution layer and a convolution layer with a sigmoid activation function.
  • The preprocessing of the aeroengine borescope image specifically includes: scaling the image so that its size conforms to the size input requirement of the fully convolutional neural network; and normalizing the scaled image so that the mean of all its pixels becomes 0 and the variance becomes 1.
  • Training the initialized fully convolutional neural network on the preprocessed training set to obtain the trained network weights specifically includes: dividing the training set into multiple batches, each containing N annotated aeroengine borescope images; and repeating the training step on the initialized fully convolutional neural network while traversing the batches, until the value of the objective function meets a preset condition, taking the network weights corresponding to the value of the objective function that meets the preset condition as the trained network weights.
  • The training step specifically includes: predicting, for each pixel in each annotated aeroengine borescope image in a batch, the probability of each damage category; obtaining the predicted damage category of each pixel from those probabilities; obtaining a value representing the gap between the predicted damage category of each pixel and the damage category marked by the tester; taking as the objective function the average, over all pixels of all annotated images in the batch, of the value representing that gap; and updating the network weights based on the back-propagation method.
  • The test set is used to verify whether the fully convolutional neural network updated with the trained network weights is valid; if the verification passes, the trained network weights are used as the network weights of the fully convolutional neural network that meets the preset accuracy requirement on the test set.
  • The fully convolutional neural network is used to intelligently identify damage regions in aeroengine borescope images, which effectively improves the efficiency and accuracy of the existing manual recognition method. During borescope inspection it can help the inspection personnel locate damage and improve inspection efficiency, and it can also help them find damage that is difficult to notice or is often overlooked by humans (that is, it can help identify damage regions missed by manual inspection), further improving the accuracy of the inspection process and reducing the influence of subjective factors. It can work efficiently for long periods, reduce manpower consumption, lower the probability that fatigued staff misjudge or miss damage, and improve recognition accuracy.
  • FIG. 1 is a schematic flowchart of a deep-learning-based intelligent recognition method for aeroengine borescope image damage according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of another deep-learning-based intelligent recognition method for aeroengine borescope image damage provided by an embodiment of the present invention.
  • The damage categories include both damage categories proper, such as crack and burn-through, and a no-damage category, that is, no damage.
  • As shown in FIG. 1, an embodiment of the present invention provides a deep-learning-based intelligent recognition method for aeroengine borescope image damage, which includes the following steps:
  • Step 101: Obtain the network weights of a fully convolutional neural network that meets the preset accuracy requirement on the test set, where the test set is a plurality of annotated aeroengine borescope images, an annotated image being an aeroengine borescope image in which the tester has marked damage regions and the damage categories corresponding to them.
  • Step 102: Load the network weights to initialize the fully convolutional neural network.
  • Step 103: Acquire the aeroengine borescope image.
  • Step 104: Preprocess the aeroengine borescope image to obtain a preprocessed image that meets the input requirements of the fully convolutional neural network.
  • Step 105: Use the initialized fully convolutional neural network to process the preprocessed image to obtain the damage region of the aeroengine borescope image and the damage category corresponding to that region.
  • This embodiment uses a fully convolutional neural network to intelligently identify the damage regions and their corresponding damage categories in aeroengine borescope images, which effectively improves the efficiency and accuracy of the existing manual recognition method.
  • It can not only help the inspection personnel locate damage and improve inspection efficiency, but also help them find damage that is difficult to notice or is often overlooked (that is, it can help identify damage regions missed by manual inspection), further improving the accuracy of borescope inspection and reducing the influence of subjective factors; it can work efficiently for long periods, reduce manpower consumption, lower the probability that fatigued staff misjudge or miss damage, and improve recognition accuracy.
  • As shown in FIG. 2, another embodiment of the present invention provides a deep-learning-based intelligent recognition method for aeroengine borescope image damage, which includes the following steps:
  • Step 201 Obtain a test set and a training set, preprocess the images in the training set, and obtain images that meet the input requirements of the fully convolutional neural network.
  • First, a plurality of aeroengine borescope images are obtained and the damage regions and their corresponding damage categories are marked in them.
  • A marked image is called an annotated aeroengine borescope image.
  • The tester obtains multiple aeroengine borescope images through on-site shooting or by collecting historical images, and marks the damage region and its corresponding damage category in each image using a geometric shape.
  • The geometric shape can be a polygon or another shape.
  • The tester, a professional borescope technician, or an expert in aeroengine borescope image damage recognition marks the vertices of the polygon one by one and connects the vertices in order to obtain a polygonal damage region.
  • The marked damage category may be, for example, crack or burn-through.
  • The annotated aeroengine borescope images are divided proportionally into a test set and a training set; that is, one part of the images forms the test set and the other part forms the training set.
  • The ratio may be 80% and 20%, or another ratio, which is not limited in this embodiment.
  • Each image in the training set is preprocessed to obtain an image that meets the input requirements of the fully convolutional neural network described below. First, the annotated aeroengine borescope image is scaled so that its size meets the size input requirement of the fully convolutional neural network.
  • The fully convolutional neural network requires, for example, that the image size be a multiple of 32, e.g. 576 × 768 expressed as height × width.
  • The scaled image is then normalized so that the mean of all its pixels becomes 0 and the variance becomes 1.
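  • The two preprocessing operations above can be sketched as follows. This is an illustrative sketch only: the patent does not fix a resampling algorithm, so nearest-neighbour resizing is assumed here, and the function name `preprocess` is hypothetical.

```python
import numpy as np

def preprocess(image, target_hw=(576, 768)):
    """Scale a borescope image to a multiple-of-32 size, then normalize
    to zero mean and unit variance over all pixels (a sketch; nearest-
    neighbour resampling is an assumption, not specified by the patent)."""
    h, w = target_hw
    assert h % 32 == 0 and w % 32 == 0, "FCN input size must be a multiple of 32"
    src_h, src_w = image.shape[:2]
    # Nearest-neighbour resize via index mapping (stand-in for a real resampler).
    rows = np.arange(h) * src_h // h
    cols = np.arange(w) * src_w // w
    scaled = image[rows][:, cols].astype(np.float64)
    # Normalize: mean 0, variance 1.
    return (scaled - scaled.mean()) / scaled.std()
```

After this step the image statistics match the network's expected input distribution regardless of the original exposure of the borescope photo.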
  • Before training, data augmentation may be performed on each annotated aeroengine borescope image in the training set to obtain augmented borescope images.
  • In that case the training set includes the annotated aeroengine borescope images and the augmented images corresponding to them.
  • The augmentation may be flipping the annotated image, for example flipping horizontally and/or vertically and/or in both directions.
  • Rotation may also be used, which is not limited in this embodiment.
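  • The flip augmentation described above can be sketched as follows; the pixel-level annotation mask must be flipped together with the image so the marked damage regions stay aligned. The function name and the image/mask pairing are illustrative.

```python
import numpy as np

def flip_augment(image, mask):
    """Return the original image/annotation pair plus its horizontal,
    vertical, and combined flips, flipping image and mask identically."""
    pairs = [(image, mask)]
    pairs.append((image[:, ::-1], mask[:, ::-1]))        # horizontal flip
    pairs.append((image[::-1, :], mask[::-1, :]))        # vertical flip
    pairs.append((image[::-1, ::-1], mask[::-1, ::-1]))  # both directions
    return pairs
```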
  • Step 202: Construct and initialize a fully convolutional neural network.
  • First, the convolution structure of the fully convolutional neural network is constructed.
  • The convolution structure is used to extract features from aeroengine borescope images to obtain image feature tensors.
  • The convolution structure includes multiple convolution blocks, each consisting of convolution layers equipped with the first activation function and a pooling layer.
  • The convolution structure is described below taking five convolution blocks with two convolution layers each as an example.
  • That is, the convolution structure includes five convolution blocks, each a conv+relu+conv+relu+pooling structure: two consecutive convolution layers with ReLU activations followed by a pooling layer.
  • The kernel size and stride of each convolution layer are 3 × 3 and 1 respectively, and the kernel size and stride of each pooling layer are 2 × 2 and 2 respectively.
  • The first activation function is the ReLU activation function.
  • The five convolution blocks reduce the input image size by a factor of 32. It should be noted that the convolution structure can be adjusted according to the actual situation; this embodiment limits neither the number of convolution blocks nor their internal structure.
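  • The 32× reduction can be checked directly: the 3 × 3 stride-1 convolutions leave the spatial size unchanged, so only the five 2 × 2 stride-2 pooling layers shrink it, by 2⁵ = 32 overall. A minimal numpy sketch of the pooling path:

```python
import numpy as np

def max_pool_2x2(x):
    """2 x 2 max pooling with stride 2, as at the end of each block."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Five blocks, each ending in one pooling layer: 576 x 768 -> 18 x 24.
feat = np.random.rand(576, 768)
for _ in range(5):
    feat = max_pool_2x2(feat)
```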
  • Next, the deconvolution structure of the fully convolutional neural network is constructed.
  • The deconvolution structure is used to up-dimension the image feature tensor to obtain, for each pixel in the annotated aeroengine borescope image, the probability of each damage category.
  • The deconvolution structure includes a deconvolution layer and a convolution layer with a second activation function; the second activation function and the first activation function are different activation functions.
  • The deconvolution structure is described below taking one convolution layer as an example.
  • That is, the deconvolution structure includes a deconvolution layer and a convolution layer with a sigmoid activation function.
  • The kernel size and stride of the deconvolution layer are 64 × 64 and 32 respectively, the kernel size and stride of the convolution layer are 1 × 1 and 1 respectively, and the second activation function is the sigmoid activation function.
  • The deconvolution structure increases the input size by a factor of 32, restoring the original image size. It should be noted that the deconvolution structure can be adjusted according to the actual situation; this embodiment limits neither the number of its layers nor its internal structure.
  • Initializing the fully convolutional neural network includes initializing its convolution structure and initializing its deconvolution structure.
  • The convolution structure can be initialized by setting its weights with random noise.
  • The deconvolution layer of the deconvolution structure can be initialized by setting its weights with a bilinear interpolation transformation matrix.
  • The convolution layer of the deconvolution structure can be initialized by setting its weights with random noise; the random noise can be normally distributed.
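  • The bilinear interpolation matrix mentioned above can be built as follows. This is the standard bilinear-upsampling initialization used for FCN deconvolution layers, offered here as an assumed realization; kernel size 64 matches the 64 × 64, stride-32 deconvolution layer of this embodiment.

```python
import numpy as np

def bilinear_kernel(size=64):
    """Bilinear-interpolation weight matrix for initializing a
    deconvolution layer (peak at the kernel center, linear falloff)."""
    factor = (size + 1) // 2
    center = factor - 1 if size % 2 == 1 else factor - 0.5
    og = np.ogrid[:size, :size]
    return ((1 - abs(og[0] - center) / factor) *
            (1 - abs(og[1] - center) / factor))
```

Initialized this way, the deconvolution layer starts out as plain bilinear upsampling and is refined during training.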
  • Preferably, the weights of the convolution structure are initialized with pre-trained weights.
  • The pre-trained weights are obtained by training the convolution structure on a public image data set.
  • The public image data set may be, for example, ImageNet, a large-scale image data set used for research on visual object recognition algorithms.
  • Step 203: Use the preprocessed training set to train the initialized fully convolutional neural network to obtain the trained network weights.
  • All the annotated aeroengine borescope images in the preprocessed training set are divided into multiple batches, each containing N images; that is, the preprocessed training set is divided into batches of size N, where N is a natural number greater than or equal to 1.
  • Then a training step is performed, which includes: predicting, for each pixel in each annotated aeroengine borescope image in a batch, the probability of each damage category.
  • The damage category of each pixel is obtained from these probabilities.
  • The damage category obtained at this point is the predicted damage category; for example, the category with the highest probability is selected as the damage category of the pixel.
  • Then a value indicating the difference between the predicted damage category and the marked damage category of each pixel is obtained. In application, this value can be computed as a cross-entropy; the marked damage category is the damage category marked by the tester.
  • The average, over all pixels of all annotated aeroengine borescope images in the batch, of the value representing the difference between the predicted and marked damage categories is taken as the objective function.
  • The objective function at this point can be called the cross-entropy function.
  • The gradient of each weight in the fully convolutional neural network is then calculated from the objective function; that is, based on the objective function, the back-propagation method is used to compute the gradient of each weight, and an optimization method updates (that is, modifies or adjusts) the weight values of the fully convolutional neural network according to the computed gradients.
  • The optimization method is an optimization method from machine learning, which may be stochastic gradient descent, RMSProp, or Adam.
  • The objective function L can be expressed as:

        L = −(1/(N·H·W)) Σ_{n=1}^{N} Σ_{h=1}^{H} Σ_{w=1}^{W} Σ_{c=1}^{C} y_{n,h,w,c} · log(p_{n,h,w,c})

  • where N is the number of images in each batch, H is the height of the image, W is the width of the image, C is the number of channels of the image (equal to the number of damage categories), y_{n,h,w,c} is 1 if the tester marked pixel (h, w) of the n-th image as damage category c and 0 otherwise, and p_{n,h,w,c} is the probability the network predicts for that pixel and category.
  • The above training step is performed on each of the batches in turn until the value of the objective function meets the preset condition, and the network weights corresponding to the value of the objective function that meets the preset condition are taken as the trained network weights.
  • The preset condition may be that the value of the objective function no longer decreases, or that it is less than a preset objective function value, such as 1e-5 or 1e-6.
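  • The cross-entropy objective described above can be sketched in a few lines, assuming the tester's annotations are encoded one-hot per pixel (the function name is illustrative):

```python
import numpy as np

def cross_entropy_objective(probs, labels):
    """Average pixel-wise cross-entropy over a batch, matching L above.
    probs:  (N, H, W, C) predicted per-category pixel probabilities.
    labels: (N, H, W, C) one-hot tester annotations."""
    eps = 1e-12                                               # guard log(0)
    per_pixel = -(labels * np.log(probs + eps)).sum(axis=-1)  # (N, H, W)
    return per_pixel.mean()                                   # mean over N*H*W
```

A perfect prediction drives the value toward 0, while an uninformative uniform prediction over C categories yields log C, which is why "no longer decreasing" is a sensible stopping condition.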
  • Step 204: Use the test set to verify whether the fully convolutional neural network updated with the trained network weights is valid. If the verification passes, the trained network weights are used as the network weights of the fully convolutional neural network that meets the preset accuracy requirement on the test set.
  • Specifically, the test set and preset evaluation indexes are used to verify whether the fully convolutional neural network updated with the trained network weights is effective.
  • The evaluation index can be the pixel accuracy PA and the mean intersection-over-union mIOU described below; other evaluation indicators can also be used to measure the validity of the fully convolutional neural network according to different application needs, such as precision or recall computed from the per-pixel prediction results.
  • The verification process is described in detail below taking the pixel accuracy PA and the mean intersection-over-union mIOU as examples.
  • The pixel accuracy PA (Pixel Accuracy) and the mean intersection-over-union mIOU (mean Intersection over Union) can be expressed as:

        PA = Σ_i n_ii / Σ_i t_i

        mIOU = (1/n_cl) · Σ_i n_ii / (t_i + Σ_j n_ji − n_ii)

  • where n_ab is the number of pixels of damage category a that the fully convolutional neural network predicts as damage category b, t_i is the number of pixels marked by the tester as damage category i, n_cl is the number of damage categories included in the labels, and Σ_j n_ji is the number of all pixels predicted as the i-th damage category.
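  • Both indexes follow from a single confusion matrix over the per-pixel predictions, which this sketch computes with numpy (the function name is illustrative; categories absent from both prediction and label would need a zero-division guard omitted here for brevity):

```python
import numpy as np

def pa_miou(pred, gt, n_cl):
    """Pixel accuracy PA and mean IoU mIOU from integer category maps,
    following the definitions above."""
    # conf[a, b] = number of pixels of true category a predicted as b.
    conf = np.zeros((n_cl, n_cl), dtype=np.int64)
    np.add.at(conf, (gt.ravel(), pred.ravel()), 1)
    t = conf.sum(axis=1)              # t_i: pixels labeled category i
    n_ii = np.diag(conf)              # correctly predicted pixels per category
    pa = n_ii.sum() / t.sum()
    iou = n_ii / (t + conf.sum(axis=0) - n_ii)
    return pa, iou.mean()
```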
  • Each annotated aeroengine borescope image in the test set is preprocessed into an image that meets the input requirements of the fully convolutional neural network, and the network weights are updated with the trained network weights.
  • The network then predicts, for each pixel in the annotated borescope image, the probability of each damage category, from which the predicted damage category of each pixel is obtained.
  • For this step please refer to the relevant content of steps 201 to 203 above, which is not repeated here.
  • From the marked damage category and predicted damage category of each pixel, the PA and mIOU of each annotated borescope image are calculated, and it is determined whether the average PA and mIOU over all images in the test set meet their respective preset threshold requirements, which are usually set by the tester; if they do, the network weights at this point are used as the network weights of the fully convolutional neural network that meets the preset accuracy requirement on the test set, that is, the fully convolutional neural network is available.
  • If they do not, the hyperparameters can be adjusted and the network retrained. Hyperparameters include the batch size, the choice of optimization method, and the parameters corresponding to the optimization method; for example, N may be adjusted from 2 to 3, the optimization method changed from stochastic gradient descent to Adam, and the parameters corresponding to the optimization method modified accordingly.
  • Step 205: Load the network weights to initialize the fully convolutional neural network.
  • Step 206: Obtain an aeroengine borescope image, and preprocess it to obtain a preprocessed image that meets the input requirements of the fully convolutional neural network.
  • For this step please refer to the relevant content of steps 201 to 203 above, which is not repeated here.
  • Step 207: Use the initialized fully convolutional neural network to process the preprocessed image to obtain the damage region of the aeroengine borescope image and the damage category corresponding to that region.
  • Specifically, the convolution structure of the initialized fully convolutional neural network is used to perform feature extraction on the preprocessed image to obtain the image feature tensor; the deconvolution structure of the initialized fully convolutional neural network is used to up-dimension the image feature tensor to obtain, for each pixel in the aeroengine borescope image, the probability of each damage category; and from these probabilities the damage category of each pixel is obtained.
  • For this step please refer to the relevant content of steps 201 to 203 above, which is not repeated here.
  • From the damage category of each pixel, the damage region of the aeroengine borescope image and the damage category corresponding to that region are obtained. For example: from the damage category of each pixel, the distribution of the damage categories over the borescope image is obtained, and the pixels of the same damage category are extracted to obtain the region corresponding to that category.
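  • The per-pixel decision and region extraction of Step 207 can be sketched as follows. The sketch assumes category 0 is "no damage" and represents each region by its bounding box; both choices are illustrative, as the patent leaves the region representation open.

```python
import numpy as np

def damage_regions(probs):
    """Take the most probable category per pixel, then extract the pixel
    set of each damage category as a bounding box.
    probs: (H, W, C) per-pixel category probabilities."""
    labels = probs.argmax(axis=-1)            # (H, W) category map
    regions = {}
    for c in range(1, probs.shape[-1]):       # skip the no-damage category
        ys, xs = np.nonzero(labels == c)
        if ys.size:                           # bounding box (ymin, xmin, ymax, xmax)
            regions[c] = (ys.min(), xs.min(), ys.max(), xs.max())
    return labels, regions
```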
  • This embodiment uses a fully convolutional neural network to intelligently identify the damage regions and their corresponding damage categories in aeroengine borescope images, which effectively improves the efficiency and accuracy of the existing manual recognition method.
  • It can not only help the inspection personnel locate damage and improve inspection efficiency, but also help them find damage that is difficult to notice or is often overlooked (that is, it can help identify damage regions missed by manual inspection), further improving the accuracy of borescope inspection and reducing the influence of subjective factors. It can work efficiently for long periods, reduce manpower consumption, lower the probability that fatigued staff misjudge or miss damage, and improve recognition accuracy.
  • An embodiment of the present invention also provides a deep-learning-based intelligent recognition device for aeroengine borescope image damage, which is used to perform the above intelligent recognition method and specifically includes:
  • a first acquisition module, used to acquire the network weights of the fully convolutional neural network that meets the preset accuracy requirement on the test set, where the test set is a plurality of annotated aeroengine borescope images, an annotated image being an aeroengine borescope image in which the tester has marked damage regions and the damage categories corresponding to them;
  • a second acquisition module, used to acquire the aeroengine borescope image;
  • a preprocessing module, used to preprocess the aeroengine borescope image to obtain a preprocessed image that meets the input requirements of the fully convolutional neural network; and
  • a fully convolutional neural network module, used to load the network weights to initialize the fully convolutional neural network, and to use the initialized network to process the preprocessed image to obtain the damage region of the aeroengine borescope image and the damage category corresponding to that region.
  • When processing the preprocessed image with the initialized fully convolutional neural network to obtain the damage region and its corresponding damage category, the fully convolutional neural network module is specifically used to: perform feature extraction on the preprocessed image with the convolution structure of the initialized network to obtain the image feature tensor; up-dimension the image feature tensor with the deconvolution structure of the initialized network to obtain, for each pixel in the aeroengine borescope image, the probability of each damage category; obtain the damage category of each pixel from those probabilities; and, from the damage category of each pixel, obtain the damage region of the aeroengine borescope image and the damage category corresponding to that region.
  • For the specific working processes of the first acquisition module, the second acquisition module, the preprocessing module, and the fully convolutional neural network module, please refer to the relevant descriptions of steps 101 to 105 and steps 201 to 207 in the above embodiments, which are not repeated here.
  • The embodiment of the present invention uses a fully convolutional neural network to intelligently identify the damage regions in aeroengine borescope images, which effectively improves the efficiency and accuracy of the existing manual recognition method. It can help the inspection personnel locate damage and improve inspection efficiency, and it can also help them find damage that is difficult to notice or is often overlooked (that is, it can help identify damage regions missed by manual inspection), further improving the accuracy of borescope inspection and reducing the influence of subjective factors; it can work efficiently for long periods, reduce manpower consumption, lower the probability that fatigued staff misjudge or miss damage, and improve recognition accuracy.
  • The intelligent recognition device provided in the above embodiments is illustrated using the division of functional modules described above as an example only.
  • In practical applications, the above functions can be allocated to different functional modules as needed; that is, the internal structure of the system is divided into different functional modules to complete all or part of the functions described above, for example by merging the preprocessing module and the fully convolutional neural network module into a single fully convolutional neural network module.
  • the smart identification device and the smart identification method provided in the above embodiments belong to the same concept. For the specific implementation process, refer to the method embodiments, and details are not described here.
  • An embodiment of the present invention also provides a deep-learning-based intelligent identification device for aeroengine borescope image damage, which specifically includes: an image acquisition device, a processor, and a memory for storing executable instructions of the processor.
  • the processor is configured to: obtain the network weights of a fully convolutional neural network that meets a preset accuracy requirement on a test set, where the test set consists of multiple annotated aeroengine borescope images in which testers have marked damage areas and the damage categories corresponding to those areas; load the network weights to initialize the fully convolutional neural network; obtain an aeroengine borescope image through the image acquisition device; preprocess the aeroengine borescope image to obtain a preprocessed image that meets the input requirements of the fully convolutional neural network; and process the preprocessed image with the initialized fully convolutional neural network to obtain the damage areas of the aeroengine borescope image and the damage categories corresponding to those areas.
  • the image acquisition device may be a camera.
  • An embodiment of the present invention also provides a storage medium. When the instructions in the storage medium are executed by the processing component of the deep-learning-based aeroengine borescope image damage intelligent recognition device, the device is enabled to perform the deep-learning-based intelligent identification method for aeroengine borescope image damage described above.
  • the processing component includes a processor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A deep-learning-based intelligent identification method for aeroengine borescope image damage, belonging to the field of aeroengine damage identification. The method includes: obtaining the network weights of a fully convolutional neural network that meets a preset accuracy requirement on a test set (101), the test set being multiple annotated aeroengine borescope images; loading the network weights to initialize the fully convolutional neural network (102); obtaining an aeroengine borescope image (103); preprocessing the aeroengine borescope image to obtain a preprocessed image that meets the input requirements of the fully convolutional neural network (104); and processing the preprocessed image with the initialized fully convolutional neural network to obtain the damage areas of the aeroengine borescope image and the damage categories corresponding to those areas (105). The method can intelligently identify the damage areas in borescope images and their corresponding categories, thereby improving borescope inspection efficiency, improving the precision of the inspection process, and reducing the influence of subjective human factors during inspection.

Description

Deep-learning-based intelligent identification method for aeroengine borescope image damage
Technical Field
The present invention belongs to the technical field of aeroengine damage identification, and in particular relates to a deep-learning-based intelligent identification method for aeroengine borescope image damage.
Background
As the core component of an aircraft, the engine has a major impact on flight safety. When the engine is running, its internal temperature and pressure are high, so various kinds of damage, such as cracks and burn-through, often appear in its internal structure. If such damage is not discovered in time, it poses a serious threat to civil aviation safety. Civil airlines therefore use a variety of inspection methods to detect hidden structural hazards in engines in good time.
Engine borescope inspection is one of the most important of these methods. Borescope technicians insert a borescope camera into the engine, capture photos and videos of its interior, search them for cracks, burn-through, and other damage, and finally compile a borescope report that guides further repair and maintenance work. However, borescope inspection is time-consuming and labor-intensive; inspecting a single engine often takes dozens of hours, and its accuracy is limited by the subjective judgment of the inspectors. With economic development and accelerating urbanization in China, domestic and international air routes have grown rapidly in recent years. Because of its limited efficiency and precision and its high labor cost, traditional borescope inspection is increasingly unable to meet the surging demand for engine inspection.
Summary of the Invention
To solve the above problems, the present invention provides a deep-learning-based intelligent identification method for aeroengine borescope image damage, which includes: obtaining the network weights of a fully convolutional neural network that meets a preset accuracy requirement on a test set, the test set being multiple annotated aeroengine borescope images, each being an aeroengine borescope image in which testers have marked damage areas and the damage categories corresponding to those areas; loading the network weights to initialize the fully convolutional neural network; obtaining an aeroengine borescope image; preprocessing the aeroengine borescope image to obtain a preprocessed image that meets the input requirements of the fully convolutional neural network; and processing the preprocessed image with the initialized fully convolutional neural network to obtain the damage areas of the aeroengine borescope image and the damage categories corresponding to those areas.
In the method described above, preferably, processing the preprocessed image with the initialized fully convolutional neural network to obtain the damage areas of the aeroengine borescope image and the damage categories corresponding to those areas specifically includes: extracting features from the preprocessed image with the convolutional structure of the initialized network to obtain an image feature tensor; upsampling the image feature tensor with the deconvolutional structure of the initialized network to obtain, for each pixel of the aeroengine borescope image, the probability of each damage category; determining each pixel's damage category from these probabilities; and obtaining, from the per-pixel damage categories, the damage areas of the aeroengine borescope image and the damage categories corresponding to those areas.
In the method described above, preferably, obtaining the network weights of a fully convolutional neural network that meets a preset accuracy requirement on a test set specifically includes: obtaining multiple annotated aeroengine borescope images; dividing them proportionally into a test set and a training set, and preprocessing the annotated images in the training set; constructing and initializing a fully convolutional neural network; training the initialized network with the preprocessed training set to obtain trained network weights; and verifying with the test set whether the network updated with the trained weights is valid; if the verification succeeds, the trained weights serve as the network weights that meet the preset accuracy requirement on the test set.
In the method described above, preferably, after dividing the multiple annotated aeroengine borescope images proportionally into a test set and a training set, the method further includes: applying data augmentation to each annotated image in the training set to obtain augmented aeroengine borescope images; correspondingly, the training set then includes the annotated aeroengine borescope images and their corresponding augmented images.
In the method described above, preferably, constructing and initializing the fully convolutional neural network specifically includes: building the convolutional structure of the network, which extracts features from a received annotated aeroengine borescope image to obtain an image feature tensor; building the deconvolutional structure of the network, which upsamples a received image feature tensor to obtain, for each pixel of the annotated image, the probability of each damage category; initializing the convolutional structure with pretrained weights, obtained by training the convolutional structure on a public image dataset; and initializing the deconvolutional structure.
In the method described above, preferably, the convolutional structure includes multiple convolutional blocks, each comprising convolutional layers with a first activation function and a pooling layer; the deconvolutional structure includes a deconvolution layer and a convolutional layer with a second activation function; and the first and second activation functions are different activation functions.
In the method described above, preferably, the convolutional structure includes five convolutional blocks, each consisting of two consecutive convolutional layers with relu activation followed by one pooling layer; the deconvolutional structure includes a deconvolution layer and one convolutional layer with sigmoid activation.
In the method described above, preferably, preprocessing the aeroengine borescope image specifically includes: scaling the image to the size required as input by the fully convolutional neural network; and standardizing the scaled image so that the mean of all its pixels becomes 0 and the variance becomes 1.
In the method described above, preferably, training the initialized fully convolutional neural network with the preprocessed training set to obtain trained network weights specifically includes: dividing the training set into batches of N annotated aeroengine borescope images each; repeatedly executing a training step on the initialized network over the batches until the value of the objective function meets a preset condition, and taking the network weights corresponding to that value as the trained weights. The training step specifically includes: predicting, for each pixel of each annotated image in the batch, the probability of each damage category; determining each pixel's predicted damage category from these probabilities; obtaining a value representing the gap between each pixel's predicted damage category and the category marked by the testers; taking the average of these gap values over all pixels of all annotated images in the batch as the objective function; and computing, via backpropagation, the gradient of the objective function with respect to each weight of the network and adjusting the value of each weight with an optimization method according to the computed gradients; where N is a positive integer greater than or equal to 1.
In the method described above, preferably, using the test set to verify whether the fully convolutional neural network updated with the trained weights is valid, and, if the verification succeeds, taking the trained weights as the network weights that meet the preset accuracy requirement on the test set, specifically includes: presetting evaluation metrics: for the prediction result of one annotated aeroengine borescope image there are two evaluation metrics, pixel accuracy PA and mean intersection over union mIOU, defined as
PA = Σ_i n_ii / Σ_i t_i
mIOU = (1/n_cl) Σ_i n_ii / (t_i + Σ_j n_ji − n_ii)
where n_ab is the number of pixels of damage category a that the network predicts as damage category b (a and b take the values i or j in the formulas), t_i is the number of pixels marked by the testers as damage category i and satisfies t_i = Σ_j n_ij, n_cl is the number of damage categories contained in the annotations, Σ_j n_ji is the number of all pixels predicted as the i-th damage category, n_ii / (t_i + Σ_j n_ji − n_ii) is the degree of overlap between the tester-marked and predicted damage areas for damage category i, and (1/n_cl) Σ_i denotes summation over all damage categories, averaged by the 1/n_cl factor; preprocessing each annotated aeroengine borescope image in the test set to obtain preprocessed images that meet the input requirements of the network; predicting the preprocessed images with the network updated with the trained weights to obtain, for each pixel of each annotated image, the probability of each damage category, and then each pixel's predicted damage category; computing the PA and mIOU of each annotated image from the marked and predicted damage categories of its pixels; judging whether the average PA and mIOU over all annotated images in the test set meet preset threshold requirements; and, if they do, taking the current weights of the network as the network weights that meet the preset accuracy requirement on the test set.
The technical solutions provided by the embodiments of the present invention have the following beneficial effects:
A fully convolutional neural network is used to intelligently identify the damage areas in aeroengine borescope images, effectively improving the efficiency and accuracy of existing manual identification methods. During borescope inspection it not only assists inspectors in locating damage and improves inspection efficiency, but also helps inspectors find damage that is difficult for humans to spot or is often overlooked (that is, it can reveal damage areas missed by manual identification), further improving the precision of the inspection process and reducing the influence of subjective human factors. It can work efficiently over long periods, reducing labor costs and the probability that fatigued staff misjudge or miss damage, thereby improving recognition accuracy.
Brief Description of the Drawings
Fig. 1 is a schematic flowchart of a deep-learning-based intelligent identification method for aeroengine borescope image damage provided by an embodiment of the present invention.
Fig. 2 is a schematic flowchart of another deep-learning-based intelligent identification method for aeroengine borescope image damage provided by an embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the drawings.
It should be noted that the damage categories below include both damaged categories, such as crack and burn-through, and the undamaged category, i.e., no damage.
Referring to Fig. 1, an embodiment of the present invention provides a deep-learning-based intelligent identification method for aeroengine borescope image damage, which includes the following steps:
Step 101: obtain the network weights of a fully convolutional neural network that meets a preset accuracy requirement on a test set, where the test set consists of multiple annotated aeroengine borescope images, each being an aeroengine borescope image in which testers have marked damage areas and the damage categories corresponding to those areas.
Step 102: load the network weights to initialize the fully convolutional neural network.
Step 103: obtain an aeroengine borescope image.
Step 104: preprocess the aeroengine borescope image to obtain a preprocessed image that meets the input requirements of the fully convolutional neural network.
Step 105: process the preprocessed image with the initialized fully convolutional neural network to obtain the damage areas of the aeroengine borescope image and the damage categories corresponding to those areas.
By using a fully convolutional neural network to intelligently identify the damage areas in aeroengine borescope images and the damage categories corresponding to those areas, this embodiment effectively improves the efficiency and accuracy of existing manual identification methods. During borescope inspection it not only assists inspectors in locating damage and improves inspection efficiency, but also helps inspectors find damage that is difficult for humans to spot or is often overlooked (that is, it can reveal damage areas missed by manual identification), further improving the precision of the inspection process and reducing the influence of subjective human factors. It can work efficiently over long periods, reducing labor costs and the probability that fatigued staff misjudge or miss damage, thereby improving recognition accuracy.
Referring to Fig. 2, another embodiment of the present invention provides a deep-learning-based intelligent identification method for aeroengine borescope image damage, which includes the following steps:
Step 201: obtain a test set and a training set, and preprocess the images in the training set to obtain images that meet the input requirements of the fully convolutional neural network.
Specifically, first, multiple aeroengine borescope images are obtained and their damage areas and corresponding damage categories are marked; the marked images are then called annotated aeroengine borescope images. For example, testers obtain multiple aeroengine borescope images by on-site photography or by collecting historical images, and in each image mark the damage areas and their corresponding damage categories with geometric shapes, which may be polygons or other shapes. When marking, the testers (also called borescope technicians, or experts in the field of aeroengine borescope image damage identification) mark the vertices of a polygon one by one and connect them in order, head to tail, to obtain a polygonal damage area. A marked damage area may belong to the category crack or burn-through. It should be noted that a single image may contain multiple damage areas, possibly of different categories at the same time. Damage areas generally do not overlap; if they do, the overlapping region is marked as crack damage. During marking, the testers mark only damaged areas; unmarked image regions are set to the no-damage category by default.
Then, the multiple annotated aeroengine borescope images are divided proportionally into a test set and a training set, i.e., one part of the annotated images forms the test set and the other part forms the training set. The ratio may be 80% to 20% or any other ratio; this embodiment does not limit it.
Next, each image in the training set is preprocessed to obtain images that meet the input requirements of the fully convolutional neural network described below. First, each annotated aeroengine borescope image is scaled to the size required as input by the network. Taking as an example the requirement that input dimensions be multiples of 32, the size expressed as height × width might be 576 × 768. The scaled image is then standardized so that the mean of all its pixels becomes 0 and the variance becomes 1, for example by adjusting pixel values with the formula x′ = x / 127.5 − 1, where x is the pixel value at each point before standardization and x′ is the pixel value at each point after standardization.
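As a concrete illustration of this preprocessing step, the sketch below (Python with NumPy) fits an image to a multiple-of-32 size and applies x′ = x / 127.5 − 1. The 576 × 768 target comes from the text; the function name is an assumption, and the center-crop/zero-pad stands in for a real interpolating resize (e.g. OpenCV or Pillow) to keep the sketch self-contained.

```python
import numpy as np

def preprocess(img, target_hw=(576, 768)):
    """Fit the image to a height/width that is a multiple of 32 (five
    2x2 poolings each halve the resolution), then rescale pixel values
    with x' = x / 127.5 - 1 as in the text.

    Assumption: a production pipeline would resize with interpolation;
    crop/pad here is a stdlib+NumPy stand-in for illustration.
    """
    h, w = target_hw
    assert h % 32 == 0 and w % 32 == 0
    out = np.zeros((h, w) + img.shape[2:], dtype=np.float32)
    ch, cw = min(h, img.shape[0]), min(w, img.shape[1])
    out[:ch, :cw] = img[:ch, :cw]      # copy the overlapping region
    return out / 127.5 - 1.0           # map [0, 255] roughly onto [-1, 1]
```

Note that x′ = x / 127.5 − 1 is a fixed rescaling; it yields mean 0 and unit variance only approximately, which is how the text applies it.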
To improve the training results of the fully convolutional neural network, the method further includes: applying data augmentation to each annotated aeroengine borescope image in the training set to obtain augmented aeroengine borescope images; after augmentation, the training set includes both the annotated images and their corresponding augmented images. One augmentation method is flipping the annotated images, e.g., horizontally and/or vertically and/or both (a horizontal-and-vertical flip means flipping both horizontally and vertically). Other embodiments may use rotation instead; this embodiment does not limit the choice.
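The flip-based augmentation can be sketched as follows (NumPy; the function name and the paired image/mask interface are illustrative assumptions). The annotation mask must be flipped identically to the image so the marked damage areas stay aligned with the pixels:

```python
import numpy as np

def augment_flips(image, mask):
    """Return the original (image, mask) pair plus its horizontal,
    vertical, and horizontal+vertical flips; the damage mask is flipped
    identically so annotations stay aligned with pixels."""
    pairs = [(image, mask)]
    for axes in ((1,), (0,), (0, 1)):  # horizontal, vertical, both
        pairs.append((np.flip(image, axes).copy(),
                      np.flip(mask, axes).copy()))
    return pairs
```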
Step 202: construct and initialize the fully convolutional neural network.
Specifically, first, the convolutional structure of the network is built; it extracts features from an annotated aeroengine borescope image to obtain an image feature tensor. The convolutional structure includes multiple convolutional blocks, each comprising convolutional layers with a first activation function and a pooling layer. Taking five blocks with two convolutional layers each as an example, the convolutional structure consists of five conv+relu+conv+relu+pooling blocks, i.e., two consecutive convolutional layers with relu activation followed by one pooling layer. The convolutional layers have kernel size 3 × 3 and stride 1; the pooling layers have kernel size 2 × 2 and stride 2; the first activation function is relu. The five blocks reduce the input image size by a factor of 32. It should be noted that the exact composition of the convolutional structure can be adjusted according to the actual situation; this embodiment limits neither the number of convolutional blocks nor their internal structure.
Then, the deconvolutional structure of the network is built; it upsamples the image feature tensor to obtain, for each pixel of the annotated aeroengine borescope image, the probability of each damage category. The deconvolutional structure includes a deconvolution layer and a convolutional layer with a second activation function, which differs from the first. Taking one convolutional layer as an example, the deconvolutional structure consists of a deconvolution layer and one convolutional layer with sigmoid activation. The deconvolution layer has kernel size 64 × 64 and stride 32; the convolutional layer has kernel size 1 × 1 and stride 1; the second activation function is sigmoid. The deconvolutional structure enlarges its input by a factor of 32. It should be noted that the exact composition of the deconvolutional structure can likewise be adjusted according to the actual situation; this embodiment does not limit its specific structure.
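To make the dimension bookkeeping concrete, the pure-Python trace below follows an input through the structure just described: five blocks each halving H and W (a factor of 32 overall), then a stride-32 deconvolution restoring the original resolution and a 1 × 1 convolution emitting per-class probabilities. The VGG-like channel counts are an assumption; the text does not specify them.

```python
def fcn_shapes(h, w, num_classes):
    """Trace (stage, H, W, channels) through the described FCN.
    3x3 stride-1 convs preserve H and W; each 2x2 stride-2 pool halves
    them; the 64x64 stride-32 deconv multiplies them by 32; the final
    1x1 conv + sigmoid maps features to num_classes probabilities."""
    assert h % 32 == 0 and w % 32 == 0, "input size must be a multiple of 32"
    shapes = []
    cur_h, cur_w = h, w
    for block, out_ch in enumerate([64, 128, 256, 512, 512], start=1):
        cur_h, cur_w = cur_h // 2, cur_w // 2   # pooling halves H and W
        shapes.append((f"block{block}", cur_h, cur_w, out_ch))
    cur_h, cur_w = cur_h * 32, cur_w * 32       # deconv upsamples by 32
    shapes.append(("deconv", cur_h, cur_w, shapes[-1][3]))
    shapes.append(("probs", cur_h, cur_w, num_classes))
    return shapes
```

Running the trace on a 576 × 768 input shows the bottleneck at 18 × 24 and the output restored to the input resolution with one probability map per damage category.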
Next, the fully convolutional neural network is initialized, which includes initializing its convolutional structure and its deconvolutional structure. The convolutional structure may be initialized by setting its weights with random noise; the deconvolution layer of the deconvolutional structure may be initialized with a bilinear-interpolation transform matrix; and the convolutional layer of the deconvolutional structure may be initialized with random noise. The random noise may be normally distributed.
To make the network converge faster, the weights of the convolutional structure are initialized with pretrained weights, obtained by training the convolutional structure on a public image dataset such as ImageNet, a large image dataset for visual object recognition research.
Step 203: train the initialized fully convolutional neural network with the preprocessed training set to obtain trained network weights.
Specifically, first, all annotated images in the preprocessed training set are divided into batches of N images each, where N is a natural number greater than or equal to 1. When N = 1, each batch contains a single image and the preprocessed training set need not be explicitly batched.
Then, the training step is executed. It includes: predicting, for each pixel of each annotated aeroengine borescope image in a batch, the probability of each damage category; determining each pixel's damage category from these probabilities (the category obtained here is the predicted damage category, e.g., the category with the highest probability); and obtaining a value representing the gap between each pixel's predicted and marked damage categories. In practice this value can be computed with cross-entropy, the marked category being the one marked by the testers. The average of these gap values over all pixels of all annotated images in the batch serves as the objective function; when cross-entropy is used to compute the value, the objective function may be called the cross-entropy function. Based on backpropagation, the gradient of each weight change in the network is computed from the objective function, and an optimization method updates (i.e., modifies or adjusts) the value of each weight according to the computed gradients. The optimization method is a machine-learning optimizer, such as stochastic gradient descent, RMSPROP, or ADAM.
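A minimal sketch of the weight update at the end of the training step (plain stochastic gradient descent; RMSPROP and ADAM, also mentioned above, differ only in how the step is scaled; the function name and list-of-floats interface are illustrative assumptions):

```python
def sgd_step(weights, grads, lr=0.01):
    """Move each weight against its gradient of the objective function,
    as computed by backpropagation. lr is the learning rate, a
    hyperparameter of the optimization method."""
    return [w - lr * g for w, g in zip(weights, grads)]
```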
The objective function L can be expressed as
L = (1 / (N·H·W·C)) Σ_n Σ_i Σ_j Σ_c ℓ(Y_nijc, Ŷ_nijc)
where N is the number of images per batch, H is the image height, W is the image width, C is the number of image channels, ℓ(·, ·) is the gap function computing the gap between its two inputs, and Y_nijc and Ŷ_nijc respectively denote the tester-marked and network-predicted damage category of the pixel at position (i, j) in channel c of the n-th image of each batch.
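Under the cross-entropy choice of gap function mentioned above, the averaged objective can be sketched as follows (NumPy; the (N, H, W, C) probability layout and the integer label encoding are assumptions for illustration, not fixed by the text):

```python
import numpy as np

def objective(probs, labels, eps=1e-12):
    """Average over every pixel of every image in the batch of the gap
    between predicted and marked categories, using cross-entropy
    -log p(marked class) as the gap function.
    probs:  (N, H, W, C) predicted per-class probabilities
    labels: (N, H, W)    integer tester-marked class indices"""
    # pick each pixel's predicted probability of its marked class
    p_true = np.take_along_axis(probs, labels[..., None], axis=-1)[..., 0]
    return float(-np.log(p_true + eps).mean())
```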
Next, the above training step is executed on one batch after another until the value of the objective function meets a preset condition; the network weights corresponding to that value are taken as the trained network weights. The preset condition may be that the objective value no longer decreases, or that it falls below a preset target value such as 10^-5 or 10^-6.
Step 204: use the test set to verify whether the fully convolutional neural network updated with the trained network weights is valid; if the verification succeeds, the trained weights serve as the network weights that meet the preset accuracy requirement on the test set.
Specifically, the test set and preset evaluation metrics are used to verify whether the network updated with the trained weights is valid. The evaluation metrics, used to verify the validity of the network, may be the pixel accuracy PA and mean intersection over union mIOU described below; depending on the application, other metrics computed from the per-pixel predictions, such as precision or recall, may also be used. The verification process is described in detail below using PA and mIOU as an example.
First, the evaluation metrics are preset: pixel accuracy and mean intersection over union. Pixel accuracy is PA (Pixel Accuracy) = Σ_i n_ii / Σ_i t_i. Mean intersection over union, mIOU (mean Intersection over Union), is the average over all damage categories of the overlap between the predicted damage areas and the actually marked (i.e., tester-marked) damage areas:
mIOU = (1/n_cl) Σ_i n_ii / (t_i + Σ_j n_ji − n_ii)
where n_ab is the number of pixels of damage category a that the network predicts as damage category b (a and b take the values i or j in the formulas), t_i is the number of pixels marked by the testers as damage category i and satisfies t_i = Σ_j n_ij, n_cl is the number of damage categories contained in the annotations, Σ_j n_ji is the number of all pixels predicted as the i-th damage category, n_ii / (t_i + Σ_j n_ji − n_ii) is the overlap for damage category i, and (1/n_cl) Σ_i denotes summation over all damage categories, averaged by the 1/n_cl factor.
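The two metrics can be computed directly from the confusion counts n_ab defined above; the sketch below (NumPy; the function name and the assumption that categories are integers 0..n_cl−1 are illustrative) matches the formulas term by term:

```python
import numpy as np

def pa_and_miou(pred, marked, n_cl):
    """Compute PA = sum_i n_ii / sum_i t_i and
    mIOU = (1/n_cl) * sum_i n_ii / (t_i + sum_j n_ji - n_ii)
    from per-pixel predicted and tester-marked category maps."""
    n = np.zeros((n_cl, n_cl), dtype=np.int64)   # n[a, b]: marked a, predicted b
    np.add.at(n, (marked.ravel(), pred.ravel()), 1)
    t = n.sum(axis=1)                            # t_i = sum_j n_ij
    pa = n.trace() / t.sum()
    denom = t + n.sum(axis=0) - np.diag(n)       # t_i + sum_j n_ji - n_ii
    iou = np.diag(n) / np.maximum(denom, 1)      # guard empty categories
    return float(pa), float(iou.mean())
```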
Then, each annotated aeroengine borescope image in the test set is preprocessed to obtain preprocessed images that meet the input requirements of the fully convolutional neural network, and the network updated with the trained weights predicts the preprocessed images to obtain, for each pixel of each annotated image, the probability of each damage category, and thence each pixel's predicted damage category. For details of this step, see the descriptions of steps 201 to 203 above; they are not repeated here.
Next, the PA and mIOU of each annotated aeroengine borescope image are computed from the marked and predicted damage categories of its pixels, and it is judged whether the average PA and mIOU over all images in the test set meet their respective preset thresholds, which are usually set by the testers. If they do, the current weights of the network are taken as the network weights that meet the preset accuracy requirement on the test set, i.e., the network is considered usable.
If they do not, the hyperparameters are reselected and the network is retrained: the training step is executed on the batches in turn until the objective value meets the preset condition, the weights corresponding to that value are taken as the trained weights, and step 204 is executed again. The hyperparameters include the batch size, the choice of optimization method, and the parameters of that method; for example, N may be changed from 2 to 3, the optimizer switched from stochastic gradient descent to the ADAM method, and the optimizer's parameters modified accordingly. If adjusting the hyperparameters still fails to yield network weights that meet the preset accuracy requirement on the test set, more training data are collected on top of the original training set, i.e., more aeroengine borescope images are collected and training is repeated (steps 203 to 204).
Step 205: load the network weights to initialize the fully convolutional neural network.
Step 206: obtain an aeroengine borescope image and preprocess it to obtain a preprocessed image that meets the input requirements of the fully convolutional neural network. For details of this step, see the descriptions of steps 201 to 203 above; they are not repeated here.
Step 207: process the preprocessed image with the initialized fully convolutional neural network to obtain the damage areas of the aeroengine borescope image and the damage categories corresponding to those areas.
Specifically, first, the convolutional structure of the initialized network extracts features from the preprocessed image to obtain an image feature tensor; the deconvolutional structure of the initialized network upsamples the tensor to obtain, for each pixel of the borescope image, the probability of each damage category; each pixel's damage category is then determined from these probabilities. For details of this step, see the descriptions of steps 201 to 203 above; they are not repeated here.
Second, the damage areas of the borescope image and the damage categories corresponding to those areas are obtained from the per-pixel damage categories. For example, the per-pixel categories give the distribution of damage categories over an annotated borescope image; extracting all pixels of the same damage category yields the area corresponding to that category.
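The per-category area extraction described here can be sketched as follows (NumPy; returning pixel coordinates grouped by category, with category 0 assumed to be the no-damage label):

```python
import numpy as np

def damage_regions(class_map, no_damage=0):
    """Group pixels of the per-pixel category map by damage category;
    each returned entry is the list of (row, col) coordinates forming
    that category's damage area. The no-damage label is skipped."""
    regions = {}
    for cls in np.unique(class_map):
        if cls == no_damage:
            continue
        ys, xs = np.nonzero(class_map == cls)
        regions[int(cls)] = list(zip(ys.tolist(), xs.tolist()))
    return regions
```

In practice one might additionally split each category's pixels into connected components so that two separate cracks are reported as two damage areas; that refinement is not spelled out in the text.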
By using a fully convolutional neural network to intelligently identify the damage areas in aeroengine borescope images and the damage categories corresponding to those areas, this embodiment effectively improves the efficiency and accuracy of existing manual identification methods. During borescope inspection it not only assists inspectors in locating damage and improves inspection efficiency, but also helps inspectors find damage that is difficult for humans to spot or is often overlooked (that is, it can reveal damage areas missed by manual identification), further improving the precision of the inspection process and reducing the influence of subjective human factors. It can work efficiently over long periods, reducing labor costs and the probability that fatigued staff misjudge or miss damage, thereby improving recognition accuracy.
An embodiment of the present invention also provides a deep-learning-based intelligent identification device for aeroengine borescope image damage, configured to execute the above intelligent identification method, which specifically includes:
a first acquisition module, configured to obtain the network weights of a fully convolutional neural network that meets a preset accuracy requirement on a test set, where the test set consists of multiple annotated aeroengine borescope images in which testers have marked damage areas and the damage categories corresponding to those areas;
a second acquisition module, configured to obtain an aeroengine borescope image;
a preprocessing module, configured to preprocess the aeroengine borescope image to obtain a preprocessed image that meets the input requirements of the fully convolutional neural network; and
a fully convolutional neural network module, configured to load the network weights to initialize the fully convolutional neural network, and to process the preprocessed image with the initialized network to obtain the damage areas of the aeroengine borescope image and the damage categories corresponding to those areas.
Preferably, in processing the preprocessed image with the initialized network to obtain the damage areas of the aeroengine borescope image and their corresponding damage categories, the fully convolutional neural network module is specifically configured to: extract features from the preprocessed image with the convolutional structure of the initialized network to obtain an image feature tensor; upsample the tensor with the deconvolutional structure of the initialized network to obtain, for each pixel of the borescope image, the probability of each damage category; determine each pixel's damage category from these probabilities; and obtain, from the per-pixel categories, the damage areas of the borescope image and the damage categories corresponding to those areas.
For implementations of the first acquisition module, the second acquisition module, the preprocessing module, and the fully convolutional neural network module, refer to the descriptions of steps 101 to 105 and steps 201 to 207 in the above embodiments; they are not repeated here.
By using a fully convolutional neural network to intelligently identify the damage areas in aeroengine borescope images, the embodiments of the present invention effectively improve the efficiency and accuracy of existing manual identification methods; during borescope inspection the device not only assists inspectors in locating damage and improves inspection efficiency, but also helps them find damage that is difficult for humans to spot or is often overlooked, further improving the precision of the inspection process and reducing the influence of subjective human factors; and it can work efficiently over long periods, reducing labor costs and the probability that fatigued staff misjudge or miss damage, thereby improving recognition accuracy. It should be noted that the division into the above functional modules is only an example used to describe the intelligent identification device of the above embodiments; in practical applications, the functions may be allocated to different functional modules as needed, i.e., the internal structure of the system may be divided into different functional modules to complete all or part of the functions described above, for example by merging the preprocessing module and the fully convolutional neural network module into a single fully convolutional neural network module. In addition, the intelligent identification device and the intelligent identification method provided in the above embodiments belong to the same concept; for the specific implementation process, see the method embodiments; it is not repeated here.
An embodiment of the present invention also provides a deep-learning-based intelligent identification device for aeroengine borescope image damage, which specifically includes: an image acquisition device, a processor, and a memory storing instructions executable by the processor.
The processor is configured to: obtain the network weights of a fully convolutional neural network that meets a preset accuracy requirement on a test set, where the test set consists of multiple annotated aeroengine borescope images in which testers have marked damage areas and the damage categories corresponding to those areas; load the network weights to initialize the fully convolutional neural network; obtain an aeroengine borescope image through the image acquisition device; preprocess the aeroengine borescope image to obtain a preprocessed image that meets the input requirements of the network; and process the preprocessed image with the initialized network to obtain the damage areas of the borescope image and the damage categories corresponding to those areas. The image acquisition device may be a camera.
For specific descriptions of the image acquisition device and the processor, see the relevant content of steps 101 to 105 and 201 to 207 in the above embodiments; it is not repeated here.
An embodiment of the present invention also provides a storage medium. When the instructions in the storage medium are executed by the processing component of the deep-learning-based aeroengine borescope image damage intelligent identification device, the device is enabled to perform the deep-learning-based intelligent identification method for aeroengine borescope image damage described above. The processing component includes a processor.
As is clear from common technical knowledge, the present invention may be implemented by other embodiments that do not depart from its spirit or essential features. The embodiments disclosed above are therefore, in all respects, merely illustrative and not exhaustive. All changes within the scope of the present invention or within a scope equivalent to the present invention are encompassed by the present invention.

Claims (10)

  1. A deep-learning-based intelligent identification method for aeroengine borescope image damage, characterized in that the method comprises:
    obtaining the network weights of a fully convolutional neural network that meets a preset accuracy requirement on a test set, the test set being multiple annotated aeroengine borescope images, each annotated aeroengine borescope image being an aeroengine borescope image in which testers have marked damage areas and the damage categories corresponding to the damage areas;
    loading the network weights to initialize the fully convolutional neural network;
    obtaining an aeroengine borescope image;
    preprocessing the aeroengine borescope image to obtain a preprocessed image that meets the input requirements of the fully convolutional neural network; and
    processing the preprocessed image with the initialized fully convolutional neural network to obtain the damage areas of the aeroengine borescope image and the damage categories corresponding to the damage areas.
  2. The method according to claim 1, characterized in that processing the preprocessed image with the initialized fully convolutional neural network to obtain the damage areas of the aeroengine borescope image and the damage categories corresponding to the damage areas specifically comprises:
    extracting features from the preprocessed image with the convolutional structure of the initialized fully convolutional neural network to obtain an image feature tensor;
    upsampling the image feature tensor with the deconvolutional structure of the initialized fully convolutional neural network to obtain, for each pixel of the aeroengine borescope image, the probability of each damage category;
    determining the damage category of each pixel from the probability of each pixel being each damage category; and
    obtaining, from the damage category of each pixel, the damage areas of the aeroengine borescope image and the damage categories corresponding to the damage areas.
  3. The method according to claim 1, characterized in that obtaining the network weights of a fully convolutional neural network that meets a preset accuracy requirement on a test set specifically comprises:
    obtaining multiple annotated aeroengine borescope images;
    dividing the multiple annotated aeroengine borescope images proportionally into a test set and a training set, and preprocessing the annotated aeroengine borescope images in the training set;
    constructing and initializing a fully convolutional neural network;
    training the initialized fully convolutional neural network with the preprocessed training set to obtain trained network weights; and
    verifying with the test set whether the fully convolutional neural network updated with the trained network weights is valid, and, if the verification succeeds, taking the trained network weights as the network weights of the fully convolutional neural network that meets the preset accuracy requirement on the test set.
  4. The method according to claim 3, characterized in that, after dividing the multiple annotated aeroengine borescope images proportionally into a test set and a training set, the method further comprises:
    applying data augmentation to each annotated aeroengine borescope image in the training set to obtain augmented aeroengine borescope images;
    correspondingly, the images in the training set comprising the annotated aeroengine borescope images and the augmented aeroengine borescope images corresponding to them.
  5. The method according to claim 3, characterized in that constructing and initializing the fully convolutional neural network specifically comprises:
    building the convolutional structure of the fully convolutional neural network, the convolutional structure being used to extract features from a received annotated aeroengine borescope image to obtain an image feature tensor;
    building the deconvolutional structure of the fully convolutional neural network, the deconvolutional structure being used to upsample a received image feature tensor to obtain, for each pixel of the annotated aeroengine borescope image, the probability of each damage category;
    initializing the convolutional structure with pretrained weights, the pretrained weights being obtained by training the convolutional structure on a public image dataset; and
    initializing the deconvolutional structure.
  6. The method according to claim 5, characterized in that
    the convolutional structure comprises multiple convolutional blocks, each convolutional block comprising convolutional layers with a first activation function and a pooling layer;
    the deconvolutional structure comprises a deconvolution layer and a convolutional layer with a second activation function; and
    the first activation function and the second activation function are different activation functions.
  7. The method according to claim 6, characterized in that the convolutional structure comprises five convolutional blocks, each being two consecutive convolutional layers with relu activation followed by one pooling layer; and
    the deconvolutional structure comprises a deconvolution layer and one convolutional layer with sigmoid activation.
  8. The method according to claim 1, characterized in that preprocessing the aeroengine borescope image specifically comprises:
    scaling the aeroengine borescope image to the size required as input by the fully convolutional neural network; and
    standardizing the scaled image so that the mean of all pixels of the scaled image becomes 0 and the variance becomes 1.
  9. The method according to claim 3, characterized in that training the initialized fully convolutional neural network with the preprocessed training set to obtain trained network weights specifically comprises:
    dividing the training set into multiple batches, each batch containing N annotated aeroengine borescope images;
    repeatedly executing a training step on the initialized fully convolutional neural network over the batches until the value of an objective function meets a preset condition, and taking the network weights corresponding to the objective-function value that meets the preset condition as the trained network weights;
    the training step specifically comprising:
    predicting, for each pixel of each annotated aeroengine borescope image in a batch, the probability of each damage category;
    determining each pixel's predicted damage category from the probability of each pixel being each damage category;
    obtaining a value representing the gap between each pixel's predicted damage category and the damage category marked by the testers;
    taking, as the objective function, the average of the gap values between the predicted and tester-marked damage categories over all pixels of all annotated aeroengine borescope images in the batch; and
    computing, based on backpropagation, the gradient of each weight change in the fully convolutional neural network from the objective function, and adjusting the value of each weight of the fully convolutional neural network with an optimization method according to the computed gradient values;
    where N is a positive integer greater than or equal to 1.
  10. The method according to claim 3, characterized in that verifying with the test set whether the fully convolutional neural network updated with the trained network weights is valid, and, if the verification succeeds, taking the trained network weights as the network weights of the fully convolutional neural network that meets the preset accuracy requirement on the test set, specifically comprises:
    presetting evaluation metrics: for the prediction result of one annotated aeroengine borescope image, there are two evaluation metrics, pixel accuracy PA and mean intersection over union mIOU, where
    PA = Σ_i n_ii / Σ_i t_i
    mIOU = (1/n_cl) Σ_i n_ii / (t_i + Σ_j n_ji − n_ii)
    where n_ab is the number of pixels of damage category a that the fully convolutional neural network predicts as damage category b (a and b take the values i or j in the formulas), t_i is the number of pixels marked by the testers as damage category i and satisfies t_i = Σ_j n_ij, n_cl is the number of damage categories contained in the annotations, Σ_j n_ji is the number of all pixels predicted as the i-th damage category, n_ii / (t_i + Σ_j n_ji − n_ii) is the degree of overlap between the tester-marked and predicted damage areas for damage category i, and (1/n_cl) Σ_i denotes summation over all damage categories, averaged by the 1/n_cl factor;
    preprocessing each annotated aeroengine borescope image in the test set to obtain preprocessed images that meet the input requirements of the fully convolutional neural network;
    predicting the preprocessed images with the fully convolutional neural network updated with the trained network weights to obtain, for each pixel of each annotated aeroengine borescope image, the probability of each damage category, and then obtaining each pixel's predicted damage category;
    computing the PA and mIOU of each annotated aeroengine borescope image from the marked and predicted damage categories of each of its pixels, and judging whether the average PA and mIOU over all annotated aeroengine borescope images in the test set meet preset threshold requirements; and
    if they do, taking the current weights of the fully convolutional neural network as the network weights of the fully convolutional neural network that meets the preset accuracy requirement on the test set.
PCT/CN2019/095290 2018-12-13 2019-07-09 基于深度学习的航空发动机孔探图像损伤智能识别方法 WO2020119103A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201811526577.0 2018-12-13
CN201811526577 2018-12-13
CN201910048264.7A CN109800708A (zh) 2018-12-13 2019-01-18 基于深度学习的航空发动机孔探图像损伤智能识别方法
CN201910048264.7 2019-01-18

Publications (1)

Publication Number Publication Date
WO2020119103A1 true WO2020119103A1 (zh) 2020-06-18

Family

ID=66559637

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/095290 WO2020119103A1 (zh) 2018-12-13 2019-07-09 基于深度学习的航空发动机孔探图像损伤智能识别方法

Country Status (2)

Country Link
CN (1) CN109800708A (zh)
WO (1) WO2020119103A1 (zh)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111965183A (zh) * 2020-08-17 2020-11-20 沈阳飞机工业(集团)有限公司 基于深度学习的钛合金显微组织检测方法
CN113034599A (zh) * 2021-04-21 2021-06-25 南京航空航天大学 一种航空发动机的孔探检测装置和方法
CN113744230A (zh) * 2021-08-27 2021-12-03 中国民航大学 一种基于无人机视觉的飞机蒙皮损伤智能检测方法
CN114120317A (zh) * 2021-11-29 2022-03-01 哈尔滨工业大学 基于深度学习和图像处理的光学元件表面损伤识别方法
CN114240948A (zh) * 2021-11-10 2022-03-25 西安交通大学 一种结构表面损伤图像的智能分割方法及系统
CN115114860A (zh) * 2022-07-21 2022-09-27 郑州大学 一种面向混凝土管道损伤识别的数据建模扩增方法
CN116579135A (zh) * 2023-04-14 2023-08-11 中国航发沈阳发动机研究所 一种航空发动机红外隐身性能快速确定方法
CN116579135B (zh) * 2023-04-14 2024-06-07 中国航发沈阳发动机研究所 一种航空发动机红外隐身性能快速确定方法

Families Citing this family (7)

Publication number Priority date Publication date Assignee Title
CN109800708A (zh) * 2018-12-13 2019-05-24 程琳 基于深度学习的航空发动机孔探图像损伤智能识别方法
CN111598879A (zh) * 2020-05-18 2020-08-28 湖南大学 一种结构疲劳累积损伤评估的方法、系统及设备
CN112581430A (zh) * 2020-12-03 2021-03-30 厦门大学 一种基于深度学习的航空发动机无损检测方法、装置、设备及存储介质
CN112643618A (zh) * 2020-12-21 2021-04-13 东风汽车集团有限公司 一种柔性发动机仓储工装的智能调节装置及方法
CN112561892A (zh) * 2020-12-22 2021-03-26 东华大学 一种印花与提花面料的疵点检测方法
CN112529899A (zh) * 2020-12-28 2021-03-19 内蒙动力机械研究所 基于机器学习与计算机视觉固体火箭发动机无损检测方法
CN113687282A (zh) * 2021-08-20 2021-11-23 吉林建筑大学 一种磁性纳米材料的磁性检测系统及方法

Citations (3)

Publication number Priority date Publication date Assignee Title
CN107909564A (zh) * 2017-10-23 2018-04-13 昆明理工大学 一种基于深度学习的全卷积网络图像裂纹检测方法
CN108074231A (zh) * 2017-12-18 2018-05-25 浙江工业大学 一种基于卷积神经网络的磁片表面缺陷检测方法
CN109800708A (zh) * 2018-12-13 2019-05-24 程琳 基于深度学习的航空发动机孔探图像损伤智能识别方法

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
EP3151164A3 (en) * 2016-12-26 2017-04-12 Argosai Teknoloji Anonim Sirketi A method for foreign object debris detection
CN108492281B (zh) * 2018-03-06 2021-09-21 陕西师范大学 一种基于生成式对抗网络的桥梁裂缝图像障碍物检测与去除的方法
CN108416394B (zh) * 2018-03-22 2019-09-03 河南工业大学 基于卷积神经网络的多目标检测模型构建方法
CN108562589B (zh) * 2018-03-30 2020-12-01 慧泉智能科技(苏州)有限公司 一种对磁路材料表面缺陷进行检测的方法
CN108345911B (zh) * 2018-04-16 2021-06-29 东北大学 基于卷积神经网络多级特征的钢板表面缺陷检测方法
CN108717554A (zh) * 2018-05-22 2018-10-30 复旦大学附属肿瘤医院 一种甲状腺肿瘤病理组织切片图像分类方法及其装置

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN107909564A (zh) * 2017-10-23 2018-04-13 昆明理工大学 一种基于深度学习的全卷积网络图像裂纹检测方法
CN108074231A (zh) * 2017-12-18 2018-05-25 浙江工业大学 一种基于卷积神经网络的磁片表面缺陷检测方法
CN109800708A (zh) * 2018-12-13 2019-05-24 程琳 基于深度学习的航空发动机孔探图像损伤智能识别方法

Cited By (14)

Publication number Priority date Publication date Assignee Title
CN111965183A (zh) * 2020-08-17 2020-11-20 沈阳飞机工业(集团)有限公司 基于深度学习的钛合金显微组织检测方法
CN111965183B (zh) * 2020-08-17 2023-04-18 沈阳飞机工业(集团)有限公司 基于深度学习的钛合金显微组织检测方法
CN113034599A (zh) * 2021-04-21 2021-06-25 南京航空航天大学 一种航空发动机的孔探检测装置和方法
CN113034599B (zh) * 2021-04-21 2024-04-12 南京航空航天大学 一种航空发动机的孔探检测装置和方法
CN113744230B (zh) * 2021-08-27 2023-09-05 中国民航大学 一种基于无人机视觉的飞机蒙皮损伤智能检测方法
CN113744230A (zh) * 2021-08-27 2021-12-03 中国民航大学 一种基于无人机视觉的飞机蒙皮损伤智能检测方法
CN114240948A (zh) * 2021-11-10 2022-03-25 西安交通大学 一种结构表面损伤图像的智能分割方法及系统
CN114240948B (zh) * 2021-11-10 2024-03-05 西安交通大学 一种结构表面损伤图像的智能分割方法及系统
CN114120317A (zh) * 2021-11-29 2022-03-01 哈尔滨工业大学 基于深度学习和图像处理的光学元件表面损伤识别方法
CN114120317B (zh) * 2021-11-29 2024-04-16 哈尔滨工业大学 基于深度学习和图像处理的光学元件表面损伤识别方法
CN115114860B (zh) * 2022-07-21 2024-03-01 郑州大学 一种面向混凝土管道损伤识别的数据建模扩增方法
CN115114860A (zh) * 2022-07-21 2022-09-27 郑州大学 一种面向混凝土管道损伤识别的数据建模扩增方法
CN116579135A (zh) * 2023-04-14 2023-08-11 中国航发沈阳发动机研究所 一种航空发动机红外隐身性能快速确定方法
CN116579135B (zh) * 2023-04-14 2024-06-07 中国航发沈阳发动机研究所 一种航空发动机红外隐身性能快速确定方法

Also Published As

Publication number Publication date
CN109800708A (zh) 2019-05-24

Similar Documents

Publication Publication Date Title
WO2020119103A1 (zh) 基于深度学习的航空发动机孔探图像损伤智能识别方法
CN108960135B (zh) 基于高分辨遥感图像的密集舰船目标精确检测方法
CN109118479B (zh) 基于胶囊网络的绝缘子缺陷识别定位装置及方法
CN109583489A (zh) 缺陷分类识别方法、装置、计算机设备和存储介质
CN113409314B (zh) 高空钢结构腐蚀的无人机视觉检测与评价方法及系统
CN107742099A (zh) 一种基于全卷积网络的人群密度估计、人数统计的方法
CN114092832B (zh) 一种基于并联混合卷积网络的高分辨率遥感影像分类方法
CN108537215A (zh) 一种基于图像目标检测的火焰检测方法
Xu et al. Pavement crack detection algorithm based on generative adversarial network and convolutional neural network under small samples
CN109614488B (zh) 基于文本分类和图像识别的配网带电作业条件判别方法
CN111860106B (zh) 一种无监督的桥梁裂缝识别方法
CN110751644B (zh) 道路表面裂纹检测方法
CN108416295A (zh) 一种基于局部嵌入深度特征的行人再识别方法
CN112258490A (zh) 基于光学和红外图像融合的低发射率涂层智能探损方法
CN109087305A (zh) 一种基于深度卷积神经网络的裂缝图像分割方法
CN112163450A (zh) 基于s3d学习算法的高频地波雷达船只目标检测方法
CN117011295B (zh) 基于深度可分离卷积神经网络的uhpc预制件质量检测方法
CN110909657A (zh) 一种隧道表观病害图像识别的方法
CN114359702A (zh) 一种基于Transformer的宅基地遥感图像违建识别方法及系统
CN109993742A (zh) 基于对角倒数算子的桥梁裂缝快速识别方法
CN115115608A (zh) 基于半监督语义分割的航空发动机损伤检测方法
CN115147439B (zh) 基于深度学习与注意力机制的混凝土裂缝分割方法及系统
CN114758222A (zh) 一种基于PointNet++神经网络混凝土管道损伤识别与体积量化方法
CN111598854A (zh) 基于丰富鲁棒卷积特征模型的复杂纹理小缺陷的分割方法
CN114998251A (zh) 一种基于联邦学习的空中多视觉平台地面异常检测方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19897295

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19/10/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19897295

Country of ref document: EP

Kind code of ref document: A1