CN113591992B - Intelligent hole-detection auxiliary system and method for a gas turbine engine - Google Patents

Intelligent hole-detection auxiliary system and method for a gas turbine engine

Info

Publication number
CN113591992B
CN113591992B (application CN202110882652.2A)
Authority
CN
China
Prior art keywords
detection
data
engine
training
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110882652.2A
Other languages
Chinese (zh)
Other versions
CN113591992A (en)
Inventor
敖良忠
吴梓祺
余肖飞
马瑞阳
易相兵
朱俊名
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Civil Aviation Flight University of China
Original Assignee
Civil Aviation Flight University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Civil Aviation Flight University of China filed Critical Civil Aviation Flight University of China
Priority to CN202110882652.2A priority Critical patent/CN113591992B/en
Publication of CN113591992A publication Critical patent/CN113591992A/en
Application granted granted Critical
Publication of CN113591992B publication Critical patent/CN113591992B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention discloses an intelligent hole-detection (borescope inspection) auxiliary system and method for a gas turbine engine, relating to the technical field of engine defect identification. The system comprises a data set construction module, a defect detection model building and training module, a local area network building module, a server-side operation and display module, and a front-end UI (user interface). First, a data set for training a neural network is constructed and an optimal engine defect detection model is trained. Frame information obtained by encoding the hole-detector video signal at the front end is sent to the server over the local area network; the hole-detection video frames decoded from this frame information are fed into the optimal engine defect detection model, and the marked video frames are sent back to the front end and displayed on the front end's screen through the UI interface. The invention reduces missed detections caused by human error, improves detection stability and precision, reduces labor cost, realizes real-time data transmission, and meets the requirement of remote multi-person cooperative work.

Description

Intelligent hole-detection auxiliary system and method for a gas turbine engine
Technical Field
The invention relates to the technical field of engine defect identification, and in particular to an intelligent hole-detection auxiliary system and method for a gas turbine engine.
Background
An aircraft engine is a highly complex and precise thermal machine that provides the power required for flight and directly affects the performance, reliability and economy of the aircraft. The main types of aircraft engine are piston engines, gas turbine engines and ramjet engines; gas turbine engines, which comprise a compressor, a combustion chamber and a gas turbine, are the most widely used and include turbojet, turbofan, turboprop and turboshaft engines.
Maintaining the aircraft and its on-board technical equipment so that it remains safe is a precondition and necessary condition for its use. Maintenance of the aircraft engine is essential to keeping the engine in a continuously airworthy state, and hole (borescope) inspection is an important means of judging the internal damage condition of the engine and determining whether it meets airworthiness standards. At present, whether engine defects are detected depends mainly on the experience of the hole-detection operator, so missed detections and false detections caused by human error can occur, posing a serious hidden danger to flight safety. Airline regulations therefore require cross review of the hole-detection video, which further increases labor costs. Moreover, because the internal structure of an aircraft engine is complex and the damage types are varied, traditional hole-detection image processing techniques, which identify defect characteristics based on expert experience, struggle to meet the requirements of identifying the damage type and damage location in actual engine hole detection.
Therefore, how to improve the stability of hole detection, reduce labor cost, and accurately identify the damage type and damage location is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides an intelligent hole-detection auxiliary system and method for a gas turbine engine, which reduce missed detections caused by human factors, improve detection stability and precision, reduce labor cost, realize real-time data transmission, and meet the requirement of remote multi-person cooperative work.
In order to achieve the above purpose, the invention provides the following technical scheme:
a gas turbine engine borescope smart detection assistance system, comprising: the system comprises a data set building module, a defect detection model building and training module, a local area network building module, a server side operation display module and a front end UI (user interface);
the data set construction module is used for sequentially carrying out data sample collection, data sample processing and data sample labeling on the pore detection data to obtain a data set for training a neural network;
the defect detection model building and training module is used for building an experimental environment platform of an engine defect detection model and training the engine defect detection model based on the data set used for training the neural network to obtain an optimal engine defect detection model;
the local area network building module is used for sending frame information of the video signal of the hole detector after being coded in the front end to the server end;
the server operation display module is used for decoding the received encoded frame information to obtain a hole detection video frame, identifying and marking defects in the hole detection video frame through the engine defect detection optimal model, and sending the hole detection video frame with the marks to the front end;
and the front-end UI is used for receiving the hole exploration video frame with the mark and displaying the hole exploration video frame on a screen carried at the front end through the UI.
This technical scheme discloses the concrete structure of the intelligent hole-detection auxiliary system: intelligent detection of engine defects is realized through the trained optimal engine defect detection model, avoiding missed detections caused by human error and improving detection stability and precision.
Preferably, the data set construction module comprises a data sample collection unit, a data sample processing unit and a data sample labeling unit;
the data sample collection unit collects hole detection data of an engine and transmits the hole detection data to the data sample processing unit;
the data sample processing unit performs data noise reduction, de-duplication and data enhancement on the hole-detection data and then transmits it to the data sample labeling unit;
and the data sample marking unit identifies and marks the defects in the hole detection data to obtain a data set for training a neural network.
Preferably, the hole detection data is hole detection image data and/or hole detection video data.
Preferably, the data sample processing unit extracts key frames from the hole-detection video data by the background difference method or the inter-frame difference method, and then performs data noise reduction and de-duplication. Extracting key frames from the hole-detection video data avoids having to store every frame despite the large amount of redundant background information in the collected video.
Preferably, the defect detection model building and training module comprises a model pre-training module and a model transfer-learning module;
the model pre-training module initially trains the model on a large-scale data set to obtain a pre-trained model and its pre-training weights;
and the model transfer-learning module uses the pre-training weights to set the parameters for the model's second-stage training, obtaining the optimal engine defect detection model after transfer-learning training.
Preferably, the local area network building module is programmed using the ZMQ library of Python; it is deployed at the receiving end according to the corresponding protocol and decoding rules, receives the frame information produced by front-end encoding, and forwards it to the server side.
An intelligent hole-detection auxiliary method for a gas turbine engine comprises the following steps:
s1, sequentially carrying out data sample collection, data sample processing and data sample labeling on the pore detection data to obtain a data set for training the neural network;
s2, building an experimental environment platform of an engine defect detection model, and training the engine defect detection model based on the data set obtained in S1 to obtain an optimal engine defect detection model running at a server side;
s3, carrying out front-end coding on the video signal of the engine hole detector to obtain frame information, and sending the frame information to a server through a local area network;
s4, decoding the received frame information and then restoring to obtain a hole detection video frame by using an engine defect detection optimal model running at the server side, and identifying and marking defects in the hole detection video frame;
and S5, the server side sends the hole detection video frame with the mark to the front end, displays the hole detection video frame on a screen carried by the front end through a UI interface, intercepts the hole detection video frame with the mark on the UI interface, stores the hole detection video frame with the mark and sends the hole detection video frame to a review.
Preferably, in S2, building the experimental environment platform of the engine defect detection model includes: installing the Anaconda Python distribution, installing the deep learning framework TensorFlow, installing the CUDA acceleration library, and installing an editor.
Preferably, in S3, encoding the video signal of the engine hole detector at the front end to obtain frame information comprises: encoding the video signal, converting it into stream data, and then Base64-encoding the stream data to obtain frame information that can be transmitted over the local area network.
Through the above technical scheme, compared with the prior art, the invention provides an intelligent hole-detection auxiliary system and method for a gas turbine engine with the following beneficial technical effects:
(1) image defects are automatically identified by the computer to assist manual hole detection, reducing missed detections caused by human factors and offering great advantages in detection stability, detection precision, labor cost, and so on;
(2) the intelligent hole-detection auxiliary system not only realizes real-time data transmission but also meets the requirement of remote multi-person cooperative work.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in their description will be briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
FIG. 1 is a flow chart of the detection process of the intelligent hole-detection auxiliary system of a gas turbine engine;
FIG. 2 is a flow chart of hole-detection data processing;
FIG. 3 is a diagram of the YOLOv4 network architecture in one embodiment;
FIG. 4 is a flow chart of the optimization of the Yolov4 network;
FIG. 5 is a diagram of the optimized YOLOv4-B network architecture;
FIG. 6 is a diagram of the LAN transmission architecture and principle;
FIG. 7 is a block diagram of the intelligent hole-detection auxiliary system of a gas turbine engine.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
Example 1
FIG. 1 is a flow chart illustrating the detection process of the intelligent hole-detection auxiliary system of a gas turbine engine. First, a data set for training the neural network is constructed. In this embodiment, the high-pressure turbine of an aeroengine is taken as the research object; its defects are roughly divided into four categories (ablation, cracks, coating loss and material loss), and the model's training data are collected from the hole-detection data of an overhaul factory, airline hole-detection maintenance data, airline hole-detection withdrawal data, self-built hole-detector data, and so on.
Most of the collected engine hole-detection data are video files. FIG. 2 shows the processing flow of the hole-detection data. Because the original hole-detection video data suffer from strong light reflection, large background differences between frames, and similar problems, the background difference method and the inter-frame difference method are used to extract key frames from the video data, followed by data noise reduction and de-duplication, data enhancement, and defect labeling. In this embodiment, the defects in the hole-detection data are labeled with Labelimg, an image annotation tool written in Python, and the labeling results are saved as XML files in the PASCAL VOC format.
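The PASCAL VOC XML files that Labelimg writes can be consumed with the standard library alone. The sketch below is a minimal, hedged example: the file name, the "crack" class, and the box coordinates are illustrative assumptions, not values from the patent's data set.

```python
import xml.etree.ElementTree as ET

# A tiny hand-written annotation in the PASCAL VOC layout Labelimg produces;
# the names and numbers here are invented for illustration.
SAMPLE_VOC_XML = """<annotation>
  <filename>borescope_0001.jpg</filename>
  <object>
    <name>crack</name>
    <bndbox><xmin>48</xmin><ymin>120</ymin><xmax>96</xmax><ymax>180</ymax></bndbox>
  </object>
</annotation>"""

def parse_voc(xml_text):
    """Return a list of (label, (xmin, ymin, xmax, ymax)) tuples."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        coords = tuple(int(bb.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, coords))
    return boxes
```

In a training pipeline, `parse_voc` would be called on each saved XML file to build the ground-truth boxes for the four defect categories.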
The background difference method performs a difference operation between a background model and the current frame to extract key frames; the background difference is given by Eqs. (1)-(2):

D_k(x, y) = |f_k(x, y) - f_b(x, y)|  (1)

R_k(x, y) = 1 if D_k(x, y) > T_1, and R_k(x, y) = 0 otherwise  (2)

where D_k(x, y) is the background difference image, f_k(x, y) is the current frame to be detected, f_b(x, y) is the current background frame, R_k(x, y) is the binarized image, and T_1 is the segmentation threshold.
The inter-frame difference method selects frames whose average inter-frame difference strength exceeds a threshold as key frames; the inter-frame difference is given by Eqs. (3)-(4):

G_k(x, y) = |f_k(x, y) - f_{k-1}(x, y)|  (3)

P_k(x, y) = 1 if G_k(x, y) > T_2, and P_k(x, y) = 0 otherwise  (4)

where G_k(x, y) is the inter-frame difference image, f_k(x, y) is the current frame to be detected, f_{k-1}(x, y) is the adjacent previous frame, P_k(x, y) is the binarized image, and T_2 is the segmentation threshold.
Traditional target detection selects detection regions with a sliding-window method and can only extract local information. Yolo is a representative one-stage target detection algorithm: it treats object detection as a regression problem and performs region selection over the whole image, so the classifier can extract the complete information of the image, making detection more accurate and reducing the probability of mistaking background information. Yolov4 is a target detection algorithm released by Alexey Bochkovskiy in April 2020. The Yolov4 target detection network mainly consists of a backbone feature extraction network, an SPP (Spatial Pyramid Pooling) enhanced feature extraction structure, a PANet (Path Aggregation Network) enhanced feature extraction structure, and a feature prediction structure, totalling 161 layers, as shown in fig. 3.
Because the Yolov4 backbone feature extraction network provides only three effective feature layers, all at the deepest levels, the receptive field of the shallowest of them is large, which easily loses some small-defect feature information and gives poor detection of cracks among the engine defect types; an effective feature layer with a smaller receptive field therefore needs to be added. Meanwhile, the Yolov4 backbone adopts a Resblock_body module structure based on residual modules and does not fully reuse shallow effective features, so the Resblock_body structure is replaced with a Denseblock_body structure based on dense connection modules (DenseNet) to enhance the reuse of shallow effective features.
The Yolov4 network is optimized in three respects, as shown in fig. 4:
(1) optimizing the backbone feature extraction network: the dense convolutional network DenseNet establishes compact connections between all front and back layers, reusing shallow features by concatenating the convolved feature layers along the channel dimension;
(2) adding a shallow effective feature layer with a smaller receptive field, increasing the three effective feature layers of Yolov4 to four, which improves the recognition of small defects such as cracks and yields the optimized Yolov4-B network shown in fig. 5; the optimized Yolov4-B network focuses more accurately on the target region and detects better;
(3) clustering the prior boxes with an optimized K-means algorithm, adjusting the 9 prior boxes over 3 feature layers in Yolov4 to 8 prior boxes over 4 feature layers, so that the model's detection precision and detection speed are better balanced.
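The dense-connection idea behind replacing Resblock_body with Denseblock_body can be illustrated with a toy numpy sketch: every "layer" concatenates all earlier feature maps along the channel axis, so shallow features are reused by every later layer. The mean-pooling stand-in for a real BN-ReLU-Conv layer and all shapes below are illustrative assumptions, not the network's actual operations.

```python
import numpy as np

def dense_block(x, num_layers, growth):
    """Toy dense connection: each layer sees the channel-wise concatenation of
    the input and all previous layer outputs, and contributes `growth` new
    channels. A channel-mean broadcast stands in for the real convolution."""
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)   # reuse all earlier features
        out = np.repeat(inp.mean(axis=-1, keepdims=True), growth, axis=-1)
        features.append(out)
    return np.concatenate(features, axis=-1)

x = np.ones((8, 8, 16), dtype=np.float32)
y = dense_block(x, num_layers=3, growth=32)       # channels: 16 + 3 * 32
```

The channel count grows linearly with depth (16 + 3 x 32 = 112 here), which is the mechanism that lets the optimized backbone keep shallow crack features available to deep layers.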
Clustering is the task of dividing a data set into groups. In the engine defect data set, clustering groups the ground-truth boxes of all image data so that boxes within a group are approximately equal in size; the final clustering result provides the prior boxes of the Yolo model. The K-means clustering algorithm works as follows: first, K cluster centers are set randomly or manually; each data point in the data set is then assigned to its nearest cluster center; each center is then updated to the mean of its assigned points; and this repeats until the centers no longer change.
For the target detection model, because the size of a target box is not fixed, the Euclidean distance used by the standard K-means algorithm is unsuitable for measuring the distance between prior boxes. The distance function of the prior boxes is therefore redefined using the concept of IoU, as shown in Eq. (5):
dis(box1, box2) = 1 - IoU(box1, box2)  (5)
the method comprises the following steps: firstly, calculating the distance between any two samples according to an IoU distance formula, and calculating the absolute value of the distance average difference between any two samples and the total sample distance according to the sample mean value, as shown in formula (6); and then selecting the first K samples with the largest absolute value as the centers of the initial clusters, allocating each data point to the cluster center with the closest distance, recalculating the cluster centers, modifying the cluster centers into the average values of the allocated points, and then iterating until the cluster centers are not changed any more.
Figure BDA0003192627030000081
In the formula:
Figure BDA0003192627030000082
the IoU distance average is the average of all defect samples in the engine defect data set.
Through optimized clustering, the number of traditional prior frames is adjusted from 9 to 8 on the premise of not changing the clustering effect, and the clustering result is shown in the formula (7).
Figure BDA0003192627030000083
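The prior-box clustering with the 1 - IoU distance of Eq. (5) can be sketched in pure numpy over (width, height) pairs. The deterministic spread-out initialization below stands in for the patent's mean-deviation seeding of Eq. (6), whose exact form is not fully recoverable from the source; the sample boxes are invented for illustration.

```python
import numpy as np

def iou_wh(box, clusters):
    """IoU between one (w, h) box and k cluster boxes, all anchored at the
    origin, the usual convention for prior-box clustering."""
    inter = np.minimum(box[0], clusters[:, 0]) * np.minimum(box[1], clusters[:, 1])
    union = box[0] * box[1] + clusters[:, 0] * clusters[:, 1] - inter
    return inter / union

def kmeans_iou(boxes, k, iters=100):
    """K-means over (w, h) boxes using the d = 1 - IoU distance of Eq. (5).
    Initialization picks evenly spaced samples instead of the patent's
    largest-deviation seeding (an assumption made for determinism)."""
    clusters = boxes[np.linspace(0, len(boxes) - 1, k).astype(int)].copy()
    for _ in range(iters):
        dists = np.stack([1.0 - iou_wh(b, clusters) for b in boxes])
        assign = dists.argmin(axis=1)              # nearest cluster per box
        new = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                        else clusters[j] for j in range(k)])
        if np.allclose(new, clusters):             # centers stopped moving
            break
        clusters = new
    return clusters

boxes = np.array([[10, 10], [12, 12], [11, 11],
                  [100, 100], [110, 110], [105, 105]], dtype=float)
priors = kmeans_iou(boxes, k=2)
```

With k = 8 over the engine defect boxes, the returned cluster centers would be the 8 prior boxes distributed over the 4 feature layers of Yolov4-B.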
Next, the Yolov4-B network model is trained. Building the experimental environment platform of the engine defect detection model in this embodiment involves four steps: (1) installing the Anaconda Python distribution; (2) installing the deep learning framework TensorFlow; (3) installing the CUDA acceleration library; (4) installing an editor. The main hardware and software versions of the deep learning environment are listed in Table 1:
TABLE 1 major hardware and software versions
(Table 1 is rendered as an image in the original document; the hardware and software versions it lists are not reproduced here.)
After the model is built it is trained; the training process consists of two parts, model pre-training and transfer-learning training. Pre-training means initially training the model on an existing large-scale data set to obtain pre-training weights. Because the engine defect data set itself is small, training the detection model from scratch on the hole-detection defect images would leave the model parameters randomly initialized, leading to slow training, easy overfitting of the training result, and similar drawbacks.
The Yolov4 detection model and the Yolov4-B optimized model are pre-trained on the COCO data set (Common Objects in Context). COCO is a large-scale object detection and segmentation data set containing 80 classes and more than 200,000 labeled images, and is one of the most authoritative public data sets in the fields of target recognition and detection. Through pre-training, the model produces a corresponding weight file; these weights capture general features, such as the neuron parameters for edge detection and the weight and bias values. The Yolov4-B model is pre-trained on the COCO data set, and after the pre-trained weights are obtained, transfer-learning training is carried out.
Transfer learning transfers the knowledge learned on task A (the COCO data set) to task B (engine defect detection) to improve generalization on task B. The model already trained on task A, with its structure and weights retained, is called the pre-trained model. Because the pre-trained model has been trained on a large data set, it overcomes the drawback of randomly initialized weights during transfer learning, reduces overfitting after transfer, and shortens the training time of task B. Compared with training from scratch, transfer learning allows the model to achieve good results with very little data. Therefore, in this embodiment the model is first pre-trained; after the pre-trained model is obtained, the relevant parameters are set and transfer-learning training begins, continuing until the optimal engine defect detection model is obtained.
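The pre-train / transfer-learn split can be illustrated with a framework-free sketch, using plain dicts as stand-ins for weight tensors. All names, values, and the fake update step are illustrative assumptions, not the patent's actual TensorFlow code.

```python
def build_model():
    """Toy detector: a feature-extraction backbone plus a detection head."""
    return {"backbone": {"w": 0.0}, "head": {"w": 0.0}}

def load_pretrained(model, pretrained):
    """Copy only the backbone weights (the general features learned on task A,
    i.e. COCO); the head stays freshly initialized for the defect classes."""
    model["backbone"] = dict(pretrained["backbone"])
    return model

def train_step(model, lr=0.1, freeze_backbone=True):
    """One fake gradient step: frozen parts are not updated."""
    if not freeze_backbone:
        model["backbone"]["w"] += lr
    model["head"]["w"] += lr
    return model

coco_weights = {"backbone": {"w": 0.7}}   # stand-in for a pre-trained weight file
model = load_pretrained(build_model(), coco_weights)
model = train_step(model)                 # transfer-learning step: only the head moves
```

The point of the sketch is the asymmetry: the backbone arrives already trained and is initially kept fixed, while the head is learned from the small engine defect data set.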
The local area network in this embodiment is built with the ZMQ library of Python, as shown in FIG. 6. Video frame data must be encoded several times before transmission over the local area network: the collected image is first encoded and converted into stream data, and the stream data is then Base64-encoded, so that the image data the Raspberry Pi (i.e., the front end) collects from the hole detector can be transmitted over the local area network. The receiving end need only be deployed with the corresponding protocol and decoding rules to complete the function of receiving video frames.
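The encode-then-Base64 pipeline can be sketched as below. To stay dependency-free the frame is serialized raw rather than JPEG-encoded with OpenCV, and the `height,width|payload` framing is an assumed protocol standing in for the patent's unspecified one; with pyzmq, the output of `encode_frame` would be passed to the socket's send call and `decode_frame` applied to what the server receives.

```python
import base64
import numpy as np

def encode_frame(frame):
    """Front-end (Raspberry Pi) side: serialize the grayscale frame to bytes,
    then Base64-encode it so the payload survives text-oriented transport."""
    header = "{},{}".format(*frame.shape).encode()
    return header + b"|" + base64.b64encode(frame.tobytes())

def decode_frame(message):
    """Server side: apply the agreed protocol in reverse to restore the frame."""
    header, payload = message.split(b"|", 1)
    h, w = (int(v) for v in header.decode().split(","))
    return np.frombuffer(base64.b64decode(payload), dtype=np.uint8).reshape(h, w)

frame = np.arange(64, dtype=np.uint8).reshape(8, 8)
restored = decode_frame(encode_frame(frame))   # in-process round trip
```

The round trip being lossless is what allows the server to run the detection model on exactly the frames the hole detector captured.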
The server obtains the hole-detection video data from the Raspberry Pi sender through this process, and after recognition by the deployed optimal engine defect detection model, displays it locally on the server, achieving remote cooperative work between different hole-detection operators at the Raspberry Pi end and the server end. If the server is additionally configured for intranet penetration, a qualified hole-detection expert can provide remote assistance, giving further defect diagnosis in difficult cases where the neural network model's recognition is unsatisfactory or a new operator is uncertain.
After recognition is complete, the server sends the marked video frames back to the Raspberry Pi, where they are displayed through a UI interface on the screen carried by the Raspberry Pi. In this embodiment the UI is built on the PyQT5 framework and integrates video display, network transceiving, and start, stop and video screenshot functions. The hole-detection operator can then capture the recognized frames with defects on the UI interface, save them, and send them for review.
Example 2
In this embodiment, an intelligent hole-detection auxiliary system for a gas turbine engine is constructed, as shown in fig. 7, comprising a data set construction module, a defect detection model building and training module, a local area network building module, a server-side operation and display module, and a Raspberry Pi-side UI interface. The data set construction module sequentially performs data sample collection, processing and labeling on the hole-detection data to obtain a data set for training a neural network. The defect detection model building and training module builds the experimental environment platform of the engine defect detection model and trains the model to obtain an optimal engine defect detection model. The local area network building module sends the frame information produced by encoding the hole-detector video signal on the Raspberry Pi to the server side. The server-side operation and display module decodes the received frame information to obtain hole-detection video frames, identifies and marks defects in them with the optimal engine defect detection model, and sends the marked frames back to the Raspberry Pi. The Raspberry Pi-side UI interface receives the marked hole-detection video frames and displays them on the Raspberry Pi's screen through the UI interface.
Further, the data set construction module comprises a data sample collection unit, a data sample processing unit and a data sample labeling unit. The data sample collection unit collects the engine's hole-detection data and transmits it to the data sample processing unit; the data sample processing unit performs data noise reduction, de-duplication and data enhancement on the hole-detection data and then transmits it to the data sample labeling unit; and the data sample labeling unit identifies and labels the defects in the hole-detection data to obtain the data set for training the neural network.
The data sample processing unit extracts key frames from the hole-detection video data by the background difference method or the inter-frame difference method and then performs data noise reduction and de-duplication, which avoids having to store every frame despite the large amount of redundant background information in the collected video.
Further, the defect detection model building and training module comprises a model pre-training module and a model transfer learning module. The model pre-training module performs initial training on the model with a large-scale data set to obtain a pre-training model and its pre-training weights; the model transfer learning module sets the relevant parameters of the secondary training from the pre-training weights and obtains the optimal engine defect detection model after transfer learning training.
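The pre-train-then-transfer split can be illustrated with a deliberately tiny numpy stand-in (nothing here is taken from the patent): a frozen random "backbone" plays the role of weights obtained by pre-training on a large-scale data set, and the secondary training pass updates only the new classification head.

```python
import numpy as np

rng = np.random.default_rng(0)
W_backbone = rng.normal(size=(4, 8))      # "pre-trained" weights, kept frozen

def features(x):
    return np.tanh(x @ W_backbone)        # frozen feature extractor

# Small toy "engine defect" training set: label depends on the first column.
X = rng.normal(size=(64, 4))
y = (X[:, 0] > 0).astype(float)

w_head = np.zeros(8)                      # only these weights are trained
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(features(X) @ w_head)))
    w_head -= 0.5 * features(X).T @ (p - y) / len(y)   # logistic-loss gradient

pred = features(X) @ w_head > 0
accuracy = (pred == (y > 0.5)).mean()
```

With real models the same pattern appears as loading the pre-training weights, freezing the backbone layers, and fine-tuning the detection head at a reduced learning rate.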
At present, whether an aircraft engine has defects is mainly judged from the experience of hole detection personnel, and human error can cause missed or false detections, posing a serious hidden danger to flight safety. Traditional hole detection image processing technology relies on expert experience to identify defect characteristics and can hardly meet the requirement of identifying the damage type and damage location in the actual hole detection process of the engine. In contrast, the invention provides an intelligent hole detection auxiliary system and method for a gas turbine engine: image defects are automatically identified by a computer to assist manual hole detection, which reduces missed detections caused by human factors, is greatly superior in detection stability, detection precision and labor cost, enables real-time data transmission, and meets the requirement of remote multi-person cooperative work. In addition, detection precision is improved by optimizing the Yolov4 network model, and the optimized model is deployed on the server side to identify engine defects.
The embodiments in the present description are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts among the embodiments may be referred to each other. Since the device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief; for relevant details, reference may be made to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A gas turbine engine hole detection intelligent detection auxiliary system, comprising: a data set construction module, a defect detection model building and training module, a local area network building module, a server-side operation display module and a front-end UI (user interface);
the data set construction module is used for sequentially carrying out data sample collection, data sample processing and data sample labeling on the hole detection data to obtain a data set used for training a neural network;
the defect detection model building and training module is used for building an experimental environment platform of an engine defect detection model and training the engine defect detection model based on the data set used for training the neural network to obtain an optimal engine defect detection model;
the defect detection model building and training module comprises a model pre-training module and a model transfer learning module; the model pre-training module performs initial training on the model by adopting a large-scale data set to obtain a pre-training model and obtain a pre-training weight; the model transfer learning module sets relevant parameters of model secondary training through the pre-training weight, and obtains an optimal engine defect detection model after transfer learning training;
the local area network building module is used for sending the frame information, obtained by encoding the video signal of the hole detector at the front end, to the server side;
the server-side operation display module is used for decoding the received encoded frame information to obtain a hole detection video frame, identifying and marking defects in the hole detection video frame through the optimal engine defect detection model, and sending the marked hole detection video frame to the front end;
the front-end UI is used for receiving the hole detection video frame with the mark and displaying the hole detection video frame on a screen carried at the front end through the UI;
the engine defect detection model is obtained based on a Yolov4 target detection network, and the Yolov4 target detection network comprises a backbone feature extraction network structure, an SPP enhanced feature extraction network structure, a PANet enhanced feature extraction network structure and a feature prediction structure, which together comprise 161 layers;
the engine defect detection model is the result of optimizing the Yolov4 target detection network, and the optimization process comprises the following steps: replacing the shallow Resblock_body module structure of the backbone feature extraction network structure with a Denseblock_body module structure based on dense connection modules; increasing the number of effective feature layers in the Yolov4 target detection network from three to four to obtain an optimized Yolov4-B network; and clustering the prior boxes by an optimized K-means algorithm, adjusting the 9 prior boxes over the 3 feature layers of the Yolov4 target detection network to 8 prior boxes over the 4 feature layers;
the method for clustering the prior frames by adopting the optimized K-means algorithm specifically comprises the following steps: redefining the distance function of the prior box through the concept of IoU, wherein the formula is shown as (5):
dis(box1,box2)=1-IoU(box1,box2) (5);
the method comprises the following steps: firstly, calculating the distance between any two samples according to an IoU distance formula, and calculating the absolute value of the distance average difference between any two samples and the total sample distance according to the sample mean value, as shown in formula (6); then, selecting the first K samples with the largest absolute value as the centers of initial clustering, allocating each data point to the clustering center with the closest distance, recalculating the clustering centers, modifying the clustering centers into the average values of the allocated points, and then iterating until the clustering centers are not changed;
dis_i = |avg_dis(box_i) - avg_dis|  (6)
in the formula: avg_dis(box_i) is the average IoU distance from sample box_i to all other samples, and avg_dis is the average IoU distance over all defect samples in the engine defect data set.
2. The gas turbine engine borescope intelligent detection assistance system of claim 1, wherein the data set construction module comprises a data sample collection unit, a data sample processing unit and a data sample labeling unit;
the data sample collection unit collects hole detection data of an engine and transmits the hole detection data to the data sample processing unit;
the data sample processing unit is used for carrying out data noise reduction, de-duplication and data enhancement on the hole detection data and then transmitting the hole detection data to the data sample labeling unit;
and the data sample marking unit identifies and marks the defects in the hole detection data to obtain a data set for training a neural network.
3. The gas turbine engine borescope intelligent detection assistance system of claim 1, wherein the borescope data is borescope image data and/or borescope video data.
4. The gas turbine engine hole detection intelligent detection auxiliary system of claim 3, wherein the data sample processing unit performs data de-noising and de-duplication after performing key frame extraction on the hole detection video data by a background difference method or an inter-frame difference method.
5. The gas turbine engine hole detection intelligent detection auxiliary system of claim 1, wherein the local area network building module is programmed using the ZMQ library of python and deployed at the receiving end according to the corresponding protocol and decoding rule, receives the frame information obtained by front-end encoding, and sends it to the server side.
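The claim names python's ZMQ library, which frames messages itself. As a hedged stdlib illustration of the same receive-end contract, the sketch below length-prefixes each encoded video frame so the receiver recovers it intact; the socket pair and payload are invented stand-ins for the front-end-to-server hop.

```python
import socket
import struct

def send_frame(sock, payload: bytes):
    # 4-byte big-endian length header followed by the encoded frame bytes
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def _recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock) -> bytes:
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

tx, rx = socket.socketpair()              # loopback stand-in for the LAN link
send_frame(tx, b"encoded-borescope-frame")
received = recv_frame(rx)
tx.close(); rx.close()
```

With pyzmq the same exchange would be a PUSH/PULL or PUB/SUB pair sending the encoded frame as a single message, with no manual length prefix needed.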
6. A gas turbine engine bore detection intelligent detection auxiliary method is characterized by comprising the following steps:
s1, sequentially carrying out data sample collection, data sample processing and data sample labeling on the hole detection data to obtain a data set for training a neural network;
s2, building an experimental environment platform of an engine defect detection model, and training the engine defect detection model based on the data set obtained in S1 to obtain an optimal engine defect detection model running at a server side;
s3, carrying out front-end coding on the video signal of the engine hole detector to obtain frame information, and sending the frame information to a server through a local area network;
s4, decoding the received frame information to restore the hole detection video frame, and identifying and marking defects in the hole detection video frame by using the optimal engine defect detection model running at the server side;
and S5, the server side sends the marked hole detection video frame to the front end and displays it on a screen carried by the front end through the UI interface; the marked hole detection video frame can be captured on the UI interface, stored, and sent out for review.
7. The method as claimed in claim 6, wherein in S2, building an experimental environment platform of an engine defect detection model comprises: installing the Python interpreter Anaconda, installing the deep learning framework tensorflow, installing the CUDA acceleration library and installing an editor.
8. The intelligent hole detection auxiliary method for a gas turbine engine according to claim 6, wherein in S3, the front-end encoding of the video signal of the engine hole detector to obtain frame information comprises: encoding the video signal of the engine hole detector, converting the encoded video signal into stream data, and then carrying out Base encoding on the stream data to obtain the frame information transmitted in the local area network.
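Assuming "Base encoding" refers to Base64 (the claim does not spell this out), the encode-to-stream-then-Base-encode pipeline of claim 8 can be sketched as follows, with a placeholder byte string standing in for the compressed video stream.

```python
import base64

# Hypothetical stream data; in practice this would come from encoding a hole
# detector video frame (e.g. as JPEG) and flattening it to bytes.
stream_data = b"\xff\xd8 placeholder jpeg stream"

frame_info = base64.b64encode(stream_data)   # text-safe frame information for the LAN
restored = base64.b64decode(frame_info)      # server-side decode step
```

Base64 inflates the payload by roughly one third but keeps the frame information safe to carry over text-oriented transports.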
CN202110882652.2A 2021-08-02 2021-08-02 Hole detection intelligent detection auxiliary system and method for gas turbine engine Active CN113591992B (en)

Publications (2)

Publication Number Publication Date
CN113591992A CN113591992A (en) 2021-11-02
CN113591992B true CN113591992B (en) 2022-07-01





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant