CN116168010A - Deep learning-based component damage detection method and system - Google Patents


Info

Publication number
CN116168010A
CN116168010A (application CN202310239705.8A)
Authority
CN
China
Prior art keywords
damage
aerial vehicle
unmanned aerial
aircraft component
aircraft
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310239705.8A
Other languages
Chinese (zh)
Inventor
魏永超
邓春艳
敖良忠
张娅岚
夏桂书
刘家伟
莫杜衡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Civil Aviation Flight University of China
Original Assignee
Civil Aviation Flight University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Civil Aviation Flight University of China filed Critical Civil Aviation Flight University of China
Priority to CN202310239705.8A priority Critical patent/CN116168010A/en
Publication of CN116168010A publication Critical patent/CN116168010A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0004 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30108 - Industrial image inspection
    • G06T 2207/30164 - Workpiece; Machine component
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep-learning-based component damage detection method and system in the technical field of damage detection, comprising the following steps: collecting image data of aircraft components as initial data, using an unmanned aerial vehicle carrying a camera; preprocessing the initial data to obtain a training data set; building a deep convolutional neural network and inputting the training data set into it for model training until the model converges, obtaining an optimal aircraft component damage detection model; and identifying damage to aircraft components with the optimal model, determining the damage type and the damage position. With this technical solution, damage to the aircraft structure can be detected rapidly and finely, the damage type and position confirmed, damage detection accuracy improved, damage detection made intelligent, and the flight safety of the aircraft ensured.

Description

Deep learning-based component damage detection method and system
Technical Field
The invention relates to the technical field of damage detection, and in particular to a method and system for detecting component damage based on deep learning.
Background
The aircraft is one of the most important scientific and technological achievements of the early 20th century. It has become an indispensable means of transport in modern civilization and has profoundly changed and influenced people's lives; at the same time, it is an important weapon in modern military affairs, used for reconnaissance and bombing and performing well in early warning, anti-submarine warfare, minesweeping and similar roles.
During use, an aircraft structure is often damaged by overload, incorrect handling, improper maintenance and the like, producing cracks, deformation, impact damage or burns. Such damage reduces the strength and rigidity of the structure and degrades the aircraft's aerodynamic performance. If damage cannot be effectively prevented, and faults inspected and eliminated, it poses a serious hidden danger to flight safety; structural damage must therefore be repaired in time to keep the aircraft in a good serviceable condition.
Structural damage detection detects and identifies the degree of damage to a damaged aircraft and provides a basis for formulating and implementing repair schemes. At present, the common inspection method for aircraft structural damage is visual inspection: the surface of the inspected object is observed directly by eye, or indirectly with optical equipment such as endoscopes and mirrors. For example, the invention patent with publication number CN114851603A, a field repair method for hole damage in organic-glass aircraft canopies, determines the hole damage area by visual inspection and a crazing detector, and designs a dedicated mosaic patch and reinforcing-patch bonded joint to achieve rapid and effective field repair of the hole damage, reducing aircraft maintenance cost while ensuring repair quality.
Visual inspection is the most widely used technique in structural inspection, with advantages such as a low entry threshold, ease of operation and low cost. However, it depends closely on subjective factors such as the skill and experience of the technician, is easily affected by objective conditions such as viewing position, lighting and temperature, and has obvious shortcomings in detection probability and accuracy for small-size damage. Moreover, aircraft structural damage detection is not only concerned with the presence or absence of damage; the size, location and type of damage must also be measured accurately. How to overcome the shortcomings of the traditional manual inspection approach, confirm the damage type and position of a component, improve damage detection accuracy and ensure the flight safety of the aircraft is therefore a technical problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides a deep-learning-based component damage detection method and system that can rapidly and finely detect damage to an aircraft structure, confirm the damage type and position, improve damage detection accuracy, make damage detection intelligent, and ensure the flight safety of the aircraft.
In order to achieve the above object, the present invention provides the following technical solutions:
a member damage detection method based on deep learning comprises the following steps:
collecting image data of an aircraft component by using an unmanned plane carrying a camera as initial data;
preprocessing initial data to obtain a training data set;
building a deep convolutional neural network, inputting a training data set into the deep convolutional neural network for model training until the model converges, and obtaining an optimal aircraft component damage detection model;
and identifying the damage of the aircraft component by using the optimal aircraft component damage detection model, and determining the damage type and the damage position.
The technical effect achieved by the above technical solution is: by training a model on images, damage to aircraft components can be detected, the damage type and position determined, and damage detection made intelligent.
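The four-step flow above can be sketched as a minimal pipeline skeleton in Python. Every callable name here (`collect`, `preprocess`, `train`, `predict`) is a hypothetical stand-in for a stage described in the text, not an interface disclosed by the patent:

```python
def detect_component_damage(collect, preprocess, train, predict):
    """Skeleton of the four-step method; each callable is a hypothetical
    stand-in for a stage named in the patent text."""
    initial_data = collect()                 # UAV-mounted camera images
    training_set = preprocess(initial_data)  # screen, label, augment
    model = train(training_set)              # train CNN until convergence
    return predict(model)                    # damage type and position

# Toy stand-ins so the skeleton runs end to end.
result = detect_component_damage(
    collect=lambda: ["img1", "img2"],
    preprocess=lambda imgs: [(i, "crack") for i in imgs],
    train=lambda ds: {"trained_on": len(ds)},
    predict=lambda m: ("crack", (120, 45)),
)
print(result)  # ('crack', (120, 45))
```

In a real system each stand-in would wrap the drone control, labelling and network-training machinery described in the following optional features.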
Optionally, the unmanned aerial vehicle comprises: a body, rotors, a flight controller, a camera and a wireless transmitter;
the rotors, which keep the vehicle airborne, are arranged symmetrically on both sides of the body;
the flight controller, which controls the rotors, is mounted inside the body;
the camera is mounted on the body via a two-axis rotating gimbal consisting of a horizontal turntable and a pitch turntable arranged at a spacing;
the wireless transmitter is mounted inside the body and communicates with the ground base station for control signalling and image data transmission.
The technical effect achieved by the above technical solution is: the unmanned aerial vehicle can replace manual operation, works efficiently and conveniently, and can complete tasks in complex environments.
Optionally, acquiring image data of the aircraft components specifically comprises the following steps:
presetting task target points and planning an obstacle-free initial track;
controlling the unmanned aerial vehicle to make a trial flight around the aircraft along the initial track, shooting and collecting initial images;
determining the optimal detour inspection path, inspection height and scanning area of the unmanned aerial vehicle, and the key target shooting points, according to the structure of the particular airframe, the quality of the trial-flight images and the obstacle information along the initial track;
controlling the unmanned aerial vehicle to fly along the optimal detour inspection path at the determined inspection height and, at each key target shooting point, to hover and aim the camera at the target for shooting;
and acquiring, with the onboard camera, image data of aircraft components at various shooting angles, for various aircraft models and under various environments.
The technical effect achieved by the above technical solution is: planning the detour inspection path around the aircraft lets the vehicle avoid obstacles and collect clearer, more complete component images; training the model on the resulting data set then yields a damage detection model with higher detection accuracy.
Optionally, preprocessing the initial data specifically comprises the following steps:
taking each collected frame of aircraft component imagery as a sample, and forming a first data set from all component images;
screening, from the component images of the first data set, the original images in which damage can be identified, to form a second data set;
labelling the damage type and damage position in the screened original images, the labelled samples forming a third data set; the damage types include deformation, scoring, corrosion, rivet damage and paint peeling;
and cropping every labelled region from each component image in the third data set, then rotating, flipping and randomly adjusting the exposure of the cropped images in turn, to obtain the training data set.
The technical effect achieved by the above technical solution is: the training data set is obtained by screening and labelling images in which damage is clear and recognizable, followed by denoising, enhancement, data-set expansion and similar operations; training on the preprocessed images yields a damage detection model with higher accuracy.
Optionally, the deep convolutional neural network adopts a U-Net semantic segmentation architecture comprising an encoder, a decoder and skip connections;
the encoder adopts a ResNet-50 architecture and encodes multi-level semantic features of the input image; the decoder predicts the damage type of every pixel of the input image from the encoder's multi-level semantic features and the image features passed over the skip connections;
model training specifically comprises the following steps:
inputting the training data set into the encoder of the U-Net semantic segmentation network, which extracts feature information from the aircraft component images step by step through several successive convolutional downsampling stages;
upsampling the feature information several times in the decoder of the U-Net, fusing shallow and deep features and restoring the image information;
and iteratively updating the model weights by gradient descent until the loss function converges, completing training and optimization and obtaining the optimal aircraft component damage detection model.
Optionally, the deep convolutional neural network comprises, connected in sequence: a backbone feature extraction network, an enhanced feature extraction network and a prediction network;
the backbone feature extraction network is a VoVNetV2-39 network that extracts preliminary features from the aircraft component images;
the enhanced feature extraction network comprises an SPP module and a PANet module that fuse the preliminary features and extract effective features;
the prediction network comprises a YOLO Head module that derives the damage information in the component image from the effective features;
model training specifically comprises the following steps:
predicting the damage information of the component images in the training set with the built initial damage detection model, to obtain predicted component damage information;
calculating a loss function value from the predicted component damage information and the damage information labelled in the training data set;
and judging whether the loss function value meets the preset requirement; if not, updating the network weights of every layer of the initial damage detection model according to the loss value and predicting the component damage information again; if so, ending training and obtaining the optimal aircraft component damage detection model.
Optionally, a binary cross entropy function is used as the loss function for the training process, wherein the binary cross entropy function is:

$$L_{loss} = -\frac{1}{N}\sum_{i=1}^{N}\left[L_i\log y_i + (1-L_i)\log(1-y_i)\right]$$

wherein $L_{loss}$ represents the loss value, $N$ the total number of pixels of an aircraft component image, $L_i$ the label of the i-th pixel, and $y_i$ the predicted probability value for the i-th pixel.
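The per-pixel binary cross entropy can be computed directly from its definition. A minimal pure-Python sketch, with labels and probabilities as flat lists standing in for image pixels:

```python
import math

def binary_cross_entropy(labels, probs):
    """Mean binary cross entropy over N pixels:
    L_loss = -(1/N) * sum(L_i*log(y_i) + (1-L_i)*log(1-y_i))."""
    n = len(labels)
    eps = 1e-12  # clamp probabilities to avoid log(0)
    total = 0.0
    for l, y in zip(labels, probs):
        y = min(max(y, eps), 1.0 - eps)
        total += l * math.log(y) + (1.0 - l) * math.log(1.0 - y)
    return -total / n

# Confident correct predictions give a small loss.
print(round(binary_cross_entropy([1, 0, 1], [0.9, 0.1, 0.8]), 4))  # 0.1446
```

In practice a framework implementation would be applied over whole tensors, but the arithmetic is the same term by term.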
The invention also discloses a deep-learning-based component damage detection system comprising: an acquisition module, a preprocessing module, a construction module, a training module and a detection module;
the acquisition module acquires image data of aircraft components through an unmanned aerial vehicle carrying a camera, as initial data;
the preprocessing module preprocesses the initial data to obtain a training data set;
the construction module builds the deep convolutional neural network;
the training module inputs the training data set into the deep convolutional neural network for model training until the model converges, obtaining the optimal aircraft component damage detection model;
and the detection module identifies damage to aircraft components through the optimal aircraft component damage detection model and determines the damage type and the damage position.
Optionally, the unmanned aerial vehicle comprises: a body, rotors, a flight controller, a camera and a wireless transmitter;
the rotors, which keep the vehicle airborne, are arranged symmetrically on both sides of the body;
the flight controller, which controls the rotors, is mounted inside the body;
the camera is mounted on the body via a two-axis rotating gimbal consisting of a horizontal turntable and a pitch turntable arranged at a spacing;
the wireless transmitter is mounted inside the body and communicates with the ground base station for control signalling and image data transmission.
Compared with the prior art, the deep-learning-based component damage detection method and system disclosed by the invention have the following beneficial effects:
(1) Addressing the problems of traditional component damage inspection methods, the invention associates image information with damage information, builds and trains a deep convolutional neural network, judges the damage type and position of aircraft components, improves damage detection accuracy and ensures the flight safety of the aircraft;
(2) An unmanned aerial vehicle carrying a camera acquires the aircraft component images; by finding the vehicle's optimal detour inspection path and image acquisition scheme, it can avoid obstacles and collect clearer, more complete image data; the initial data are preprocessed, the damage types and positions labelled, and the resulting training data set used to train the deep convolutional network, yielding a more accurate damage detection model and improving the accuracy of aircraft component damage detection;
(3) The invention provides initial damage detection model structures and specific training schemes; model training on the constructed training data set yields the optimal aircraft component damage detection model at convergence, and the improved model structure both raises detection accuracy and remains suitable for terminals with limited computing power.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a deep learning-based component damage detection method provided by the invention;
fig. 2 is a block diagram of the deep learning-based component damage detection system provided by the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Common structural damage on aircraft includes cracks, delamination, corrosion, pits and scratches; detecting such damage with a reliable and accurate inspection method is important for ensuring the safe operation of the aircraft. A typical visual inspection examines interior and exterior areas, devices or accessories within arm's length for obvious damage, failure or anomalies, under normal lighting conditions or, where required, under direct light of a certain intensity, and may require removing or opening access covers or hatches. However, the resolution of the human eye is low, good lighting conditions are required, and detection accuracy is difficult to guarantee.
Therefore, an embodiment of the invention discloses a deep-learning-based component damage detection method which, as shown in fig. 1, comprises the following steps:
collecting image data of aircraft components as initial data, using an unmanned aerial vehicle carrying a camera;
preprocessing the initial data to obtain a training data set;
building a deep convolutional neural network and inputting the training data set into it for model training until the model converges, obtaining an optimal aircraft component damage detection model;
and identifying damage to aircraft components with the optimal aircraft component damage detection model, determining the damage type and the damage position.
An unmanned aerial vehicle is an aircraft controlled mainly by radio remote control or by its own program; it has advantages such as small size, low cost and convenience of use, and its special shooting angles reach places that manual inspection cannot. As a novel remote-sensing monitoring platform, it is highly automated in flight operation, can fly and capture images autonomously along a preset route, and provides remote-sensing monitoring data and low-altitude video monitoring in real time, with strong maneuverability, convenience, low cost and similar characteristics.
Therefore, to acquire image data of aircraft components, the unmanned aerial vehicle employed in this embodiment comprises: a body, rotors, a flight controller, a camera and a wireless transmitter;
the rotors, which keep the vehicle airborne, are arranged symmetrically on both sides of the body;
the flight controller, which controls the rotors, is mounted inside the body;
the camera is mounted on the body via a two-axis rotating gimbal consisting of a horizontal turntable and a pitch turntable arranged at a spacing;
the wireless transmitter is mounted inside the body and communicates with the ground base station for control signalling and image data transmission.
Specifically, the ground base station comprises a base station body housing a GPS module, a data transmission module, a 4G module and a router module. The GPS module positions the base station and calibrates the unmanned aerial vehicle's coordinates by differential GPS; the data transmission module handles communication between the base station, the ground station and the vehicle, transmitting information in real time; the 4G module receives the 4G network and obtains the area map; and the router module generates a Wi-Fi signal to connect to the ground station.
Further, acquiring the image data of the aircraft components specifically comprises the following steps:
presetting task target points and planning an obstacle-free initial track;
controlling the unmanned aerial vehicle to make a trial flight around the aircraft along the initial track, shooting and collecting initial images;
determining the optimal detour inspection path, inspection height and scanning area of the unmanned aerial vehicle, and the key target shooting points, according to the structure of the particular airframe, the quality of the trial-flight images and the obstacle information along the initial track;
controlling the unmanned aerial vehicle to fly along the optimal detour inspection path at the determined inspection height and, at each key target shooting point, to hover and aim the camera at the target for shooting;
and acquiring, with the onboard camera, image data of aircraft components at various shooting angles, for various aircraft models and under various environments.
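The detour path with hover points can be illustrated geometrically. The sketch below generates evenly spaced waypoints on a circular detour around the aircraft at a fixed inspection height; the function name and the circular shape are illustrative assumptions, since the real path in the text is refined from trial-flight imagery and obstacle information:

```python
import math

def detour_waypoints(center, radius, height, n_points):
    """Hypothetical sketch: evenly spaced hover/shoot waypoints (x, y, z)
    on a circular detour path around the aircraft at a fixed height."""
    cx, cy = center
    return [
        (cx + radius * math.cos(2 * math.pi * k / n_points),
         cy + radius * math.sin(2 * math.pi * k / n_points),
         height)
        for k in range(n_points)
    ]

wps = detour_waypoints(center=(0.0, 0.0), radius=20.0, height=5.0, n_points=8)
print(len(wps), wps[0])  # 8 waypoints, first at (20.0, 0.0, 5.0)
```

A planner in a real system would then displace individual waypoints to clear the obstacles recorded during the trial flight.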
Flight path planning for an unmanned aerial vehicle searches, under given constraints, for an optimal flight path from the starting point to the target point that satisfies the vehicle's maneuvering performance and the environmental information; it is the technical guarantee of autonomous flight. In this embodiment the detour inspection path is planned for the specific task, and the component images are collected along it by the onboard camera, so clearer and more complete aircraft image data can be obtained.
Further, the initial data are preprocessed, specifically comprising the following steps:
taking each collected frame of aircraft component imagery as a sample, and forming a first data set from all component images;
screening, from the component images of the first data set, the original images in which damage can be identified, to form a second data set;
labelling the damage type and damage position in the screened original images, the labelled samples forming a third data set; the damage types include deformation, scoring, corrosion, rivet damage and paint peeling;
and cropping every labelled region from each component image in the third data set, then rotating, flipping and randomly adjusting the exposure of the cropped images in turn, to obtain the training data set.
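The rotate/flip/exposure augmentations in the final step can be sketched on tiny pixel grids. This is a pure-Python illustration of the operations only; a real pipeline would apply them to full images with an imaging library:

```python
def rotate90(img):
    """Rotate a 2-D grid of pixel values 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def hflip(img):
    """Flip a 2-D grid horizontally."""
    return [row[::-1] for row in img]

def adjust_exposure(img, factor, cap=255):
    """Scale pixel intensities by a factor, clamped to the valid range."""
    return [[min(int(p * factor), cap) for p in row] for row in img]

# A cropped 2x2 'damage region' expanded into several augmented samples.
crop = [[10, 20],
        [30, 40]]
augmented = [crop, rotate90(crop), hflip(crop), adjust_exposure(crop, 1.5)]
print(augmented[1])  # [[30, 10], [40, 20]]
```

Each cropped, labelled region thus yields several training samples, which is how the augmentation step expands the data set.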
High-quality component images in which damage can be identified are screened from the initial data, the damage types and positions are labelled, and preprocessing operations such as denoising, image enhancement and normalization are applied, giving more complete data. Training the deep convolutional neural network on the constructed training data set then yields a better damage detection model and further improves the accuracy of aircraft component damage detection.
Further, in one embodiment, the deep convolutional neural network adopts a U-Net semantic segmentation architecture comprising an encoder, a decoder and skip connections;
the encoder adopts a ResNet-50 architecture and encodes multi-level semantic features of the input image; the decoder predicts the damage type of every pixel of the input image from the encoder's multi-level semantic features and the image features passed over the skip connections;
model training specifically comprises the following steps:
inputting the training data set into the encoder of the U-Net semantic segmentation network, which extracts feature information from the aircraft component images step by step through several successive convolutional downsampling stages;
upsampling the feature information several times in the decoder of the U-Net, fusing shallow and deep features and restoring the image information;
and iteratively updating the model weights by gradient descent until the loss function converges, completing training and optimization and obtaining the optimal aircraft component damage detection model.
In this way, various typical damage categories can be identified automatically from aircraft component images, without on-site inspection by staff, making damage identification accurate, efficient and objective; establishing the association between component images and damage judgments also improves the evaluation precision, accuracy and soundness of damage identification.
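The downsample/upsample symmetry of the U-Net embodiment can be traced without any deep-learning framework. The sketch below (an assumption-level illustration, not the patent's network) tracks only the spatial size of the feature maps: each encoder level halves it, each decoder level doubles it and fuses the skip-connected encoder feature of the same size. Channel counts, ResNet-50 blocks and convolutions are omitted:

```python
def unet_shapes(size, levels=4):
    """Trace feature-map sizes through a U-Net-style encoder/decoder:
    encoder halves the size per level, decoder doubles it and fuses the
    equal-sized skip connection, restoring the input resolution."""
    encoder = [size]
    for _ in range(levels):
        size //= 2
        encoder.append(size)
    decoder = []
    for skip in reversed(encoder[:-1]):
        size *= 2
        assert size == skip  # skip connection joins equal-sized maps
        decoder.append(size)
    return encoder, decoder

enc, dec = unet_shapes(256)
print(enc, dec)  # [256, 128, 64, 32, 16] [32, 64, 128, 256]
```

Because the decoder ends at the original resolution, the network can emit a damage-type prediction for every pixel, as the text describes.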
Further, in another embodiment, the constructed deep convolutional neural network comprises a trunk feature extraction network, an enhanced feature extraction network and a prediction network, which are connected in sequence;
the trunk feature extraction network is a VoVNetV2-39 network and is used for extracting preliminary features of the aircraft component images;
the enhanced feature extraction network comprises an SPP module and a PANet module, and is used for fusing the preliminary features and extracting effective features;
the prediction network comprises a YOLO Head module and is used for obtaining the damage information in the aircraft component image from the effective features;
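As a sketch of the SPP idea referenced above (illustrative NumPy, not the patent's implementation): the block max-pools the same feature map at several receptive-field sizes with stride 1 and same padding, then concatenates the results with the input, fusing multi-scale context without changing spatial resolution. The 5/9/13 kernel sizes follow the common YOLOv4 configuration and are an assumption here:

```python
import numpy as np

def max_filter(x, k):
    """Naive k x k max filter with same padding (stride 1)."""
    p = k // 2
    padded = np.pad(x, p, mode="constant", constant_values=-np.inf)
    h, w = x.shape
    return np.array([[padded[i:i + k, j:j + k].max()
                      for j in range(w)] for i in range(h)])

def spp(feature_map, kernels=(5, 9, 13)):
    """SPP block: concatenate the input with max-pooled copies of itself.

    Pooling at several receptive-field sizes fuses multi-scale context;
    stride 1 plus same padding keeps the spatial size fixed.
    """
    pooled = [feature_map] + [max_filter(feature_map, k) for k in kernels]
    return np.stack(pooled)   # "channel" axis first: shape (4, H, W)

fm = np.random.default_rng(0).normal(size=(16, 16))
out = spp(fm)
```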
the model training method specifically comprises the following steps:
predicting damage information of the aircraft component images in the training set by using the built initial damage detection model to obtain component prediction damage information;
calculating a loss function value according to the predicted damage information of the component and the damage information marked in the training data set;
judging whether the loss function value meets the preset requirement, if not, updating the network weight of each layer in the initial damage detection model according to the loss function value, and re-predicting the damage information of the component; if yes, finishing training to obtain an optimal aircraft component damage detection model.
This scheme discloses another model construction and training approach: the aircraft component damage detection model is built by improving the YOLOv4 model, replacing the CSPDarknet53 trunk feature extraction network with a VoVNetV2-39 structure, which has fewer parameters and a shorter top-down feature transmission path, thereby improving detection precision and realizing automatic identification and localization of component damage.
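The predict / compute-loss / check-threshold / update cycle described for this embodiment can be illustrated with a deliberately tiny stand-in model; plain gradient descent on a linear least-squares problem replaces the real detection network and loss, which are far more complex:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: in the patent these are aircraft component images and
# annotated damage labels; here, random features and a known linear target.
X = rng.normal(size=(64, 3))
y_true = X @ np.array([0.5, -1.0, 2.0])

w = np.zeros(3)                   # initial "damage detection model" weights
lr, loss_threshold = 0.1, 1e-4    # preset requirement on the loss value

for step in range(1000):
    y_pred = X @ w                              # step 1: predict
    loss = np.mean((y_pred - y_true) ** 2)      # step 2: loss vs. annotations
    if loss < loss_threshold:                   # step 3: requirement met, stop
        break
    grad = 2 * X.T @ (y_pred - y_true) / len(X)
    w -= lr * grad                              # otherwise update and re-predict
```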
Further, a binary cross entropy function is used as a loss function for the training process, wherein the binary cross entropy function is:
L_loss = -(1/N) Σ_{i=1}^{N} [ L_i ln(y_i) + (1 - L_i) ln(1 - y_i) ]
wherein L_loss represents the loss value, N represents the total number of pixels in an aircraft component image, L_i represents the label value of the i-th pixel, and y_i represents the predicted probability value of the i-th pixel.
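A direct transcription of this binary cross entropy into Python (pure standard library; variable names follow the symbols above) might be:

```python
import math

def binary_cross_entropy(labels, probs):
    """Per-pixel binary cross entropy, averaged over N pixels.

    labels: ground-truth label L_i of each pixel (0 = intact, 1 = damaged)
    probs:  predicted damage probability y_i of each pixel, in (0, 1)
    """
    n = len(labels)
    return -sum(l * math.log(y) + (1 - l) * math.log(1 - y)
                for l, y in zip(labels, probs)) / n

# A confident correct prediction gives a loss near 0;
# a confident wrong one gives a large loss.
low = binary_cross_entropy([1, 0], [0.99, 0.01])
high = binary_cross_entropy([1, 0], [0.01, 0.99])
```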
Corresponding to the method shown in fig. 1, an embodiment of the present invention further provides a deep learning-based component damage detection system for implementing the method of fig. 1. The system may be applied to a computer terminal or to various mobile devices; its structural schematic diagram is shown in fig. 2. The system specifically comprises: an acquisition module, a preprocessing module, a construction module, a training module and a detection module;
the acquisition module is used for acquiring image data of the aircraft component through the unmanned aerial vehicle carrying the camera and taking the image data as initial data;
the preprocessing module is used for preprocessing the initial data to obtain a training data set;
the construction module is used for constructing a deep convolutional neural network;
the training module is used for inputting the training data set into the deep convolutional neural network to perform model training until the model converges to obtain an optimal aircraft component damage detection model;
the detection module is used for identifying the damage of the aircraft component through the optimal aircraft component damage detection model and determining the damage type and the damage position.
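The data flow through the five modules can be sketched as follows (illustrative wiring only; all callables are hypothetical stand-ins, not the patent's implementations):

```python
class DamageDetectionSystem:
    """Sketch of the five-module system described above."""

    def __init__(self, acquire, preprocess, build, train, detect):
        self.acquire = acquire        # acquisition module (UAV + camera)
        self.preprocess = preprocess  # preprocessing module
        self.build = build            # construction module
        self.train = train            # training module
        self.detect = detect          # detection module

    def run(self, uav_feed):
        initial_data = self.acquire(uav_feed)
        dataset = self.preprocess(initial_data)
        model = self.train(self.build(), dataset)
        # Detection returns (damage type, damage position) per image.
        return self.detect(model, initial_data)

# Toy stand-ins that only demonstrate the data flow between modules.
system = DamageDetectionSystem(
    acquire=lambda feed: list(feed),
    preprocess=lambda data: [d.lower() for d in data],
    build=lambda: {},
    train=lambda model, dataset: {"classes": set(dataset)},
    detect=lambda model, data: [("crack", (0, 0)) for _ in data],
)
result = system.run(["IMG1", "IMG2"])
```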
Further, the unmanned aerial vehicle includes: the unmanned aerial vehicle comprises a machine body, an unmanned aerial vehicle rotor wing, a flight controller, a camera and a wireless transmitter;
unmanned aerial vehicle rotors for realizing unmanned aerial vehicle flight are symmetrically arranged on two sides of the machine body;
a flight controller for controlling the unmanned aerial vehicle rotor wing is arranged in the machine body;
the camera is mounted on the machine body through a two-dimensional rotating cradle head, which consists of a horizontal turntable and a pitching turntable arranged at intervals;
the wireless transmitter is arranged in the machine body and is in communication connection with the ground base station to control communication and image data transmission.
Aiming at the problems of conventional component damage detection methods, the invention associates image information with damage information, constructs and trains a deep convolutional neural network, realizes judgment of the damage type and damage position of aircraft components, improves damage detection precision, and ensures the flight safety of the aircraft. An unmanned aerial vehicle carrying a camera acquires the aircraft component images; by finding the optimal detour path and image acquisition mode of the unmanned aerial vehicle, obstacles can be avoided and clearer, more complete image data can be obtained. The initial data is preprocessed and annotated with the damage type and damage position, and the obtained training data set is used to train the deep convolutional network, so that a higher-precision damage detection model can be obtained, further improving the accuracy of aircraft component damage detection.
In the present specification, the embodiments are described in a progressive manner, each embodiment focusing on its differences from the others; for identical or similar parts between the embodiments, reference may be made to one another. Since the system disclosed in the embodiments corresponds to the method disclosed therein, its description is relatively brief, and reference may be made to the description of the method section for relevant details.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A deep learning-based component damage detection method, characterized by comprising the following steps:
collecting image data of an aircraft component as initial data by using an unmanned aerial vehicle carrying a camera;
preprocessing initial data to obtain a training data set;
building a deep convolutional neural network, inputting a training data set into the deep convolutional neural network for model training until the model converges, and obtaining an optimal aircraft component damage detection model;
and identifying the damage of the aircraft component by using the optimal aircraft component damage detection model, and determining the damage type and the damage position.
2. The deep learning-based component damage detection method of claim 1, wherein the unmanned aerial vehicle comprises: the unmanned aerial vehicle comprises a machine body, an unmanned aerial vehicle rotor wing, a flight controller, a camera and a wireless transmitter;
unmanned aerial vehicle rotors for realizing unmanned aerial vehicle flight are symmetrically arranged on two sides of the machine body;
a flight controller for controlling the unmanned aerial vehicle rotor wing is arranged in the machine body;
the camera is mounted on the machine body through a two-dimensional rotating cradle head, which consists of a horizontal turntable and a pitching turntable arranged at intervals;
the wireless transmitter is arranged in the machine body and is in communication connection with the ground base station to control communication and image data transmission.
3. The method for detecting damage to components based on deep learning of claim 1, wherein the step of acquiring image data of aircraft components comprises the steps of:
presetting task target points and planning an obstacle-free initial trajectory;
controlling the unmanned aerial vehicle to perform a trial flight around the aircraft along the initial trajectory and collecting initial images;
determining the optimal detour path, detour height and scanning area of the unmanned aerial vehicle, as well as the important target shooting points, according to the aircraft body structure, the quality of the initial images captured during the trial flight, and the obstacle information along the initial trajectory;
controlling the unmanned aerial vehicle to fly along the optimal detour path at the determined detour height, and controlling the unmanned aerial vehicle to hover and aim the camera at the target for shooting at each determined important target shooting point;
and acquiring image data of aircraft components under various shooting angles, aircraft models and environments with the unmanned aerial vehicle-mounted camera.
4. The deep learning-based component damage detection method of claim 1, wherein the preprocessing of the initial data specifically comprises the following steps:
taking the collected aircraft component images of each frame as a sample, and forming a first data set by all aircraft component images;
screening original images of the aircraft components capable of identifying damage from the aircraft component images of the first data set to form a second data set;
marking the damage type and the damage position on the screened original aircraft component images, and forming a third data set from the marked samples; the damage types include deformation, scoring, corrosion, rivet damage and paint peeling;
and cutting all the marked areas on each aircraft component image in the third data set, and sequentially rotating, overturning and randomly adjusting the exposure degree of the cut images to obtain a training data set.
5. The deep learning-based component damage detection method of claim 1, wherein the constructed deep convolutional neural network adopts a U-Net semantic segmentation network architecture, comprising an encoder, a decoder and skip connections;
the encoder adopts a ResNet-50 neural network architecture and is used for encoding multi-level semantic features of the input image; the decoder predicts the damage type of each pixel in the input image based on the multi-level semantic features from the encoder and the image features transferred through the skip connections;
the model training method specifically comprises the following steps:
inputting the training data set into the encoder of the U-Net semantic segmentation network, wherein convolution operations and several successive downsampling steps in the encoder progressively extract the feature information of the aircraft component image;
upsampling the feature information several times through the decoder of the U-Net semantic segmentation network, so as to fuse shallow features with deep features and restore the image information;
and iteratively updating the model weights by gradient descent until the loss function converges, thereby completing the training and optimization of the model and obtaining the optimal aircraft component damage detection model.
6. The deep learning-based component damage detection method of claim 1, wherein the constructed deep convolutional neural network comprises a trunk feature extraction network, an enhanced feature extraction network and a prediction network, which are connected in sequence;
the trunk feature extraction network is a VoVNetV2-39 network and is used for extracting preliminary features of the aircraft component images;
the enhanced feature extraction network comprises an SPP module and a PANet module, and is used for fusing the preliminary features and extracting effective features;
the prediction network comprises a YOLO Head module and is used for obtaining the damage information in the aircraft component image from the effective features;
the model training method specifically comprises the following steps:
predicting damage information of the aircraft component images in the training set by using the built initial damage detection model to obtain component prediction damage information;
calculating a loss function value according to the predicted damage information of the component and the damage information marked in the training data set;
judging whether the loss function value meets the preset requirement, if not, updating the network weight of each layer in the initial damage detection model according to the loss function value, and re-predicting the damage information of the component; if yes, finishing training to obtain an optimal aircraft component damage detection model.
7. The deep learning-based component damage detection method of claim 5, wherein
a binary cross entropy function is used as a loss function for the training process, wherein the binary cross entropy function is:
L_loss = -(1/N) Σ_{i=1}^{N} [ L_i ln(y_i) + (1 - L_i) ln(1 - y_i) ]
wherein L_loss represents the loss value, N represents the total number of pixels in an aircraft component image, L_i represents the label value of the i-th pixel, and y_i represents the predicted probability value of the i-th pixel.
8. A deep learning-based component damage detection system, comprising: the device comprises an acquisition module, a preprocessing module, a construction module, a training module and a detection module;
the acquisition module is used for acquiring image data of the aircraft component through the unmanned aerial vehicle carrying the camera and taking the image data as initial data;
the preprocessing module is used for preprocessing the initial data to obtain a training data set;
the construction module is used for constructing a deep convolutional neural network;
the training module is used for inputting the training data set into the deep convolutional neural network to perform model training until the model converges to obtain an optimal aircraft component damage detection model;
the detection module is used for identifying the damage of the aircraft component through the optimal aircraft component damage detection model and determining the damage type and the damage position.
9. The deep learning based component damage detection system of claim 8, wherein the unmanned aerial vehicle comprises: the unmanned aerial vehicle comprises a machine body, an unmanned aerial vehicle rotor wing, a flight controller, a camera and a wireless transmitter;
unmanned aerial vehicle rotors for realizing unmanned aerial vehicle flight are symmetrically arranged on two sides of the machine body;
a flight controller for controlling the unmanned aerial vehicle rotor wing is arranged in the machine body;
the camera is mounted on the machine body through a two-dimensional rotating cradle head, which consists of a horizontal turntable and a pitching turntable arranged at intervals;
the wireless transmitter is arranged in the machine body and is in communication connection with the ground base station to control communication and image data transmission.
CN202310239705.8A 2023-03-14 2023-03-14 Deep learning-based component damage detection method and system Pending CN116168010A (en)

Publications (1)

Publication Number Publication Date
CN116168010A true CN116168010A (en) 2023-05-26



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination