CN113706496B - Aircraft structure crack detection method based on deep learning model - Google Patents


Info

Publication number
CN113706496B
CN113706496B (application CN202110970084.1A)
Authority
CN
China
Prior art keywords
crack
feature map
frame
deep learning
learning model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110970084.1A
Other languages
Chinese (zh)
Other versions
CN113706496A
Inventor
吕帅帅
王彬文
杨宇
王叶子
李嘉欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AVIC Aircraft Strength Research Institute
Original Assignee
AVIC Aircraft Strength Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AVIC Aircraft Strength Research Institute filed Critical AVIC Aircraft Strength Research Institute
Priority to CN202110970084.1A priority Critical patent/CN113706496B/en
Publication of CN113706496A publication Critical patent/CN113706496A/en
Application granted granted Critical
Publication of CN113706496B publication Critical patent/CN113706496B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0004 - Industrial image inspection
    • G06T 7/001 - Industrial image inspection using an image reference approach
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24133 - Distances to prototypes
    • G06F 18/24137 - Distances to cluster centroïds
    • G06F 18/2414 - Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation


Abstract

The application belongs to the field of structural health monitoring, and particularly relates to an aircraft structural crack detection method based on a deep learning model. The method comprises: step one, constructing a deep learning model that comprises a suspected-crack feature extraction module for extracting a feature map containing suspected crack regions from the image to be detected and acquiring the coordinate information of those regions; a contrast feature extraction module for extracting feature maps of the corresponding regions from a crack-free template image according to that coordinate information; and a crack judgment module for comparing the feature maps output by the two extraction modules to judge whether each suspected crack region actually contains a crack; step two, training the deep learning model; and step three, detecting cracks in the aircraft structure. The method reduces the influence of interference factors on crack detection accuracy and achieves accurate, rapid, real-time identification and early warning of fatigue cracks in aircraft structures.

Description

Aircraft structure crack detection method based on deep learning model
Technical Field
The application belongs to the field of structural health monitoring, and particularly relates to an aircraft structural crack detection method based on a deep learning model.
Background
Metal cracking is a common form of damage in aerospace structures. Finding and warning of such damage in time during a fatigue test of an aviation structure exposes weak links in the structural design, supports evaluation of the strength and integrity of the structure, and provides a basis for writing the structural maintenance manual. At present, crack detection in full-scale aircraft fatigue tests relies mainly on visual inspection, eddy-current testing, ultrasonic testing, and the like. These methods depend strongly on expert experience and, owing to the complex test environment, the detection risk during test loading, and the difficulty of operating in confined spaces, suffer from high labor cost, long detection time, and low detection reliability. Automated, intelligent, and highly reliable detection of aviation structural cracks is therefore an urgent problem to be solved in full-scale aircraft fatigue testing.
With the rapid development of robotics and artificial intelligence over the past decade and their adoption in civil applications, machine vision offers a new solution for automatic crack detection in aircraft fatigue tests. A high-precision motion system (such as a crawling robot or a robotic arm) carrying an industrial camera captures high-definition images of the monitored area, and a target detection algorithm then identifies cracks automatically and issues damage warnings, greatly reducing the cost, latency, and risk associated with manual inspection.
Deep learning target detection algorithms, represented by the Faster Region-based Convolutional Neural Network (Faster-RCNN), are now widely used for object recognition because of their speed and high accuracy. In aircraft structural fatigue tests, however, the complexity of the environment means that defects such as stains and scratches, whose features are highly similar to those of cracks, readily appear in the detection area, so directly applying an existing target detection algorithm to crack detection yields a high misjudgment rate, which in turn delays the fatigue test.
It is therefore desirable to have a solution that overcomes or at least alleviates at least one of the above-mentioned drawbacks of the prior art.
Disclosure of Invention
The object of the present application is to provide a method for detecting cracks in an aircraft structure based on a deep learning model, so as to solve at least one problem existing in the prior art.
The technical scheme of the application is as follows:
an aircraft structure crack detection method based on a deep learning model, comprising:
step one, constructing a deep learning model, wherein the deep learning model comprises:
a suspected-crack feature extraction module for extracting a feature map containing suspected crack regions from the image to be detected and acquiring the coordinate information of the suspected crack regions;
a contrast feature extraction module for extracting a feature map of the corresponding region from a crack-free template image according to the coordinate information of the suspected crack regions;
a crack judgment module for comparing the feature map output by the suspected-crack feature extraction module with the feature map output by the contrast feature extraction module to judge whether a suspected crack region contains a crack;
step two, training the deep learning model;
step three, performing crack detection on the aircraft structure with the trained deep learning model.
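The three-step scheme can be illustrated with a toy end-to-end sketch. All names, region coordinates, the 16x16 crop size, the threshold, and the mean-absolute-difference comparison metric are hypothetical stand-ins for illustration only, not the patent's networks:

```python
import numpy as np

def extract_suspect_regions(image, box=16):
    """Hypothetical stand-in for the suspected-crack feature extraction
    module: returns per-region feature maps plus their coordinates."""
    coords = [(8, 8), (32, 40)]                        # (row, col) of each suspect box
    feats = [image[r:r + box, c:c + box] for r, c in coords]
    return feats, coords

def extract_contrast_regions(template, coords, box=16):
    """Contrast feature extraction: crop the SAME coordinates from the
    crack-free template image."""
    return [template[r:r + box, c:c + box] for r, c in coords]

def judge_cracks(suspect_feats, contrast_feats, threshold=10.0):
    """Crack judgment by comparing each suspect feature map with its
    contrast counterpart (mean absolute difference as a toy metric)."""
    return [float(np.abs(s - c).mean()) > threshold
            for s, c in zip(suspect_feats, contrast_feats)]

# toy demo: template is flat, the "image to be detected" has a bright streak
template = np.zeros((64, 64))
image = template.copy()
image[10:12, 8:24] = 255.0                             # simulated crack

suspects, coords = extract_suspect_regions(image)
contrasts = extract_contrast_regions(template, coords)
verdicts = judge_cracks(suspects, contrasts)
print(verdicts)   # [True, False]: only the first region overlaps the streak
```

The design point the sketch captures is that the second module is driven by the first module's coordinates, so the comparison is always region-to-region.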
In at least one embodiment of the present application, the suspected-crack feature extraction module includes a to-be-detected image input unit, a monitoring area calibration network unit, a basic feature extraction network unit, a suspected-crack feature map unit, a region proposal network unit, and a suspected-crack recommendation frame unit, wherein
the to-be-detected image input unit is used for inputting an image, wherein
in step two, when training the deep learning model, the to-be-detected image input unit is used for inputting an image with cracks;
in step three, when detecting cracks in the aircraft structure with the deep learning model, the to-be-detected image input unit is used for inputting the image to be detected;
the monitoring area calibration network unit is used for calibrating the monitoring area of the image input by the to-be-detected image input unit;
the basic feature extraction network unit is used for extracting basic features from the monitoring area;
the suspected-crack feature map unit is used for extracting the feature map of the monitoring area with the basic features;
the region proposal network unit is used for acquiring the coordinate information of the monitoring area with the basic features;
the suspected-crack recommendation frame unit is used for generating a recommendation frame feature map from the feature map of the monitoring area with the basic features.
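The region proposal step borrows the RPN idea from Faster-RCNN: candidate boxes are enumerated over the feature map and a small network scores them. A generic sketch of the anchor-enumeration part is below; the stride and anchor sizes are illustrative defaults, not values from this application:

```python
import numpy as np

def generate_anchors(fmap_h, fmap_w, stride=16, sizes=(32, 64)):
    """Enumerate centred square anchors (x1, y1, x2, y2) in image
    coordinates, one set per feature-map position (RPN-style)."""
    anchors = []
    for i in range(fmap_h):
        for j in range(fmap_w):
            cy, cx = (i + 0.5) * stride, (j + 0.5) * stride
            for s in sizes:
                anchors.append((cx - s / 2, cy - s / 2, cx + s / 2, cy + s / 2))
    return np.array(anchors)

anchors = generate_anchors(4, 4)
print(anchors.shape)   # (32, 4): 4*4 positions x 2 sizes
```

In the real RPN, a classification head then scores each anchor as object/background and a regression head refines its coordinates; the surviving boxes play the role of the recommendation frames here.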
In at least one embodiment of the present application, the contrast feature extraction module includes a template image input unit, a monitoring area calibration network unit, a basic feature extraction network unit, a contrast feature map unit, and a suspected-crack contrast frame unit, wherein
the template image input unit is used for inputting a crack-free template image, wherein
in step two, when training the deep learning model, the template image input unit is used for inputting two crack-free template images;
in step three, when detecting cracks in the aircraft structure with the deep learning model, the template image input unit is used for inputting one crack-free template image;
the monitoring area calibration network unit is used for calibrating the monitoring area of the template image input by the template image input unit;
the basic feature extraction network unit is used for extracting basic features from the monitoring area;
the contrast feature map unit is used for extracting the feature map of the monitoring area with the basic features;
the suspected-crack contrast frame unit is used for extracting the corresponding feature map of the monitoring area with the basic features according to the coordinate information output by the region proposal network unit, and for generating a contrast frame feature map from that corresponding feature map.
In at least one embodiment of the present application, in step two, when training the deep learning model, the contrast frame feature maps generated in the suspected-crack contrast frame unit include a first contrast frame feature map generated from the first template image and a second contrast frame feature map generated from the second template image.
In at least one embodiment of the present application, the crack judgment module includes a recommendation frame pooling network unit, a data combination network unit, and a classification network unit, wherein
the recommendation frame pooling network unit is used for pooling the recommendation frame feature maps and the contrast frame feature maps;
the data combination network unit is used for rearranging and combining the recommendation frame feature maps and the contrast frame feature maps according to crack position;
the classification network unit is used for screening the recommendation frame feature maps containing cracks out of the rearranged and combined recommendation frame feature maps and contrast frame feature maps.
In at least one embodiment of the present application, the recommendation frame feature maps and the contrast frame feature maps have the same size after the pooling processing of the recommendation frame pooling network unit.
In at least one embodiment of the present application, in step two, when training the deep learning model, the data combination network unit rearranges and combines the recommendation frame feature maps and the contrast frame feature maps according to crack position as follows:
the recommendation frame feature map, first contrast frame feature map, and second contrast frame feature map of the same region are combined into a triplet.
In at least one embodiment of the present application, in step two, when training the deep learning model, the classification network unit screens the recommendation frame feature maps containing cracks out of the rearranged and combined feature maps as follows:
features are further extracted from each triplet by a deep learning network, and each feature map is converted, after feature normalization, into a 128-dimensional feature vector;
the three feature vectors of each triplet are stitched, specifically: the feature vector of the recommendation frame feature map is stitched with that of the first contrast frame feature map, and the feature vector of the first contrast frame feature map is stitched with that of the second contrast frame feature map, so that each triplet yields two 256-dimensional stitched vectors;
the stitched vectors are sent to a classification layer, which screens out the 128-dimensional feature vectors of the recommendation frame feature maps containing cracks as the classification result, and those feature vectors are then sent to a regression layer to predict the crack positions.
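The stitching rule for a training-time triplet (each triplet of 128-dimensional vectors yields two 256-dimensional stitched vectors) can be sketched as follows; the embeddings are random placeholders, not outputs of the real networks:

```python
import numpy as np

def stitch_triplet(f_rec, f_c1, f_c2):
    """Each triplet of 128-d embeddings yields TWO 256-d vectors:
    recommendation-frame + first-contrast, and
    first-contrast + second-contrast (concatenation)."""
    assert f_rec.shape == f_c1.shape == f_c2.shape == (128,)
    return np.concatenate([f_rec, f_c1]), np.concatenate([f_c1, f_c2])

rng = np.random.default_rng(42)
f_rec, f_c1, f_c2 = rng.standard_normal((3, 128))   # placeholder embeddings
v1, v2 = stitch_triplet(f_rec, f_c1, f_c2)
print(v1.shape, v2.shape)   # (256,) (256,)
```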
In at least one embodiment of the present application, in step three, when detecting aircraft structural cracks with the deep learning model, the data combination network unit rearranges and combines the recommendation frame feature maps and the contrast frame feature maps according to crack position as follows:
the recommendation frame feature map and contrast frame feature map of the same region are combined into a binary group (pair).
In at least one embodiment of the present application, in step three, when detecting aircraft structural cracks with the deep learning model, the classification network unit screens the recommendation frame feature maps containing cracks out of the rearranged and combined feature maps as follows:
features are further extracted from each binary group by a deep learning network, and each feature map is converted, after feature normalization, into a 128-dimensional feature vector;
the two feature vectors of each binary group are stitched, specifically: the feature vector of the recommendation frame feature map is stitched with that of the contrast frame feature map, so that each binary group yields one 256-dimensional stitched vector;
the stitched vectors are sent to a classification layer, which screens out the 128-dimensional feature vectors of the recommendation frame feature maps containing cracks as the classification result, and those feature vectors are then sent to a regression layer to predict the crack positions.
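The prediction-time binary-group path can be sketched likewise: one stitched 256-dimensional vector per pair, scored by a classification layer. The linear-sigmoid classifier and its weights below are hypothetical placeholders for the real classification layer:

```python
import numpy as np

def classify_pair(f_rec, f_con, w, b):
    """Stitch the recommendation-frame and contrast-frame 128-d embeddings
    into one 256-d vector and score it with a toy linear classification
    layer (weights w, b are hypothetical)."""
    v = np.concatenate([f_rec, f_con])            # one 256-d vector per pair
    score = 1.0 / (1.0 + np.exp(-(w @ v + b)))    # sigmoid crack probability
    return v, float(score)

rng = np.random.default_rng(1)
f_rec, f_con = rng.standard_normal(128), rng.standard_normal(128)
w, b = np.zeros(256), 0.0                          # untrained weights -> score 0.5
v, p = classify_pair(f_rec, f_con, w, b)
print(v.shape, p)   # (256,) 0.5
```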
The invention has at least the following beneficial technical effects:
according to the aircraft structure crack detection method based on the deep learning model, influence of interference factors such as scratches and stains on crack detection accuracy can be reduced, and accurate, rapid and real-time identification and early warning of fatigue cracks of the aircraft structure are realized.
Drawings
FIG. 1 is the overall architecture of the aircraft structure crack detection method based on a deep learning model according to one embodiment of the present application;
FIG. 2 is a workflow diagram of the data combination network unit of one embodiment of the present application;
FIG. 3 is a schematic diagram of the classification network unit structure according to one embodiment of the present application;
FIG. 4 is a schematic diagram of feature stitching of triplets by the classification network unit according to an embodiment of the present application.
Wherein:
1 - suspected-crack feature extraction module; 2 - contrast feature extraction module; 3 - crack judgment module; 4 - to-be-detected image input unit; 5 - monitoring area calibration network unit; 6 - basic feature extraction network unit; 7 - suspected-crack feature map unit; 8 - region proposal network unit; 9 - suspected-crack recommendation frame unit; 10 - template image input unit; 11 - contrast feature map unit; 12 - suspected-crack contrast frame unit; 13 - recommendation frame pooling network unit; 14 - data combination network unit; 15 - classification network unit.
Detailed Description
In order to make the purposes, technical solutions and advantages of the implementation of the present application more clear, the technical solutions in the embodiments of the present application will be described in more detail below with reference to the accompanying drawings in the embodiments of the present application. In the drawings, the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The described embodiments are some, but not all, of the embodiments of the present application. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present application and are not to be construed as limiting the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application. Embodiments of the present application are described in detail below with reference to the accompanying drawings.
In the description of the present application, it should be understood that the terms "center," "longitudinal," "lateral," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientations or positional relationships illustrated in the drawings, merely to facilitate description of the present application and simplify the description, and do not indicate or imply that the device or element being referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the scope of protection of the present application.
The present application is described in further detail below with reference to fig. 1-4.
The application provides an aircraft structure crack detection method based on a deep learning model, the model comprising: a suspected-crack feature extraction module 1, a contrast feature extraction module 2, and a crack judgment module 3.
As shown in fig. 1, the suspected-crack feature extraction module 1 is configured to extract a feature map containing suspected crack regions from the image to be detected and to obtain the coordinate information of those regions; the contrast feature extraction module 2 is configured to extract feature maps of the corresponding regions from the crack-free template images according to that coordinate information; and the crack judgment module 3 is configured to compare the feature maps output by the suspected-crack feature extraction module 1 with those output by the contrast feature extraction module 2 to judge whether a suspected crack region contains a crack.
In a preferred embodiment of the present application, the suspected-crack feature extraction module 1 comprises a to-be-detected image input unit 4, a monitoring area calibration network unit 5, a basic feature extraction network unit 6, a suspected-crack feature map unit 7, a region proposal network unit (Region Proposal Network, RPN) 8, and a suspected-crack recommendation frame unit 9, wherein
the to-be-detected image input unit 4 is used for inputting an image, wherein
in step two, when training the deep learning model, the to-be-detected image input unit 4 is used for inputting an image with cracks;
in step three, when detecting cracks in the aircraft structure with the deep learning model, the to-be-detected image input unit 4 is used for inputting the image to be detected;
the monitoring area calibration network unit 5 is used for calibrating the monitoring area of the image input by the to-be-detected image input unit 4;
the basic feature extraction network unit 6 is used for extracting basic features from the monitoring area;
the suspected-crack feature map unit 7 is used for extracting the feature map of the monitoring area with the basic features;
the region proposal network unit 8 is used for acquiring the coordinate information of the monitoring area with the basic features;
the suspected-crack recommendation frame unit 9 is used for generating recommendation frame feature maps from the feature map of the monitoring area with the basic features.
In this embodiment, the suspected-crack feature map unit 7 can extract a plurality of feature maps of monitoring regions with basic features, and the suspected-crack recommendation frame unit 9 accordingly generates a plurality of recommendation frame feature maps.
In a preferred embodiment of the present application, the contrast feature extraction module 2 comprises a template image input unit 10, a monitoring area calibration network unit 5, a basic feature extraction network unit 6, a contrast feature map unit 11, and a suspected-crack contrast frame unit 12, wherein
the template image input unit 10 is used for inputting crack-free template images, wherein
in step two, when training the deep learning model, the template image input unit 10 is used for inputting two crack-free template images;
in step three, when detecting cracks in the aircraft structure with the deep learning model, the template image input unit 10 is used for inputting one crack-free template image;
the monitoring area calibration network unit 5 is used for calibrating the monitoring area of the template image input by the template image input unit 10;
the basic feature extraction network unit 6 is used for extracting basic features from the monitoring area;
the contrast feature map unit 11 is used for extracting the feature map of the monitoring area with the basic features;
the suspected-crack contrast frame unit 12 is used for extracting the corresponding feature map of the monitoring area with the basic features according to the coordinate information output by the region proposal network unit 8, and for generating contrast frame feature maps from those corresponding feature maps.
In this embodiment, the contrast feature map unit 11 extracts a plurality of feature maps of monitoring regions with basic features from each template image; in step two, when training the deep learning model, the contrast frame feature maps generated in the suspected-crack contrast frame unit 12 include a plurality of first contrast frame feature maps generated from the first template image and a plurality of second contrast frame feature maps generated from the second template image.
Advantageously, in this embodiment, the suspected-crack feature extraction module 1 and the contrast feature extraction module 2 share the monitoring area calibration network unit 5 and the basic feature extraction network unit 6.
In a preferred embodiment of the present application, the crack judgment module 3 comprises a recommendation frame pooling network unit 13, a data combination network unit 14, and a classification network unit 15, wherein
the recommendation frame pooling network unit 13 is used for pooling the recommendation frame feature maps and the contrast frame feature maps;
the data combination network unit 14 is used for rearranging and combining the recommendation frame feature maps and the contrast frame feature maps according to crack position;
the classification network unit 15 is used for screening the recommendation frame feature maps containing cracks out of the rearranged and combined feature maps.
In this embodiment, the recommendation frame feature maps and the contrast frame feature maps have the same size after the pooling processing of the recommendation frame pooling network unit 13.
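The role of the recommendation frame pooling network unit 13 (bringing differently sized region feature maps to one fixed size) can be sketched with a RoIPool-style max pooling. The 7x7 output size is an assumption borrowed from Faster-RCNN conventions, not a value stated in this application:

```python
import numpy as np

def roi_pool(feature_map, output_size=7):
    """Max-pool an arbitrarily sized 2-D region feature map down to a
    fixed output_size x output_size grid (RoIPool-style binning)."""
    h, w = feature_map.shape
    rows = np.linspace(0, h, output_size + 1).astype(int)
    cols = np.linspace(0, w, output_size + 1).astype(int)
    out = np.empty((output_size, output_size))
    for i in range(output_size):
        for j in range(output_size):
            # guard keeps every bin at least one cell wide/tall
            r0, r1 = rows[i], max(rows[i + 1], rows[i] + 1)
            c0, c1 = cols[j], max(cols[j + 1], cols[j] + 1)
            out[i, j] = feature_map[r0:r1, c0:c1].max()
    return out

# regions of different sizes pool to the same 7x7 shape
a = roi_pool(np.random.rand(15, 23))
b = roi_pool(np.random.rand(40, 9))
print(a.shape, b.shape)   # (7, 7) (7, 7)
```

A fixed pooled size is what makes the later stitching of recommendation and contrast frame vectors dimensionally consistent.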
As shown in fig. 2, in step two, when training the deep learning model, the data combination network unit 14 rearranges and combines the recommendation frame feature maps and the contrast frame feature maps according to crack position as follows:
the recommendation frame feature map, first contrast frame feature map, and second contrast frame feature map of the same region are combined into a triplet.
As shown in figs. 3 and 4, in step two, when training the deep learning model, the classification network unit 15 screens the recommendation frame feature maps containing cracks out of the rearranged and combined feature maps as follows:
features are further extracted from each triplet by a deep learning network, and each feature map is converted, after feature normalization, into a 128-dimensional feature vector;
the three feature vectors of each triplet are stitched, specifically: the feature vector of the recommendation frame feature map is stitched with that of the first contrast frame feature map, and the feature vector of the first contrast frame feature map is stitched with that of the second contrast frame feature map, so that each triplet yields two 256-dimensional stitched vectors;
the stitched vectors are sent to a classification layer, which screens out the 128-dimensional feature vectors of the recommendation frame feature maps containing cracks as the classification result, and those feature vectors are then sent to a regression layer to predict the crack positions.
In the aircraft structure crack detection method based on the deep learning model of the present application, in step three, when detecting aircraft structural cracks with the deep learning model, the data combination network unit 14 rearranges and combines the recommendation frame feature maps and the contrast frame feature maps according to crack position as follows:
the recommendation frame feature map and contrast frame feature map of the same region are combined into a binary group.
In step three, when detecting aircraft structural cracks with the deep learning model, the classification network unit 15 screens the recommendation frame feature maps containing cracks out of the rearranged and combined feature maps as follows:
features are further extracted from each binary group by a deep learning network, and each feature map is converted, after feature normalization, into a 128-dimensional feature vector;
the two feature vectors of each binary group are stitched, specifically: the feature vector of the recommendation frame feature map is stitched with that of the contrast frame feature map, so that each binary group yields one 256-dimensional stitched vector;
the stitched vectors are sent to a classification layer, which screens out the 128-dimensional feature vectors of the recommendation frame feature maps containing cracks as the classification result, and those feature vectors are then sent to a regression layer to predict the crack positions.
The aircraft structure crack detection method based on the deep learning model of the present application builds on the suspected-crack screening approach of Faster-RCNN and, addressing the practical problem of crack detection in fatigue tests, introduces a comparison mechanism in the crack judgment stage. In the model, the suspected-crack feature extraction module 1, the contrast feature extraction module 2, and the recommendation frame pooling network unit 13 extend the network architecture and design of the corresponding Faster-RCNN modules.
In the aircraft structure crack detection method based on the deep learning model of the present application, once the model is built, the workflow divides into a model training stage and a model prediction stage. In the training stage, the model optimizes the parameters of the deep learning network by learning from known crack images and crack-free template images, and works in triplet mode. In the prediction stage, the deep learning model judges whether a crack exists in a real-time captured image (which may contain cracks) by comparing its features with those of the crack-free template image, and works in binary (pair) mode.
The working principle of the present application will be described in order of model training and model prediction.
In the model training stage, the input image in the image input unit 4 to be detected is a known image containing cracks, and the input images in the template image input unit 10 are two crack-free images (named 10-1 and 10-2) that share the same field of view as the input image in the image input unit 4 but differ in illumination and definition. The model extracts the recommended frame feature maps and the corresponding contrast frame feature maps (divided into 12-1 and 12-2) of the input images through the basic feature extraction network unit 6, the suspected crack feature map unit 7, the region proposal network unit 8, the suspected crack recommendation frame unit 9, the contrast feature map unit 11 and the suspected crack contrast frame unit 12. The recommended frames and contrast frames are then brought to the same size by the recommended frame pooling network unit 13, formed into a triplet queue in the data combination network unit 14, and finally fed into the classification network unit 15 to train its parameters.
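The triplet-queue formation can be sketched as below. The function and variable names are illustrative assumptions; the point is only that each suspected-crack region contributes one ROI-pooled recommended-frame feature map plus the two contrast-frame feature maps (from templates 10-1 and 10-2) of the same region.

```python
import torch

def build_triplets(rec_maps, con_maps_1, con_maps_2):
    """Group pooled feature maps of the same region into (recommended,
    contrast-1, contrast-2) triplets, keyed by region index.

    rec_maps, con_maps_1, con_maps_2: dicts mapping region index -> feature map.
    """
    triplets = []
    for idx in sorted(rec_maps):
        triplets.append((rec_maps[idx], con_maps_1[idx], con_maps_2[idx]))
    return triplets

# Two suspected-crack regions, each pooled to a 256x7x7 map (sizes assumed).
rec = {0: torch.randn(256, 7, 7), 1: torch.randn(256, 7, 7)}
c1 = {0: torch.randn(256, 7, 7), 1: torch.randn(256, 7, 7)}
c2 = {0: torch.randn(256, 7, 7), 1: torch.randn(256, 7, 7)}
queue = build_triplets(rec, c1, c2)
```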
The training process of the model is divided into two stages: training the suspected crack feature extraction module 1 and training the crack determination module 3. That is, for each batch of pictures, the network parameters in the suspected crack feature extraction module 1 are trained first, and the output of the suspected crack feature extraction module 1 is then used as input to train the network parameters in the crack determination module 3; the next batch of pictures is trained in the same way until the network converges. The loss function and training method of the suspected crack feature extraction module 1 are the same as those of the RPN network of Faster-RCNN. The loss function Loss_total of the crack determination module 3 can be expressed as:
Loss_total = Triplet_Loss + Cross_Entropy_Loss + Regression_Loss (1)
where Triplet_Loss is the triplet loss, which can be expressed as:

Triplet_Loss = (1/N) Σ_{i=1}^{N} max( ||f(c1_i) − f(c2_i)||² − ||f(c1_i) − f(r_i)||² + α, 0 ) (2)

where N represents the total number of triplets input to the classification network unit 15, f(·) represents the 128-dimensional vector extracted by the classification network unit 15 from a recommended frame feature map or a contrast frame feature map, r_i, c1_i and c2_i respectively denote the recommended frame feature map, the first contrast frame feature map and the second contrast frame feature map of the i-th triplet, and α is a margin constant.
The function of this loss is to make the Euclidean distance between the feature vectors of the two contrast frames as small as possible, and the Euclidean distance between the contrast-frame and recommended-frame feature vectors as large as possible, so that the 128-dimensional feature vectors extracted by the model represent the image differences caused by cracks and are not easily affected by factors such as illumination and definition.
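A minimal sketch of this triplet loss, written in PyTorch: the two contrast-frame vectors are pulled together, the recommended-frame vector is pushed away. The margin value is an assumption, since the patent does not state one.

```python
import torch
import torch.nn.functional as F

def triplet_loss(f_rec, f_con1, f_con2, margin=1.0):
    """Triplet loss over batches of 128-d vectors (shape (N, 128)).

    f_rec:  recommended-frame vectors (pushed away from the contrast pair)
    f_con1, f_con2: the two contrast-frame vectors (pulled together)
    """
    d_pos = (f_con1 - f_con2).pow(2).sum(dim=-1)  # contrast pair: minimize
    d_neg = (f_con1 - f_rec).pow(2).sum(dim=-1)   # contrast vs recommended: maximize
    return F.relu(d_pos - d_neg + margin).mean()  # hinge with margin, averaged over N

loss = triplet_loss(torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128))
```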
Cross_Entropy_Loss in equation (1) is the cross entropy loss, which can be expressed as:

Cross_Entropy_Loss = −(1/2N) Σ_{i=1}^{N} [ log g(s1_i) + log(1 − g(s2_i)) ] (3)

where N represents the total number of triplets input to the classification network unit 15, g(·) ∈ [0, 1] represents the classification result output by the classification network unit 15 (0 means no crack, 1 means crack), and s1_i and s2_i respectively denote the two 256-dimensional spliced vectors of the i-th triplet: s1_i spliced from the recommended frame and contrast frame 12-1, and s2_i spliced from contrast frame 12-1 and contrast frame 12-2.
The function of this loss is to optimize the network parameters of the classification layer (Fig. 3), with the goal of maximizing the inter-class distance of the 256-dimensional features.
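The cross-entropy term described above can be sketched as follows: per triplet, the (recommended, contrast-1) splice carries label 1 (crack) and the (contrast-1, contrast-2) splice carries label 0 (no crack). The layer sizes and the sigmoid classifier `g` are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def splice_cross_entropy(g, f_rec, f_con1, f_con2):
    """Binary cross entropy over the two spliced vectors of each triplet.

    g: classification layer mapping a 256-d splice to a crack probability in [0, 1].
    f_rec, f_con1, f_con2: batches of 128-d feature vectors, shape (N, 128).
    """
    s_crack = torch.cat([f_rec, f_con1], dim=-1)   # 256-d splice, label 1 (crack)
    s_clean = torch.cat([f_con1, f_con2], dim=-1)  # 256-d splice, label 0 (no crack)
    scores = torch.cat([g(s_crack), g(s_clean)], dim=0).squeeze(-1)
    labels = torch.cat([torch.ones(len(f_rec)), torch.zeros(len(f_con1))])
    return F.binary_cross_entropy(scores, labels)

g = torch.nn.Sequential(torch.nn.Linear(256, 1), torch.nn.Sigmoid())
loss = splice_cross_entropy(g, torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 128))
```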
Regression_Loss in equation (1) is the regression loss of the crack position.
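The three terms of equation (1) combine additively. A trivial sketch, assuming equal weighting (the patent gives no term weights):

```python
import torch

def total_loss(triplet_loss_t, cross_entropy_t, regression_t):
    """Loss_total of equation (1); the three inputs are already-computed scalar
    tensors for the triplet, cross-entropy and regression terms."""
    return triplet_loss_t + cross_entropy_t + regression_t

loss = total_loss(torch.tensor(0.5), torch.tensor(0.2), torch.tensor(0.1))
```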
In the model prediction stage, the input image in the image input unit 4 to be detected is an image acquired in real time during the fatigue test, and the input image in the template image input unit 10 is a crack-free image acquired in the early stage of the test with the same field of view as the image to be detected. The model first extracts the recommended frame feature map and the corresponding contrast frame feature map, then obtains the binary group consisting of these two feature maps after processing by the recommended frame pooling network unit 13. Finally, the classification network unit 15 extracts the two 128-dimensional feature vectors of the binary group, splices them into a 256-dimensional spliced vector, and sends this vector to the classification layer to obtain the prediction result of the model.
The aircraft structure crack detection method based on the deep learning model covers four design aspects: the image acquisition method, the data processing flow, the classification network structure, and the loss function of the classification network, specifically as follows:
1. Based on the structure and design method of the RPN network in Faster-RCNN, two parallel networks are designed to extract images of the same detection position at different moments and to intercept the recommended frame feature map and contrast frame feature map of each suspected crack region;
2. All recommended frame feature maps and contrast frame feature maps in the batch data are reordered according to their positions;
3. When the deep learning model is trained, the 128-dimensional feature vectors of each recommended frame feature map and each contrast frame feature map are extracted in triplet order, feature splicing is performed in the patterns (recommended frame, contrast frame 1) and (contrast frame 1, contrast frame 2), and the spliced vectors are sent to the classification network unit for classification;
4. The loss function of the classification network is designed in consideration of factors such as the influence of image quality on the classification result and the network convergence speed.
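Design aspect 2 above, reordering frames by position so corresponding recommended and contrast frames align, can be sketched as below; the position representation and the row-major sort key are illustrative assumptions.

```python
def reorder_by_position(frames):
    """Sort (position, feature_map) pairs in row-major order of the box origin.

    frames: list of (position, feature_map), position = (x1, y1, x2, y2).
    """
    return sorted(frames, key=lambda item: (item[0][1], item[0][0]))

# Frame "B" sits lower in the image than frame "A", so "A" comes first.
frames = [((10, 5, 20, 15), "B"), ((0, 0, 8, 8), "A")]
ordered = reorder_by_position(frames)
```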
Compared with the target detection algorithms commonly used in large-field-of-view detection tasks, the method of the present application first introduces a comparison mechanism into the deep learning model in view of the practical problems of crack detection in fatigue tests; this mitigates the influence of interference factors such as scratches and stains on detection accuracy, and provides a basis for real-time, reliable damage early warning in aircraft structure fatigue tests. Second, the directional characteristic of crack growth is considered in the design of the comparison mechanism, and a triplet loss is introduced alongside the common classification loss function, so that the model can distinguish crack features from features caused by changes in illumination and definition.
The foregoing is merely a specific embodiment of the present application, but the scope of protection of the present application is not limited thereto; any change or substitution easily conceivable by those skilled in the art within the technical scope disclosed herein shall fall within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. An aircraft structure crack detection method based on a deep learning model is characterized by comprising the following steps:
step one, constructing a deep learning model, wherein the deep learning model comprises the following steps:
the suspected crack feature extraction module (1) is used for extracting a feature map containing a suspected crack region from an image to be detected and acquiring coordinate information of the suspected crack region;
the contrast characteristic extraction module (2) is used for extracting a characteristic diagram of a corresponding region from the template image without the crack according to the coordinate information of the suspected crack region;
the crack judging module (3) is used for comparing the feature image output by the suspected crack feature extracting module (1) with the feature image output by the comparison feature extracting module (2) to judge whether a suspected crack area has cracks or not;
step two, training a deep learning model;
thirdly, performing aircraft structure crack detection through the deep learning model;
the suspected crack characteristic extraction module (1) comprises an image input unit (4) to be detected, a monitoring area calibration network unit (5), a basic characteristic extraction network unit (6), a suspected crack characteristic diagram unit (7), an area suggestion network unit (8) and a suspected crack recommendation frame unit (9), wherein,
the image input unit (4) to be detected is used for inputting images, wherein,
in the second step, when training the deep learning model, the image input unit (4) to be detected is used for inputting an image with cracks;
in the third step, when the crack detection of the aircraft structure is performed through the deep learning model, the image input unit (4) to be detected is used for inputting an image to be detected;
the monitoring area calibration network unit (5) is used for calibrating a monitoring area of the image input by the image input unit (4) to be detected;
the basic feature extraction network unit (6) is used for extracting basic features from a monitoring area;
the suspected crack characteristic map unit (7) is used for extracting a characteristic map of a monitoring area with basic characteristics;
the region proposal network unit (8) is used for acquiring coordinate information of a monitoring region with basic characteristics;
the suspected crack recommending frame unit (9) is used for generating a recommending frame characteristic diagram according to the characteristic diagram of the monitoring area with basic characteristics;
the contrast characteristic extraction module (2) comprises a template image input unit (10), a monitoring area calibration network unit (5), a basic characteristic extraction network unit (6), a contrast characteristic image unit (11) and a suspected crack contrast frame unit (12), wherein,
the template image input unit (10) is used for inputting a crack-free template image, wherein,
in the second step, when training the deep learning model, the template image input unit (10) is used for inputting two crack-free template images;
in the third step, when the crack detection of the aircraft structure is performed through the deep learning model, the template image input unit (10) is used for inputting a crack-free template image;
the monitoring area calibration network unit (5) is used for calibrating a monitoring area of the template image input by the template image input unit (10);
the basic feature extraction network unit (6) is used for extracting basic features from a monitoring area;
the contrast feature map unit (11) is used for extracting a feature map of a monitoring area with basic features;
the suspected crack comparison frame unit (12) is used for extracting, according to the coordinate information output by the region proposal network unit (8), the corresponding feature map of the monitoring area with basic features, and generating a comparison frame feature map from that corresponding feature map.
2. The method for detecting the structural crack of the aircraft based on the deep learning model according to claim 1, wherein in the step two, when the deep learning model training is performed, the contrast frame feature map generated in the suspected crack contrast frame unit (12) comprises a first contrast frame feature map generated based on a first template image and a second contrast frame feature map generated based on a second template image.
3. The deep learning model-based aircraft structure crack detection method according to claim 2, characterized in that the crack determination module (3) comprises a recommended frame pooling network element (13), a data combining network element (14) and a classification network element (15), wherein,
the recommendation frame pooling network unit (13) is used for pooling the recommendation frame characteristic diagram and the comparison frame characteristic diagram;
the data combination network unit (14) is used for rearranging and combining the recommended frame characteristic diagram and the comparison frame characteristic diagram according to the crack positions;
the classification network unit (15) is used for screening the recommended frame characteristic diagram with cracks from the rearranged and combined recommended frame characteristic diagram and the comparison frame characteristic diagram.
4. A deep learning model based aircraft structural crack detection method according to claim 3, characterized in that the recommended frame feature map and the comparative frame feature map after pooling by the recommended frame pooling network unit (13) have the same size.
5. The method for detecting cracks in an aircraft structure based on a deep learning model according to claim 4, wherein in the second step, when training the deep learning model, the data combination network unit (14) rearranges and combines the recommended frame feature map and the contrast frame feature map according to the crack positions specifically includes:
and combining the recommended frame feature map, the first contrast frame feature map and the second contrast frame feature map of the same region into a triplet.
6. The method for detecting cracks in an aircraft structure based on a deep learning model according to claim 5, wherein in the second step, when training the deep learning model, the classification network unit (15) screens out the recommended frame feature map with cracks from the rearranged recommended frame feature map and the comparative frame feature map specifically includes:
further extracting features from the triplets through a deep learning network model, and converting each feature map into a 128-dimensional feature vector after feature normalization;
splicing the 3 feature vectors in each triplet, specifically: splicing the feature vector of the recommended frame feature map with that of the first contrast frame feature map, and splicing the feature vector of the first contrast frame feature map with that of the second contrast frame feature map, each triplet thereby yielding two 256-dimensional spliced vectors;
and sending the spliced vectors into a classification layer for classification, screening out 128-dimensional feature vectors of the recommended frame feature map with the cracks as classification results, and sending the 128-dimensional feature vectors of the recommended frame feature map with the cracks into a regression layer for predicting crack positions.
7. The method for detecting the structural crack of the aircraft based on the deep learning model according to claim 6, wherein in the third step, when the structural crack of the aircraft is detected by the deep learning model, the data combination network unit (14) rearranges and combines the recommended frame feature map and the contrast frame feature map according to the crack position specifically:
and combining the recommended frame feature images and the contrast frame feature images of the same area into a binary group.
8. The method for detecting the structural crack of the aircraft based on the deep learning model according to claim 7, wherein in the third step, when the structural crack of the aircraft is detected by the deep learning model, the classification network unit (15) screens out the recommended frame feature map with the crack from the rearranged and combined recommended frame feature map and the comparison frame feature map specifically includes:
further extracting features from the binary group through a deep learning network model, and converting each feature map into a 128-dimensional feature vector after feature normalization;
splicing the 2 feature vectors in each binary group, specifically: splicing the feature vector of the recommended frame feature map with the feature vector of the contrast frame feature map, each binary group thereby yielding one 256-dimensional spliced vector;
and sending the spliced vectors into a classification layer for classification, screening out the 128-dimensional feature vectors of the recommended frame feature maps whose classification result indicates a crack, and sending these 128-dimensional feature vectors into a regression layer for predicting crack positions.
CN202110970084.1A 2021-08-23 2021-08-23 Aircraft structure crack detection method based on deep learning model Active CN113706496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110970084.1A CN113706496B (en) 2021-08-23 2021-08-23 Aircraft structure crack detection method based on deep learning model


Publications (2)

Publication Number Publication Date
CN113706496A CN113706496A (en) 2021-11-26
CN113706496B true CN113706496B (en) 2024-04-12


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115077832B (en) * 2022-07-28 2022-11-08 西安交通大学 Method for measuring vibration fatigue damage of three-dimensional surface of high-temperature-resistant component of airplane
CN116309557B (en) * 2023-05-16 2023-08-01 山东聚宁机械有限公司 Method for detecting fracture of track shoe of excavator

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009019A (en) * 2019-03-26 2019-07-12 苏州富莱智能科技有限公司 Magnetic material crackle intelligent checking system and method
WO2020164270A1 (en) * 2019-02-15 2020-08-20 平安科技(深圳)有限公司 Deep-learning-based pedestrian detection method, system and apparatus, and storage medium
WO2020181685A1 (en) * 2019-03-12 2020-09-17 南京邮电大学 Vehicle-mounted video target detection method based on deep learning
KR102157610B1 (en) * 2019-10-29 2020-09-18 세종대학교산학협력단 System and method for automatically detecting structural damage by generating super resolution digital images
JP6807092B1 (en) * 2020-09-24 2021-01-06 株式会社センシンロボティクス Inspection system and management server, program, crack information provision method
JP6807093B1 (en) * 2020-09-24 2021-01-06 株式会社センシンロボティクス Inspection system and management server, program, crack information provision method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Image Recognition Algorithms Based on Deep Learning; Yi Shidong; Network Security Technology & Application (Issue 01); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant