CN114119586A - Intelligent detection method for aircraft skin defects based on machine vision - Google Patents
- Publication number
- CN114119586A (application CN202111457448.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- rain
- corner
- feature
- snow
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
- G06N 3/045: Neural networks; combinations of networks
- G06N 3/08: Neural networks; learning methods
- G06T 5/20: Image enhancement or restoration using local operators
- G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T 5/90: Dynamic range modification of images or parts thereof
- G06T 7/13: Segmentation; edge detection
- G06T 2207/10024: Image acquisition modality: color image
- G06T 2207/10048: Image acquisition modality: infrared image
- G06T 2207/20024: Filtering details
- G06T 2207/20081: Training; learning
- G06T 2207/20084: Artificial neural networks [ANN]
- G06T 2207/20112: Image segmentation details
- G06T 2207/20164: Salient point detection; corner detection
- G06T 2207/20221: Image combination; image fusion, image merging
Abstract
The invention discloses an intelligent detection method for aircraft skin defects based on machine vision. The infrared picture and the RGB image shot by the cameras are input into a guided filtering module and a rain/snow image discriminator module, which separate rain and snow images from non-rain/snow images. Rain and snow images are fed into a pre-trained rain and snow removal network to obtain de-rained images, reducing the influence of rainy and snowy conditions; the gray-scale image and the RGB image are then fused, reducing the influence of strong reflections. The preprocessed image data are input into an hourglass convolutional neural network for feature extraction. The feature map output by the backbone network is fed into a global channel attention network, which weights the features of the gray-scale image and the RGB image along the channel dimension, further reducing the reflection influence. The feature map is then passed to the corner prediction, feature adjustment, and centripetal offset modules, which predict the offset of each corner from the center point and match corners according to the predicted corners and centripetal offsets, finally producing the target box.
Description
Technical Field
The invention belongs to the field of deep neural networks and image filtering, and particularly relates to an intelligent detection method for aircraft skin defects based on machine vision.
Background
In recent years, with the rapid development of deep learning, deep learning methods have gradually penetrated various fields. For aircraft skin inspection, deep learning offers clear advantages: it saves labor, shortens maintenance time, improves detection efficiency and the accuracy of results, and reduces the influence of lighting and severe weather on the inspection work.
At present, domestic and foreign research on aircraft skin inspection is dominated by mobile-robot approaches, typified by wall-climbing robots. Such research mainly applies deep-learning object detection algorithms to the skin inspection problem: aircraft skin data are collected from the internet, the data volume is increased by pasting defect patches, and the result is used as training samples. The trained, improved network model is deployed on the robot, and airport ground crews can use the skin problems detected by the robot as an auxiliary means for aircraft overhaul, reducing manual workload, lowering the incidence of missed inspections, and safeguarding flight safety.
Disclosure of Invention
The invention aims to provide an intelligent detection method for the defects of an aircraft skin based on machine vision, so as to solve the technical problems.
In order to solve the technical problems, the specific technical scheme of the intelligent detection method for the defects of the aircraft skin based on the machine vision is as follows:
an aircraft skin defect intelligent detection method based on machine vision comprises the following steps:
step 1: the method comprises the following steps that aircraft ground service personnel use a common camera and an infrared camera to shoot aircraft skin, the shot RGB images and infrared images are uploaded to a back-end database, a high-quality aircraft skin image data set is obtained through screening, operations of zooming, rotating, cutting, mirroring and perspective change are carried out, and data enhancement is carried out on a defect sample in a map pasting mode;
step 2: respectively inputting the infrared image data and the RGB image data into a guide filtering module to respectively obtain corresponding low-frequency and high-frequency images, inputting the high-frequency images of the two data into a rain and snow image distinguishing module, and directly carrying out image fusion on the infrared image and the RGB image when the output of the module is not a rain and snow image, and forming a new database with the image obtained in the step 3; when the module outputs rain and snow images, inputting high and low frequency images of the infrared image and the RGB image into a rain and snow removing network;
step 3: the rain and snow removal network extracts features from the high-frequency image and the low-frequency image, then combines the output rain-free low-frequency image with the high-frequency image to obtain a restored image without rain and snow, and performs image fusion of the infrared image and the RGB image;
step 4: inputting the image data obtained through preprocessing into an hourglass convolutional neural network for pose re-estimation and feature extraction;
step 5: inputting the feature map processed and output by the backbone network into a relation-aware global channel attention network to weight the channel dimensions of the visible-light features and the infrared features in the image;
step 6: inputting the feature map output by the channel attention network into the corner prediction and feature adjustment modules, which finally integrate the information of the object into the upper-left or lower-right corner point;
step 7: inputting the predicted corner points and the adaptive features obtained in step 5 into the centripetal offset module, predicting the offset of each corner point from the center point, and matching corner points according to the predicted corners and centripetal offsets;
step 8: training the aircraft skin detection system.
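For orientation, the eight steps above can be sketched as a single pipeline. Every stage below is a one-line placeholder standing in for the module the corresponding step describes (the thresholds, the de-rain rule, and the fixed detection box are all illustrative, not the patent's implementation):

```python
import numpy as np

# Placeholder stages: each lambda is an illustrative stand-in, not the real module.
split = lambda img: (np.full_like(img, img.mean()), img - img.mean())  # step 2: low/high bands
looks_rainy = lambda high: high.var() < 0.02                           # step 2: discriminator
derain = lambda low, high: low + 0.5 * high                            # step 3: de-rain network
fuse = lambda a, b: 0.5 * (a + b)                                      # steps 2-3: image fusion
detect = lambda fused: [(8, 8, 24, 24)]                                # steps 4-7: backbone + corners

def skin_defect_pipeline(ir_img, rgb_img):
    """Run both modalities through the de-rain branch, fuse, then detect."""
    restored = []
    for img in (ir_img, rgb_img):
        low, high = split(img)
        img = derain(low, high) if looks_rainy(high) else img
        restored.append(img)
    fused = fuse(restored[0], restored[1])
    return fused, detect(fused)           # fused image and (x1, y1, x2, y2) boxes
```

The branch structure mirrors step 2: only images the discriminator flags as rainy pass through the de-rain stage before fusion.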
Further, the guided filtering module and the rain and snow image distinguishing module in step 2 are:
step 2.1: the guiding filtering module: the rain image is decomposed by a guiding filter, and the principle is as follows:
the rain image I is guided and filtered to obtain a low-frequency background image ILAnd high frequency image IH:
I=IH+IL (2-1)
After obtaining the low-frequency image, subtracting the low-frequency image from the original image to obtain a high-frequency image:
IH=I-IL (2-2)
high frequency image IHPassing through the guide filter again to obtain a new low-frequency background image IHLAnd high frequency image IHH:
IH=IHL+IHH (2-3)
Then subtracting the image passing through the guiding filter from the high-frequency image to obtain a final high-frequency rain and snow image
IHH=IH-IHL; (2-4)
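The two-stage decomposition of step 2.1 can be sketched with a self-guided filter in the style of He et al.; the window radius and eps below are illustrative choices, and the naive box filter is written for clarity rather than speed:

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded (naive but clear)."""
    p = np.pad(img, r, mode="edge")
    k = 2 * r + 1
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def guided_filter(I, r=2, eps=1e-2):
    """Self-guided filter: the image guides its own smoothing."""
    mean_I = box_mean(I, r)
    var_I = box_mean(I * I, r) - mean_I ** 2
    a = var_I / (var_I + eps)        # cov(I, I) equals var(I) when guide == input
    b = mean_I - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)

def decompose(I, r=2, eps=1e-2):
    I_L = guided_filter(I, r, eps)   # low-frequency background, as in (2-1)
    I_H = I - I_L                    # high-frequency layer, as in (2-2)
    I_HL = guided_filter(I_H, r, eps)  # filter the high band again, as in (2-3)
    I_HH = I_H - I_HL                # final high-frequency rain/snow image, (2-4)
    return I_L, I_H, I_HL, I_HH
```

By construction the decomposition is exact: I reassembles from I_L + I_H, and I_H from I_HL + I_HH.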
Step 2.2: rain and snow image distinguishing module: the high-frequency image is divided evenly into N identical rectangles, each denoted α_n (n = 1, 2, ..., N), and the gray value of each rectangle is defined as
When the mean of all rectangle gray values is greater than the mean gray threshold and their variance is smaller than the set variance threshold, the infrared and RGB images corresponding to the high-frequency image are judged to be rain and snow images. Formulas (2-5) and (2-6) test whether the mean rectangle gray value exceeds the mean gray threshold, and formula (2-7) gives the variance of the rectangle gray values:
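A blockwise sketch of this discriminator follows; the threshold values are illustrative stand-ins for the patent's formulas (2-5) through (2-7), which are not reproduced in this text:

```python
import numpy as np

def is_rain_snow(high_freq, n=4, mean_thresh=0.05, var_thresh=0.01):
    """Split the high-frequency image into n x n equal rectangles; flag
    rain/snow when the mean of the rectangle gray means exceeds mean_thresh
    (streak energy everywhere) while their variance stays below var_thresh
    (that energy is spread uniformly). Thresholds here are illustrative."""
    h, w = high_freq.shape
    bh, bw = h // n, w // n
    means = np.array([high_freq[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                      for i in range(n) for j in range(n)])
    return bool(means.mean() > mean_thresh and means.var() < var_thresh)
```

The intuition matches the text: rain and snow streaks raise the mean high-frequency energy of every block roughly equally, whereas a defect or background edge concentrates energy in a few blocks and raises the variance.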
Further, the rain and snow removal network in step 3 is as follows:
step 3.1: rain and snow removal network structure: for the high-frequency image, the first layer is a feature extraction layer that splits the input into two dilated-convolution branches. The convolution kernel is 3 x 3 with dilation rates 1 and 2, giving the two branches receptive fields of 3 x 3 and 5 x 5 respectively, and each convolution layer is followed by a ReLU activation function. The two feature maps are then joined by a concatenation layer, the second convolution layer estimates the feature-map mapping between the rain/snow image and the de-rained image, and the output layer finally produces the rain-and-snow-free high-frequency image. For feature extraction from the low-frequency image, the first layer is a convolution layer with a 3 x 3 kernel and 512 channels; the extracted features pass through the second convolution layer, which estimates the feature-map mapping between the rain/snow image and the de-rained image, and the final output layer produces the rain-and-snow-free low-frequency image. Adding the output high-frequency and low-frequency images yields the de-rained image;
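The receptive fields quoted for the two dilated-convolution branches can be checked with a small sketch; the effective field of a k x k kernel with dilation rate r is k + (k - 1)(r - 1), which gives 3 x 3 for rate 1 and 5 x 5 for rate 2:

```python
import numpy as np

def receptive_field(k, rate):
    """Effective receptive field of a k x k kernel with dilation `rate`."""
    return k + (k - 1) * (rate - 1)

def dilated_conv2d(x, kernel, rate):
    """'Valid' 2-D cross-correlation with a dilated kernel (no padding)."""
    k = kernel.shape[0]
    eff = receptive_field(k, rate)
    H, W = x.shape
    out = np.zeros((H - eff + 1, W - eff + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + eff:rate, j:j + eff:rate]   # dilated sampling grid
            out[i, j] = float((patch * kernel).sum())
    return out
```

Dilation widens the window the kernel sees without adding parameters, which is why the rate-2 branch covers 5 x 5 with the same nine weights.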
step 3.2: rain and snow removal network training process: the input low-frequency and high-frequency images are of size S, the batch size is set to n0, the total number of iterations to e0, and the initial learning rate to lr0. The high-frequency and low-frequency images are fed into their respective networks for training; while the loss values of the two networks keep decreasing, training continues until the final models are obtained after m iterations. If the loss values level off partway, iteration stops and the two final models are taken.
Further, the hourglass convolutional neural network model in step 4 is:
step 4.1: hourglass convolutional neural network structure: the hourglass network is a downsampling-upsampling structure shaped like an hourglass, comprising three kinds of network layers: convolution layers, upsampling layers, and pooling layers. The convolution layers perform classical feature extraction; after an image is input, convolution and pooling operations reduce the feature map to 1/2^n of the input resolution, where n is the number of skip-level paths. Meanwhile, a parallel convolution path retains the feature map from before each downsampling, for fusion with the same-scale feature map in the upsampling half on the right. After the downsampling half reaches the minimum resolution, the network applies nearest-neighbour upsampling and fuses the result with the retained feature map of the same scale; finally, the network outputs a feature set representing the probability of each key point appearing at each pixel;
step 4.2: hourglass convolutional neural network usage: an aircraft skin picture is input, and a corner feature map of the aircraft skin defect picture is provided for corner prediction in the subsequent centripetal offset network.
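A minimal numeric sketch of the hourglass recursion described in step 4.1: max pooling on the way down, nearest-neighbour upsampling on the way up, and additive fusion with the retained same-scale skip feature. The convolutions interleaved at every level of the real network are omitted here:

```python
import numpy as np

def max_pool2(x):
    """2x2 max pooling (assumes even spatial dimensions)."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).max(axis=(1, 3))

def upsample_nn(x):
    """Nearest-neighbour 2x upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def hourglass(x, depth):
    """Recursive down/up structure: pool, recurse, upsample, add the skip."""
    if depth == 0:
        return x
    skip = x                         # feature retained before downsampling
    inner = hourglass(max_pool2(x), depth - 1)
    return skip + upsample_nn(inner) # fuse same-scale features
```

With depth n the innermost resolution is 1/2^n of the input, matching the text, and the output keeps the input's full resolution.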
Further, in step 5, the relationship-aware global channel attention mechanism model is:
step 5.1: relation-aware global channel attention mechanism: for each position of the input corner feature map from step 3, the RGA module stacks the pairwise relations, i.e., it attaches the pairwise correlations of all feature positions to the feature itself and learns attention with a shallow neural network;
step 5.2: detailed implementation of the attention mechanism: the input feature tensor is X ∈ R^(C×H×W). From the input feature map, N = H × W feature nodes are obtained, where each feature node x_i has dimension C. The pairwise relation r_{i,j} from node i to node j is defined as the dot-product affinity in an embedding space:
Similarly, the corresponding relation from node j to node i is obtained as r_{j,i} = f_c(x_j, x_i). The pair (r_{i,j}, r_{j,i}) describes the mutual relation between x_i and x_j, and the affinity matrix R^s ∈ R^(N×N) represents the pairwise relations of all nodes;
For the i-th feature node, the pairwise relations with all nodes are stacked in a fixed order (j = 1, 2, ..., N): r_i = [R^s(i,:), R^s(:,i)] ∈ R^(2N),
Because x_i and the relation vector are not in the same feature domain, the relation vector is transformed by the following formula and the transformed vectors are concatenated:
where pool_c is a pooling layer, and φ_s and the corresponding embedding function map the feature x_i itself and its global relations r_i respectively, each being a 1 x 1 convolution layer + batch normalization layer + ReLU activation function;
The attention value a_i is then generated by mining valuable knowledge from the learned relations:
where W_1 and W_2 are 1 x 1 convolution and batch normalization operations.
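The relation stacking in step 5.2 can be illustrated numerically; here raw dot products stand in for the learned embedding functions f_c (the actual module learns them with 1 x 1 convolutions), and only the shapes and stacking order are demonstrated:

```python
import numpy as np

def rga_relations(X):
    """X: (C, H, W) feature tensor. Builds N = H*W nodes of dimension C, the
    pairwise relation matrix R (dot products standing in for the learned
    embeddings), and per-node stacked vectors r_i = [R(i,:), R(:,i)] of
    length 2N, as in the text."""
    C, H, W = X.shape
    nodes = X.reshape(C, H * W).T              # (N, C) feature nodes
    R = nodes @ nodes.T                        # (N, N) pairwise relations r_{i,j}
    stacked = np.concatenate([R, R.T], axis=1) # row i = [R(i,:), R(:,i)], shape (N, 2N)
    return R, stacked
```

Each node thus carries both its outgoing relations (row of R) and its incoming relations (column of R), which is the fixed stacking order the attention network consumes.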
Further, the corner point predicting and feature adjusting module in step 6 is:
step 6.1: corner pooling layer: the feature map obtained in step 4 is input into the corner pooling layer, and corner pooling integrates the information of the object into the upper-left or lower-right corner point. The pooling area of the upper-left corner point consists of the feature points to the right of and below the object, and the pooling area of the lower-right corner point consists of the feature points to the left of and above the object. Assuming the coordinates of the current point are (x, y), the width of the feature map is W and its height is H, the corner pooling calculation proceeds as follows:
1. calculate the maximum from the point to all points below it, i.e., the maximum over (x, y) to (x, H);
2. calculate the maximum from the point to all points at its right, i.e., the maximum over (x, y) to (W, y);
3. add the two maxima as the output of corner pooling;
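Steps 1 to 3 above reduce to two directional running maxima plus an addition; a minimal numpy version of top-left corner pooling:

```python
import numpy as np

def top_left_corner_pool(fmap):
    """Top-left corner pooling: for each position, the max over all points
    below it plus the max over all points to its right (steps 1-3 above)."""
    down_max = np.maximum.accumulate(fmap[::-1, :], axis=0)[::-1, :]   # max toward bottom edge
    right_max = np.maximum.accumulate(fmap[:, ::-1], axis=1)[:, ::-1]  # max toward right edge
    return down_max + right_max
```

Running the accumulation from the far edge back toward each point makes the whole layer two passes over the feature map instead of a per-pixel scan.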
step 6.2: corner prediction module: the feature map from the corner pooling layer is passed through a 3 x 3 convolution layer and a batch normalization layer; its pixels are added to those of the step-4 output feature map after a 1 x 1 convolution layer and batch normalization; the sum passes through a ReLU activation function and then a 3 x 3 convolution layer, a batch normalization layer, and a ReLU activation function to give the corner prediction result;
step 6.3: feature adjustment module: the feature map from the corner pooling layer is first input into a cross-star deformable convolution module, which learns the geometric structure of the cross-star deformable convolution, explicitly guides the offsets using the size of the corresponding target object, and embeds guided shifting, i.e., the offset from the corner to the center, in the module.
Further, the centripetal shift module in step 7 is:
step 7.1: centripetal offset module: corner candidates and corner offsets are generated as in step 5, and a centripetal offset algorithm is then introduced for all candidate corners to generate the final predicted bounding box. The centripetal offset module predicts the centripetal offset of each corner and matches corner pairs whose offset-decoded positions are aligned; a novel cross-star deformable convolution module is then proposed to perform adaptive feature selection and enrich the visual features at the corner positions;
step 7.2: corner matching: to match corners, a matching method is designed using the centripetal offsets and the corner positions. A pair of corners belonging to the same bounding box shares the center of the box; once corners are obtained from the corner heatmap and the local offset feature map, they are grouped by category and predicted bounding boxes are constructed. For each predicted box, its score is set to the geometric mean of its corner scores, obtained by applying softmax to the predicted corner heatmaps. The central region of each bounding box is then defined as follows, so as to compare how close the decoded centers lie to the bounding box center:
ctlx denotes the top-left x-coordinate of the central region, ctly its top-left y-coordinate, cbrx its bottom-right x-coordinate, and cbry its bottom-right y-coordinate, where 0 < μ ≤ 1 indicates that the width and height of the central region are μ times those of the bounding box; the centers decoded from the centripetal offsets of the top-left and bottom-right corners are (tl_ctx, tl_cty) and (br_ctx, br_cty) respectively.
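A sketch of the matching test: a corner pair is kept only when both centers decoded from the centripetal offsets fall inside the μ-scaled central region of the candidate box. The value μ = 0.5 below is illustrative, not a value from the patent:

```python
def center_region(tlx, tly, brx, bry, mu=0.5):
    """Central region of a box: a mu-scaled box sharing the same center."""
    cx, cy = (tlx + brx) / 2.0, (tly + bry) / 2.0
    w, h = (brx - tlx) * mu, (bry - tly) * mu
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def corners_match(tl, br, tl_ct, br_ct, mu=0.5):
    """Keep the top-left/bottom-right pair only when both decoded centers
    lie inside the shared central region."""
    ctlx, ctly, cbrx, cbry = center_region(tl[0], tl[1], br[0], br[1], mu)
    inside = lambda p: ctlx <= p[0] <= cbrx and ctly <= p[1] <= cbry
    return inside(tl_ct) and inside(br_ct)
```

Corners whose decoded centers disagree, i.e. belong to different objects, fail the region test and never form a box.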
Further, the whole process of step 8 is:
step 8.1: data set preparation: the visible-light image shot by an ordinary camera and the gray-scale image shot by an infrared camera are each input into the guided filtering module to obtain low-frequency and high-frequency images. The high-frequency images are input into the rain and snow image distinguishing module; rain and snow images are fed into the trained rain and snow removal network, which outputs de-rained visible-light and gray-scale images, and the outputs are fused and passed into the database module. Non-rain/snow images are fused directly and passed into the database module;
step 8.2: data set preprocessing: the data in the database module serve as the input data for steps 4, 5, 6, and 7. They are divided into a training set, a validation set, and a test set in the ratio 8:1:1, and data augmentation is performed with random horizontal flipping, random scale resizing, random cropping, and random color jittering, the scale resizing factor lying between 0.6 and 1.3;
step 8.3: network training: the input image size is S x S, the batch size is set to n1, the total number of iterations to 5000, and the initial learning rate to 0.01. The preprocessed data set is fed into the network for training; while the loss keeps decreasing, training continues until the aircraft skin detection network model is obtained after 5000 iterations.
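The stopping rule used in steps 3.2 and 8.3 ("continue while the loss keeps falling, stop once it stabilizes") can be sketched as a plateau check. The `patience` and `tol` knobs are illustrative, not values from the patent:

```python
def train(loss_per_epoch, max_iters=5000, patience=3, tol=1e-4):
    """Run until the loss plateaus or the iteration budget is spent; returns
    the number of iterations executed. loss_per_epoch is any callable
    mapping an iteration index to a loss value."""
    best, stale = float("inf"), 0
    for it in range(1, max_iters + 1):
        loss = loss_per_epoch(it)
        if best - loss > tol:          # loss still decreasing
            best, stale = loss, 0
        else:                          # no meaningful improvement
            stale += 1
            if stale >= patience:
                return it              # loss has stabilized: stop early
    return max_iters
```

The same loop serves both the de-rain networks (budget e0) and the detector (budget 5000); only the loss callable and the budget change.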
The intelligent detection method for aircraft skin defects based on machine vision has the following advantages: the invention introduces a guided filtering and rain/snow image discriminator module for distinguishing rain and snow images from non-rain/snow images, and establishes a rain and snow removal network model for obtaining de-rained images. Fusing the gray-scale image with the RGB image effectively reduces the influence of reflections, and the global channel attention network weights the features of the gray-scale and RGB images along the channel dimension, further reducing that influence. The centripetal offset module works better on small targets, which aids the detection of aircraft skin defects.
Drawings
FIG. 1 is a schematic diagram of an overall process of intelligent detection of an aircraft skin defect based on machine vision.
FIG. 2 is a schematic diagram of a step 4 hourglass convolutional neural network structure model of the present invention.
Detailed Description
In order to better understand the purpose, structure and function of the present invention, the following describes an aircraft skin defect intelligent detection method based on machine vision in further detail with reference to the accompanying drawings.
As shown in fig. 1, an intelligent detection method for aircraft skin defects based on machine vision mainly includes the following steps:
step 1: Aircraft ground crews use an ordinary camera and an infrared camera to photograph the aircraft skin, and the RGB and infrared images are uploaded to a back-end database. A high-quality aircraft skin image data set is obtained through strict screening, and data expansion is achieved through operations such as scaling, rotation, cropping, mirroring, and perspective change, enriching the samples while improving robustness to external shooting conditions. For the imbalance of defect samples, the method enhances the data by pasting defect patches.
step 2: The infrared image data and the RGB image data are each input into the guided filtering module to obtain the corresponding low-frequency and high-frequency images. The high-frequency images of both are input into the rain and snow image distinguishing module; when the module's output is not a rain/snow image, the infrared image and the RGB image are fused directly and form a new database together with the image obtained in step 3; when the module outputs a rain/snow image, the high- and low-frequency images of the infrared and RGB images are input into the rain and snow removal network.
step 3: The rain and snow removal network extracts features from the high-frequency and low-frequency images, and finally combines the output rain-free low-frequency image with the high-frequency image to obtain a restored image without rain and snow. The infrared image and the RGB image are then fused.
step 4: The image data obtained through preprocessing are input into an hourglass convolutional neural network for pose re-estimation and feature extraction.
step 5: The feature map processed and output by the backbone network is input into a relation-aware global channel attention network to weight the channel dimensions of the visible-light and infrared features in the image.
step 6: The feature map output by the channel attention network is input into the corner prediction and feature adjustment modules, which finally integrate the information of the object into the upper-left or lower-right corner point.
step 7: The predicted corner points and the adaptive features obtained in step 5 are input into the centripetal offset module, which predicts the offset of each corner point from the center point and matches corner points according to the predicted corners and centripetal offsets. During matching, if the corner positions after shifting are close enough, a bounding box with higher confidence is obtained.
step 8: Ground crews at the airport use this aircraft skin defect detection method as an auxiliary means of aircraft maintenance, promptly checking the type, size, and position of any damage. This improves the ground crews' efficiency, reduces manual workload, lowers the incidence of missed inspections, and safeguards flight safety.
Further, the guiding filtering module and the rain and snow image distinguishing module in the step 2 are as follows:
step 2.1: the guiding filtering module: the method decomposes the rain image by using the guide filter, has high calculation efficiency and strong edge protection, and has the following principle:
the rain image I is guided and filtered to obtain a low-frequency background image ILAnd high frequency image IH:
I=IH+IL (2-1)
After obtaining the low-frequency image, subtracting the low-frequency image from the original image to obtain a high-frequency image:
IH=I-IL (2-2)
the rain and snow textures mainly exist in a high-frequency part of the image, and in order to make rain and snow information of the high-frequency part more obvious and reduce background misjudgment in rain and snow characteristic learning, the high-frequency image is extracted again.
The high-frequency image I_H is passed through the guided filter again to obtain a new low-frequency background image I_HL and a high-frequency image I_HH:
I_H = I_HL + I_HH (2-3)
The filtered result is then subtracted from the high-frequency image to obtain the final high-frequency rain and snow image:
I_HH = I_H - I_HL (2-4)
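The two-pass decomposition in equations (2-1) through (2-4) can be sketched as follows; a plain mean filter stands in for the guided filter (which the patent uses for its edge-preserving property), so the code only illustrates the decomposition algebra, not the actual filter.

```python
import numpy as np

def box_filter(img, r):
    """Simple mean filter of radius r, used here as a stand-in low-pass
    filter; the patent uses a guided filter, which also preserves edges."""
    k = 2 * r + 1
    padded = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def two_pass_decompose(I, r=2):
    """Equations (2-1) to (2-4): two-pass high-frequency extraction."""
    I_L = box_filter(I, r)      # (2-1): low-frequency background image
    I_H = I - I_L               # (2-2): first high-frequency image
    I_HL = box_filter(I_H, r)   # (2-3): low-frequency part of I_H
    I_HH = I_H - I_HL           # (2-4): final high-frequency rain/snow image
    return I_L, I_H, I_HL, I_HH
```

By construction the two decompositions reconstruct their inputs exactly, mirroring (2-1) and (2-3).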
Step 2.2: rain and snow image distinguishing module: the high-frequency image is divided into N same rectangles on average, and the name of each rectangle is set as alphan(N ═ 1, 2.. times, N), the grayscale value of each matrix is defined as
When the mean of all the rectangle grayscale values is greater than the mean grayscale threshold and their variance is smaller than the set variance threshold, the infrared and RGB images corresponding to the high-frequency image are judged to be rain and snow images. Formulas (2-5) and (2-6) test whether the mean grayscale exceeds the mean grayscale threshold, and formula (2-7) is the variance formula for the rectangle grayscale values.
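A minimal sketch of this distinguishing rule; the block count and both thresholds are illustrative assumptions, not values given in the patent.

```python
import numpy as np

def is_rain_snow(high_freq, n_blocks=4, mean_thresh=0.1, var_thresh=0.05):
    """Sketch of the distinguishing module (step 2.2): split the
    high-frequency image into n_blocks x n_blocks equal rectangles and
    flag it as a rain/snow image when the mean of the per-block grayscale
    means exceeds mean_thresh while their variance stays below var_thresh."""
    h, w = high_freq.shape
    bh, bw = h // n_blocks, w // n_blocks
    block_means = np.array([
        high_freq[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].mean()
        for i in range(n_blocks) for j in range(n_blocks)
    ])
    return bool(block_means.mean() > mean_thresh
                and block_means.var() < var_thresh)
```

A bright, spatially uniform high-frequency residue (rain streaks everywhere) satisfies both conditions; a dark residue fails the mean test.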
Further, the rain and snow removing network in step 3 is as follows:
step 3.1: removing rain and snow network structure: for the high-frequency image, the first layer is a feature extraction layer, and the features are extracted from the high-frequency image in two paths by utilizing the cavity convolution. The convolution kernel is 3 × 3, the void convolution rate is 1 and 2, and the perception fields of the two paths of void convolution are 3 × 3 and 5 × 5 respectively. There is a ReLu activation function after the convolutional layers. And then, connecting the feature maps through a connecting layer, estimating the mapping relation of the feature maps between the rain and snow images and the rain and snow removing images through a second layer of convolution layer, and finally outputting the rain and snow removing high-frequency images through an output layer. For feature extraction of low frequency images, the first layer is the convolution layer with a convolution kernel of 3 x 3, dimension 512. The mapping relation of the feature map between the rain and snow images and the rain and snow removing images is estimated through the second layer of convolution layer by the extracted image features, and the rain and snow removing low-frequency image is output by the last layer of output layer. And adding the high-frequency image and the low-frequency image obtained by output to finally obtain the image for removing rain and snow.
Step 3.2: Rain and snow removing network training process: the input low-frequency and high-frequency images are of size S × S; the batch size is set to n0, the total number of iterations to e0, and the initial learning rate to lr0. The high-frequency and low-frequency images are input into their respective networks for training. If the loss values of the two networks keep decreasing, training continues until the final models are obtained after e0 iterations; if the loss values plateau midway, iteration stops and the two final models are obtained.
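The stopping rule described above (train while the loss keeps falling; stop early once it plateaus) can be sketched as a generic training-loop skeleton; the loss source, patience and tolerance are placeholders, not values from the patent.

```python
def train(step_fn, max_iters=100, patience=5, tol=1e-6):
    """step_fn() performs one training iteration and returns the loss.
    Training stops after max_iters iterations, or earlier once the loss
    has failed to improve by more than tol for `patience` consecutive
    iterations (the "loss tends to be stable" case)."""
    best, stale = float("inf"), 0
    for it in range(1, max_iters + 1):
        loss = step_fn()
        if loss < best - tol:
            best, stale = loss, 0
        else:
            stale += 1
            if stale >= patience:
                return it, best   # early stop: loss plateaued
    return max_iters, best        # full schedule completed
```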
Further, as shown in fig. 2, the hourglass convolutional neural network model in step 4 is:
step 4.1: hourglass convolution neural network structure: the hourglass network is a down-sampling-up-sampling structure shaped like an hourglass and comprises three network layers, namely a convolution layer, an up-sampling layer and a pooling layer. The convolutional layer was characterized using a classical convolutional layer. After the image is input, the feature map is reduced to 2^ n/1(n is the number of the skipping roads, the skipping road means that only one convolution layer with the kernel scale of 1 does not change the data size, only changes the data depth and mainly keeps the original hierarchical information) of the resolution downsampling of the input image resolution by convolution and pooling operation, and the feature map before downsampling is kept by convolution of the other path and is used for being fused with the feature map with the same scale as the upsampling part on the right side. After the downsampled partial feature map reaches the minimum resolution, the network is fused with the retained same-scale feature map after being upsampled by nearest neighbor, and finally the network outputs a feature set representing the probability of each joint point appearing in the pixel. The structural design of the hourglass convolutional neural network aims to acquire information contained in images under different scales.
Step 4.2: hourglass convolutional neural network use: and inputting an aircraft skin picture, and providing a good corner characteristic map of the aircraft skin defect picture for corner prediction in a subsequent centripetal migration network.
Further, in step 5, the relationship-aware global channel attention mechanism model is as follows:
step 5.1: relationship-aware global channel attention mechanism introduction: for each feature position of the input corner feature map in the step 3, in order to more compactly capture global structural information and local appearance information, an RGA module is used for stacking various relations, namely, the pairwise correlation of all feature positions is connected with the feature itself, and a shallow neural network is used for learning attention. By applying the RGA module, the feature representation capability can be obviously enhanced, so that the feature with higher discriminability can be learned.
Step 5.2: Detailed realization of the attention mechanism: the input feature tensor is X ∈ R^(C×H×W). From the input feature map, N = H × W feature nodes are obtained, each feature node x_i having dimension C. The pairwise relation r_(i,j) from node i to node j is defined as the pointwise correlation in the embedding space.
Similarly, the corresponding correlation from node j to node i, r_(j,i) = f_c(x_j, x_i), can be obtained. The pair (r_(i,j), r_(j,i)) describes the correlation between x_i and x_j, and the similarity matrix R_S ∈ R^(N×N) represents the pairwise correlations of all nodes.
For the i-th feature node, the pairwise correlations with all nodes j = 1, 2, …, N are stacked in a fixed order to form the relation vector r_i = [R_S(i, :), R_S(:, i)] ∈ R^(2N).
Because x_i and the correlation vector are not in the same feature domain, the correlation vector is transformed using the following formula and the transformed vectors are concatenated:
where pool_c is a pooling layer, and φ_s and the corresponding relation-embedding function embed the feature x_i itself and the global correlations r_i respectively (each implemented as a 1 × 1 convolutional layer + batch normalization layer + ReLU activation function).
The attention value a_i is then generated by mining valuable knowledge from the learned model.
Here W_1 and W_2 are 1 × 1 convolution operations followed by batch normalization.
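As an illustration of step 5.2, the sketch below builds the similarity matrix R_S from dot products (standing in for the learned embedding-space affinity f_c) and stacks [R_S(i, :), R_S(:, i)] into the 2N-dimensional relation vector; the pooling, embedding functions and 1 × 1 convolutions of the full RGA module are omitted.

```python
import numpy as np

def relation_vectors(X):
    """X: (C, H, W) feature tensor. Returns an (N, 2N) array of stacked
    pairwise relations, N = H * W. A dot product stands in for the
    learned embedding-space affinity f_c of the RGA module."""
    C, H, W = X.shape
    nodes = X.reshape(C, H * W).T   # (N, C) feature nodes x_i
    R = nodes @ nodes.T             # (N, N) similarity matrix R_S
    # r_i = [R_S(i, :), R_S(:, i)] for every node i, stacked row-wise
    return np.concatenate([R, R.T], axis=1)
```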
Further, in step 6, the corner point predicting and feature adjusting module is:
Step 6.1: Corner pooling layer: the feature map obtained in step 4 is input into the corner pooling layer. Corner pooling integrates the information of the object into an upper-left corner point or a lower-right corner point: the pooling region of the upper-left point consists of the feature points to its right and below it, and the pooling region of the lower-right point consists of the feature points to its left and above it.
Taking one feature point as an example: assuming the coordinates of the current point are (x, y), the width of the feature map is W, and the height is H, the corner pooling for the upper-left point is computed as follows:
1. The maximum value from the point to all points below it, i.e., (x, y) to (x, H), is calculated.
2. The maximum of all points from this point to its rightmost side, i.e., (x, y) to (W, y), is calculated.
3. The two maxima are added as the output of corner pooling.
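The three steps above amount to a suffix maximum along each axis; for the top-left corner they can be written as:

```python
import numpy as np

def top_left_corner_pool(fmap):
    """Top-left corner pooling on an (H, W) feature map: for each point,
    the max over that point and all points below it plus the max over
    that point and all points to its right, per the three steps above."""
    # Step 1: max from (x, y) down to (x, H) -> suffix max along rows.
    down = np.maximum.accumulate(fmap[::-1, :], axis=0)[::-1, :]
    # Step 2: max from (x, y) right to (W, y) -> suffix max along columns.
    right = np.maximum.accumulate(fmap[:, ::-1], axis=1)[:, ::-1]
    # Step 3: add the two maxima.
    return down + right
```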
Step 6.2: Corner prediction module: the feature map from the corner pooling layer is passed through a 3 × 3 convolutional layer and a batch normalization layer; the feature map output in step 4 is passed through a 1 × 1 convolutional layer and a batch normalization layer and added to it pixel-wise. After a ReLU activation function, the result is passed through a 3 × 3 convolutional layer, a batch normalization layer and a ReLU activation function to obtain the corner prediction result.
Step 6.3: Feature adjustment module: a cross-star deformable convolution is used as the feature adjustment module. First, the feature map from the corner pooling layer is input into the cross-star deformable convolution module. To learn the geometry of the cross-star deformable convolution, the size of the corresponding target object can be used to explicitly guide the offsets, since the shape of the cross star is related to the shape of the bounding box. However, because there is much useless information outside the object, a guiding shift, namely the offset from the corner to the center, is embedded in the module.
Further, the centripetal shift module in step 7 is:
Step 7.1: Centripetal shift module: corner candidates and corner offsets are generated according to step 6. Then, for all candidate corners, a centripetal shift algorithm is introduced to pursue high-quality corner pairs and generate the final predicted bounding boxes. Specifically, the centripetal shift module predicts the centripetal shift of each corner and matches corners whose shifted, position-decoded results are aligned, forming corner pairs. A novel cross-star deformable convolution module then derives the convolution offsets from the corner-to-center offsets, so that adaptive feature selection can be performed; this enriches the visual features at corner positions, which is important for improving the accuracy of the centripetal shifts.
Step 7.2: Corner matching: to match corners, a matching method using the centripetal shifts and the corner positions is designed. It is intuitive and reasonable that a pair of corners belonging to the same bounding box should share the center of that box. Since the predicted center corresponding to each corner can be decoded from its position and centripetal shift, it is easy to check whether the centers decoded from a pair of corners are sufficiently close to each other and to the center of the bounding box formed by that pair. Once corners are obtained from the corner heatmaps and the local offset feature maps, corners of the same category are grouped and predicted bounding boxes are constructed. The score of each predicted box is set to the geometric mean of its corner scores, obtained by applying softmax to the predicted corner heatmaps. The central region of each bounding box is then defined by the following equation, in order to compare the proximity of the decoded centers to the bounding box center.
ctl_x denotes the top-left x-coordinate, ctl_y the top-left y-coordinate, cbr_x the bottom-right x-coordinate, and cbr_y the bottom-right y-coordinate, where 0 < μ ≤ 1 indicates that the width and height of the central region are μ times the width and height of the bounding box. Through the centripetal shifts, the centers corresponding to the top-left and bottom-right corners can be decoded as (tl_ctx, tl_cty) and (br_ctx, br_cty) respectively.
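A sketch of decoding centers from centripetal shifts and testing a corner pair against the central region; the value of μ used here is an illustrative assumption, not a value from the patent.

```python
import numpy as np

def decode_center(corner_xy, centripetal_shift):
    """Decode the implied box center from a corner position and its
    predicted centripetal shift (both given as (x, y) pairs)."""
    return np.asarray(corner_xy, float) + np.asarray(centripetal_shift, float)

def corners_match(tl, tl_shift, br, br_shift, mu=0.5):
    """Matching test: both decoded centers must fall inside the central
    region of the candidate box, whose width and height are mu times
    those of the box (0 < mu <= 1)."""
    tl = np.asarray(tl, float)
    br = np.asarray(br, float)
    if not (tl[0] < br[0] and tl[1] < br[1]):
        return False                 # not a geometrically valid box
    center = (tl + br) / 2.0
    half = mu * (br - tl) / 2.0      # half-extent of the central region
    for c in (decode_center(tl, tl_shift), decode_center(br, br_shift)):
        if np.any(np.abs(c - center) > half):
            return False
    return True
```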
Further, the whole training process of the aircraft skin detection system comprises the following steps:
1) Data set preparation: the visible light image captured by an ordinary camera and the grayscale image captured by an active infrared camera are input separately into the guiding filtering module to obtain low-frequency and high-frequency images. The high-frequency images are input into the rain and snow image distinguishing module; rain and snow images are input into the trained rain and snow removing network, which outputs a rain-and-snow-removed visible light image and grayscale image; the outputs are fused and transmitted to the database module. Images without rain and snow are directly fused and transmitted to the database module.
2) Data set preprocessing: the data in the database module are used as the input data for steps 4, 5, 6 and 7, divided into a training set, a validation set and a test set in the ratio 8:1:1, and augmented by random horizontal flipping, random scaling (with a ratio between 0.6 and 1.3), random cropping, random color jittering and similar methods.
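The 8:1:1 split can be sketched as follows; the shuffle seed is an illustrative assumption.

```python
import random

def split_dataset(items, ratios=(8, 1, 1), seed=0):
    """Shuffle and split into train/val/test by the 8:1:1 ratio above."""
    items = list(items)
    random.Random(seed).shuffle(items)
    total = sum(ratios)
    n = len(items)
    n_train = n * ratios[0] // total
    n_val = n * ratios[1] // total
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```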
3) Network training: the input image size is S × S, the batch size is set to n1, the total number of iterations to 5000, and the initial learning rate to 0.01. The preprocessed data set is input into the network for training; if the network loss keeps decreasing, training continues until the aircraft skin detection network model is obtained after the 5000 iterations.
It is to be understood that the present invention has been described with reference to certain embodiments, and that various changes in the features and embodiments, or equivalent substitutions may be made therein by those skilled in the art without departing from the spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.
Claims (8)
1. An aircraft skin defect intelligent detection method based on machine vision is characterized by comprising the following steps:
step 1: the method comprises the following steps that aircraft ground service personnel use a common camera and an infrared camera to shoot aircraft skin, the shot RGB images and infrared images are uploaded to a back-end database, a high-quality aircraft skin image data set is obtained through screening, operations of zooming, rotating, cutting, mirroring and perspective change are carried out, and data enhancement is carried out on a defect sample in a map pasting mode;
step 2: respectively inputting the infrared image data and the RGB image data into a guide filtering module to respectively obtain corresponding low-frequency and high-frequency images, inputting the high-frequency images of the two data into a rain and snow image distinguishing module, and directly carrying out image fusion on the infrared image and the RGB image when the output of the module is not a rain and snow image, and forming a new database with the image obtained in the step 3; when the module outputs rain and snow images, inputting high and low frequency images of the infrared image and the RGB image into a rain and snow removing network;
and step 3: the rain and snow removing network extracts the characteristics of the high-frequency image and the low-frequency image, finally combines the output low-frequency image without rain and snow with the high-frequency image to obtain a restored image without rain and snow, and performs image fusion on the infrared image and the RGB image;
and 4, step 4: inputting the image data obtained through preprocessing into an hourglass convolution neural network for re-estimating the posture and extracting the characteristics;
and 5: inputting the feature graph processed and output by the backbone network into a relation perception global channel attention network to weight the channel dimensions of the visible light feature and the infrared feature in the image;
step 6: inputting the feature map output by the channel attention network into a corner point prediction and feature adjustment module, and finally integrating the information of the object into a left upper corner point or a right lower corner point;
and 7: inputting the predicted corner points and the adaptive features obtained in the step 5 into a centripetal deviation module, predicting the deviation amount of each corner point and the central point, and performing corner point matching according to the predicted corner points and the centripetal deviation amount;
and 8: and training an aircraft skin detection system.
2. The intelligent aircraft skin defect detection method based on machine vision according to claim 1, wherein the guiding filtering module and the sleet image distinguishing module in the step 2 are as follows:
step 2.1: the guiding filtering module: the rain image is decomposed by a guiding filter, and the principle is as follows:
the rain image I is guided-filtered to obtain a low-frequency background image I_L and a high-frequency image I_H:
I = I_H + I_L (2-1)
After obtaining the low-frequency image, subtracting the low-frequency image from the original image to obtain a high-frequency image:
I_H = I - I_L (2-2)
the high-frequency image I_H passes through the guided filter again to obtain a new low-frequency background image I_HL and a high-frequency image I_HH:
I_H = I_HL + I_HH (2-3)
Then subtracting the image passing through the guiding filter from the high-frequency image to obtain a final high-frequency rain and snow image
I_HH = I_H - I_HL; (2-4)
Step 2.2: rain and snow image distinguishing module: the high-frequency image is divided into N same rectangles on average, and the name of each rectangle is set as alphan(N ═ 1, 2.. times, N), the grayscale value of each matrix is defined as
when the mean of all the rectangle grayscale values is greater than the mean grayscale threshold and their variance is smaller than the set variance threshold, the infrared and RGB images corresponding to the high-frequency image are judged to be rain and snow images; formulas (2-5) and (2-6) test whether the mean grayscale exceeds the mean grayscale threshold, and formula (2-7) is the variance formula for the rectangle grayscale values:
3. The intelligent aircraft skin defect detection method based on machine vision as claimed in claim 2, wherein the rain and snow removing network in step 3 is:
step 3.1: rain and snow removing network structure: for the high-frequency image, the first layer is a feature extraction layer in which the high-frequency image is split into two paths and features are extracted using dilated convolution; the convolution kernel is 3 × 3, the dilation rates are 1 and 2, and the receptive fields of the two dilated-convolution paths are 3 × 3 and 5 × 5 respectively; a ReLU activation function follows the convolutional layers; the feature maps are then joined by a concatenation layer, the second convolutional layer estimates the feature-map mapping between the rain and snow image and the rain-and-snow-removed image, and finally the output layer outputs the rain-and-snow-removed high-frequency image; for feature extraction from the low-frequency image, the first layer is a convolutional layer with a 3 × 3 kernel and 512 channels, the extracted image features pass through the second convolutional layer to estimate the feature-map mapping between the rain and snow image and the rain-and-snow-removed image, the last output layer outputs the rain-and-snow-removed low-frequency image, and the output high-frequency and low-frequency images are added to obtain the rain-and-snow-removed image;
step 3.2: rain and snow removing network training process: the input low-frequency and high-frequency images are of size S × S, the batch size is set to n0, the total number of iterations to e0, and the initial learning rate to lr0; the high-frequency and low-frequency images are input into their respective networks for training, and if the loss values of the two networks keep decreasing, training continues until the final models are obtained after e0 iterations; if the loss values plateau midway, iteration stops and the two final models are obtained.
4. The intelligent detection method for the skin defects of the aircraft based on the machine vision as claimed in claim 3, wherein the hourglass convolutional neural network model in the step 4 is as follows:
step 4.1: hourglass convolutional neural network structure: the hourglass network is a downsampling-upsampling structure shaped like an hourglass, comprising three kinds of network layers, namely convolutional layers, upsampling layers and pooling layers; classical convolutional layers are used for feature extraction; after an image is input, convolution and pooling operations downsample the feature map to 1/2^n of the input image resolution, where n is the number of skip branches; meanwhile, another convolutional path retains the feature map before each downsampling, for fusion with the same-scale feature map in the upsampling part on the right side; after the downsampled part reaches the minimum resolution, the network upsamples by nearest-neighbor interpolation and fuses with the retained same-scale feature maps, and finally outputs a feature set representing, for each pixel, the probability that each key point appears there;
step 4.2: hourglass convolutional neural network usage: an aircraft skin picture is input, and a corner feature map of the aircraft skin defect picture is provided for corner prediction in the subsequent centripetal shift network.
5. The machine-vision-based intelligent detection method for the skin defects of the aircraft as claimed in claim 4, wherein the relationship-aware global channel attention mechanism model in the step 5 is as follows:
step 5.1: relation-aware global channel attention mechanism: for each feature position of the corner feature map input from step 4, an RGA module is used to stack the relations, i.e., the pairwise correlations of all feature positions are concatenated with the feature itself, and a shallow neural network is used to learn the attention;
step 5.2: detailed realization of the attention mechanism: the input feature tensor is X ∈ R^(C×H×W); from the input feature map, N = H × W feature nodes are obtained, each feature node x_i having dimension C; the pairwise relation r_(i,j) from node i to node j is defined as the pointwise correlation in the embedding space:
similarly, the corresponding correlation from node j to node i, r_(j,i) = f_c(x_j, x_i), can be obtained; the pair (r_(i,j), r_(j,i)) describes the correlation between x_i and x_j, and the similarity matrix R_S ∈ R^(N×N) represents the pairwise correlations of all nodes;
for the i-th feature node, the pairwise correlations with all nodes j = 1, 2, …, N are stacked in a fixed order to form the relation vector r_i = [R_S(i, :), R_S(:, i)] ∈ R^(2N),
because x_i and the correlation vector are not in the same feature domain, the correlation vector is transformed using the following formula and the transformed vectors are concatenated:
where pool_c is a pooling layer, and φ_s and the corresponding relation-embedding function embed the feature x_i itself and the global correlations r_i respectively, each being a 1 × 1 convolutional layer + batch normalization layer + ReLU activation function;
the attention value a_i is then generated by mining valuable knowledge from the learned model:
where W_1 and W_2 are 1 × 1 convolution operations followed by batch normalization.
6. The machine-vision-based intelligent detection method for the skin defects of the aircraft as claimed in claim 5, wherein the corner point prediction and feature adjustment module in the step 6 is:
step 6.1: corner pooling layer: the feature map obtained in step 4 is input into the corner pooling layer, and corner pooling integrates the information of the object into an upper-left corner point or a lower-right corner point; the pooling region of the upper-left point consists of the feature points to its right and below it, and the pooling region of the lower-right point consists of the feature points to its left and above it:
assuming that the coordinates of a current point are (x, y), the width of the feature map is W, and the height is H, the corner pooling calculation process is as follows:
1. calculating the maximum value of the point to all points below the point, namely the maximum value of (x, y) to (x, H);
2. calculating the maximum value of all points from the point to the rightmost side, namely the maximum value of all points from (x, y) to (W, y);
3. adding the two maximum values to serve as the output of angular point pooling;
step 6.2: corner prediction module: the feature map from the corner pooling layer is passed through a 3 × 3 convolutional layer and a batch normalization layer; the feature map output in step 4 is passed through a 1 × 1 convolutional layer and a batch normalization layer and added to it pixel-wise; after a ReLU activation function, the result is passed through a 3 × 3 convolutional layer, a batch normalization layer and a ReLU activation function to obtain the corner prediction result;
step 6.3: feature adjustment module: first, the feature map from the corner pooling layer is input into the cross-star deformable convolution module; to learn the geometry of the cross-star deformable convolution, the size of the corresponding target object is used to explicitly guide the offsets, and a guiding shift, namely the offset from the corner to the center, is embedded in the module.
7. The machine-vision-based intelligent detection method for the skin defects of the aircraft as claimed in claim 6, wherein the centripetal deviation module in the step 7 is:
step 7.1: centripetal shift module: corner candidates and corner offsets are generated according to step 6; then, for all candidate corners, a centripetal shift algorithm is introduced to generate the final predicted bounding boxes: the centripetal shift module predicts the centripetal shift of each corner and matches corners whose shifted, position-decoded results are aligned, forming corner pairs; a novel cross-star deformable convolution module is then provided for adaptive feature selection, enriching the visual features at the corner positions;
step 7.2: corner matching: to match corners, a matching method using the centripetal shifts and the corner positions is designed; a pair of corners belonging to the same bounding box share the center of that box; once corners are obtained from the corner heatmaps and the local offset feature maps, corners of the same category are grouped and predicted bounding boxes are constructed; for each predicted box, its score is set to the geometric mean of its corner scores, obtained by applying softmax to the predicted corner heatmaps; the central region of each bounding box is then defined as follows, to compare the proximity of the decoded centers to the bounding box center:
ctl_x denotes the top-left x-coordinate, ctl_y the top-left y-coordinate, cbr_x the bottom-right x-coordinate, and cbr_y the bottom-right y-coordinate, where 0 < μ ≤ 1 indicates that the width and height of the central region are μ times the width and height of the bounding box; through the centripetal shifts, the centers corresponding to the top-left and bottom-right corners are decoded as (tl_ctx, tl_cty) and (br_ctx, br_cty) respectively.
8. The machine-vision-based intelligent detection method for the skin defects of the aircraft as claimed in claim 7, wherein the full process of step 8 is as follows:
step 8.1: data set preparation: the visible light image captured by an ordinary camera and the grayscale image captured by an active infrared camera are input separately into the guiding filtering module to obtain low-frequency and high-frequency images; the high-frequency images are input into the rain and snow image distinguishing module, rain and snow images are input into the trained rain and snow removing network, which outputs the rain-and-snow-removed visible light and grayscale images, and the outputs are fused and transmitted to the database module; images without rain and snow are directly fused and transmitted to the database module;
step 8.2: data set preprocessing: the data in the database module are used as the input data for steps 4, 5, 6 and 7, divided into a training set, a validation set and a test set in the ratio 8:1:1, and augmented by random horizontal flipping, random scaling, random cropping and random color jittering, with the scaling ratio between 0.6 and 1.3;
step 8.3: network training: the input image size is S × S, the batch size is set to n1, the total number of iterations to 5000, and the initial learning rate to 0.01; the preprocessed data set is input into the network for training, and if the network loss keeps decreasing, training continues until the aircraft skin detection network model is obtained after the 5000 iterations.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111457448.2A CN114119586A (en) | 2021-12-01 | 2021-12-01 | Intelligent detection method for aircraft skin defects based on machine vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111457448.2A CN114119586A (en) | 2021-12-01 | 2021-12-01 | Intelligent detection method for aircraft skin defects based on machine vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114119586A true CN114119586A (en) | 2022-03-01 |
Family
ID=80369909
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111457448.2A Withdrawn CN114119586A (en) | 2021-12-01 | 2021-12-01 | Intelligent detection method for aircraft skin defects based on machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114119586A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114535451A (en) * | 2022-04-22 | 2022-05-27 | 南通精丰智能设备有限公司 | Intelligent bending machine control method and system for heat exchanger production |
CN114596290A (en) * | 2022-03-11 | 2022-06-07 | 腾讯科技(深圳)有限公司 | Defect detection method, defect detection device, storage medium, and program product |
CN115063725A (en) * | 2022-06-23 | 2022-09-16 | 中国民航大学 | Airplane skin defect identification system based on multi-scale self-adaptive SSD algorithm |
CN115082434A (en) * | 2022-07-21 | 2022-09-20 | 浙江华是科技股份有限公司 | Multi-source feature-based magnetic core defect detection model training method and system |
CN117788471A (en) * | 2024-02-27 | 2024-03-29 | 南京航空航天大学 | Method for detecting and classifying aircraft skin defects based on YOLOv5 |
CN117934309A (en) * | 2024-03-18 | 2024-04-26 | 昆明理工大学 | Unregistered infrared visible image fusion method based on modal dictionary and feature matching |
2021
- 2021-12-01 CN CN202111457448.2A patent/CN114119586A/en not_active Withdrawn
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114596290A (en) * | 2022-03-11 | 2022-06-07 | 腾讯科技(深圳)有限公司 | Defect detection method, defect detection device, storage medium, and program product |
CN114596290B (en) * | 2022-03-11 | 2024-08-27 | 腾讯科技(深圳)有限公司 | Defect detection method and device, storage medium, and program product |
CN114535451A (en) * | 2022-04-22 | 2022-05-27 | 南通精丰智能设备有限公司 | Intelligent bending machine control method and system for heat exchanger production |
CN114535451B (en) * | 2022-04-22 | 2022-06-28 | 南通精丰智能设备有限公司 | Intelligent bending machine control method and system for heat exchanger production |
CN115063725A (en) * | 2022-06-23 | 2022-09-16 | 中国民航大学 | Aircraft skin defect identification system based on multi-scale adaptive SSD algorithm |
CN115063725B (en) * | 2022-06-23 | 2024-04-26 | 中国民航大学 | Aircraft skin defect identification system based on multi-scale adaptive SSD algorithm |
CN115082434A (en) * | 2022-07-21 | 2022-09-20 | 浙江华是科技股份有限公司 | Multi-source feature-based magnetic core defect detection model training method and system |
CN115082434B (en) * | 2022-07-21 | 2022-12-09 | 浙江华是科技股份有限公司 | Multi-source feature-based magnetic core defect detection model training method and system |
CN117788471A (en) * | 2024-02-27 | 2024-03-29 | 南京航空航天大学 | Method for detecting and classifying aircraft skin defects based on YOLOv5 |
CN117788471B (en) * | 2024-02-27 | 2024-04-26 | 南京航空航天大学 | YOLOv 5-based method for detecting and classifying aircraft skin defects |
CN117934309A (en) * | 2024-03-18 | 2024-04-26 | 昆明理工大学 | Unregistered infrared visible image fusion method based on modal dictionary and feature matching |
CN117934309B (en) * | 2024-03-18 | 2024-05-24 | 昆明理工大学 | Unregistered infrared visible image fusion method based on modal dictionary and feature matching |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114119586A (en) | Intelligent detection method for aircraft skin defects based on machine vision | |
WO2021088300A1 (en) | Rgb-d multi-mode fusion personnel detection method based on asymmetric double-stream network | |
CN108304873B (en) | Target detection method and system based on high-resolution optical satellite remote sensing image | |
CN111709416B (en) | License plate positioning method, device, system and storage medium | |
CN110163213B (en) | Remote sensing image segmentation method based on disparity map and multi-scale depth network model | |
CN107239730B (en) | Quaternion deep neural network model method for intelligent automobile traffic sign recognition | |
CN111242864B (en) | Finger vein image restoration method based on Gabor texture constraint | |
CN103942557B (en) | A kind of underground coal mine image pre-processing method | |
CN115311241B (en) | Underground coal mine pedestrian detection method based on image fusion and feature enhancement | |
CN110929593A (en) | Real-time salient pedestrian detection method based on detail discrimination | |
CN110060273B (en) | Remote sensing image landslide mapping method based on deep neural network | |
CN106373146A (en) | Target tracking method based on fuzzy learning | |
CN111274964B (en) | Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle | |
CN113989613A (en) | Light-weight high-precision ship target detection method coping with complex environment | |
CN112115871B (en) | High-low frequency interweaving edge characteristic enhancement method suitable for pedestrian target detection | |
CN114495010A (en) | Cross-modal pedestrian re-identification method and system based on multi-feature learning | |
CN112560852A (en) | Single-stage target detection method with rotation adaptive capacity based on YOLOv3 network | |
CN114495170A (en) | Pedestrian re-identification method and system based on local self-attention inhibition | |
CN101286236B (en) | Infrared object tracking method based on multi- characteristic image and average drifting | |
CN115909110A (en) | Lightweight infrared unmanned aerial vehicle target tracking method based on Siamese network | |
Peng et al. | Incorporating generic and specific prior knowledge in a multiscale phase field model for road extraction from VHR images | |
Chen et al. | Visual depth guided image rain streaks removal via sparse coding | |
CN114332644A (en) | Large-view-field traffic density acquisition method based on video satellite data | |
CN114155165A (en) | Image defogging method based on semi-supervision | |
CN116935249A (en) | Small target detection method for three-dimensional feature enhancement under unmanned airport scene |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 2022-03-01 |