CN117252842A - Aircraft skin defect detection and network model training method - Google Patents
- Publication number
- CN117252842A CN117252842A CN202311259366.6A CN202311259366A CN117252842A CN 117252842 A CN117252842 A CN 117252842A CN 202311259366 A CN202311259366 A CN 202311259366A CN 117252842 A CN117252842 A CN 117252842A
- Authority
- CN
- China
- Prior art keywords
- feature map
- image
- model
- aircraft skin
- processing
- Prior art date
- Legal status (assumed, not a legal conclusion): Pending
Classifications
- G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
- G06N 3/0464 — Neural networks; architecture; convolutional networks [CNN, ConvNet]
- G06N 3/08 — Neural networks; learning methods
- G06V 10/20 — Image or video recognition or understanding; image preprocessing
- G06V 10/764 — Recognition using pattern recognition or machine learning; classification, e.g. of video objects
- G06V 10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V 10/806 — Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
- G06F 2111/04 — CAD techniques; constraint-based CAD
- G06T 2207/10004 — Image acquisition modality; still image; photographic image
- G06T 2207/20081 — Special algorithmic details; training; learning
- G06T 2207/20084 — Special algorithmic details; artificial neural networks [ANN]
Abstract
The invention relates to an aircraft skin defect detection and network model training method, which comprises the following steps: acquiring an aircraft skin image as an object to be detected; inputting the object to be detected into an aircraft skin defect detection network for defect detection, wherein the object to be detected is processed by a YOLOV7 network to obtain three feature maps of different scales, each feature map undergoes dynamic attention processing and decoupling detection processing to obtain a corresponding decoupled feature map, and the decoupled feature maps are fused to obtain an image with detection boxes and defect labels, which is taken as the aircraft skin defect detection result; the aircraft skin defect detection network is obtained by training an aircraft skin defect detection network model based on the constructed dynamic attention semi-supervised loss function. Compared with the prior art, the feature extraction capability is enhanced, the detection precision for small defects is improved, and the cost of manually labeling data is greatly reduced.
Description
Technical Field
The invention relates to the field of computer vision, in particular to an aircraft skin defect detection and network model training method.
Background
Aircraft skin defect detection is an important aviation safety inspection task whose purpose is to detect and identify defects and damage on the aircraft skin, so as to ensure the structural integrity and safety of the aircraft. Traditional aircraft skin defect detection relies mainly on visual inspection by inspectors; this approach has low detection efficiency and is strongly affected by the inspectors' subjective judgment. In recent years, with the development of computer vision and deep learning, deep-learning-based aircraft skin defect detection has been widely studied and applied. However, most current mainstream methods rely on fully supervised training, which requires a large amount of labeled data to train a model: on the one hand, it is difficult to collect large numbers of aircraft skin defect images, and on the other hand, labeling the images is very time-consuming. Therefore, there is a need for a semi-supervised aircraft skin defect detection method that can train a model using unlabeled aircraft skin images.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The technical problem to be solved by the application is to provide an aircraft skin defect detection and network model training method that improves the detection of small defects while reducing the labor cost of data labeling.
In a first aspect, an embodiment provides a method for detecting a skin defect of an aircraft, including:
acquiring an aircraft skin image as an object to be detected;
inputting an object to be detected into an aircraft skin defect detection network for defect detection, wherein the method comprises the following steps:
processing the object to be detected by a YOLOV7 network to obtain three feature maps of different scales, comprising: a first feature map, a second feature map, and a third feature map;
respectively carrying out dynamic attention processing on the first feature map, the second feature map and the third feature map to correspondingly obtain a first dynamic attention feature map, a second dynamic attention feature map and a third dynamic attention feature map;
respectively performing decoupling detection processing on the first dynamic attention feature map, the second dynamic attention feature map and the third dynamic attention feature map to obtain a corresponding first decoupling feature map, a corresponding second decoupling feature map and a corresponding third decoupling feature map;
performing feature fusion on the first decoupling feature map, the second decoupling feature map and the third decoupling feature map to obtain an image with a detection frame and a defect label;
taking the image with the detection frame and the defect label as a result of aircraft skin defect detection;
wherein the aircraft skin defect detection network is obtained by performing model training on an aircraft skin defect detection network model based on the constructed dynamic attention semi-supervised loss function.
In one embodiment, for any dynamic attention profile, the decoupling detection process includes:
the input feature map is first subjected to a 3×3 convolution and then split into two branches; in one branch, two further 3×3 convolutions are followed by one 1×1 convolution to obtain a feature map with classification scores; in the other branch, two further 3×3 convolutions are followed by a split into two sub-branches, each processed by one 1×1 convolution, so as to obtain a feature map with regression scores and a feature map with objectness scores respectively;
and performing feature fusion on the feature map with classification scores, the feature map with regression scores and the feature map with objectness scores to obtain a decoupled feature map.
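The branch topology described above can be sketched as follows. This is a minimal, hypothetical sketch, not the patent's actual head: random weights, 1×1 channel-mixing convolutions standing in for the 3×3 convolutions, and an illustrative num_classes=4; it shows only the shape flow of the two branches and the final channel-wise fusion.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, c_out):
    """Channel-mixing 1x1 convolution on a (C, H, W) map (random weights)."""
    c_in = x.shape[0]
    w = rng.standard_normal((c_out, c_in)) * 0.1
    return np.einsum('oc,chw->ohw', w, x)

def decoupled_head(feat, num_classes):
    """Sketch of the two-branch decoupled head topology in the text."""
    stem = conv1x1(feat, feat.shape[0])        # shared conv (stand-in for 3x3)
    # classification branch: two convs, then 1x1 -> class scores
    a = conv1x1(conv1x1(stem, 64), 64)
    cls = conv1x1(a, num_classes)
    # regression/objectness branch: two convs, then split into two 1x1 heads
    b = conv1x1(conv1x1(stem, 64), 64)
    reg = conv1x1(b, 4)                        # box regression scores
    obj = conv1x1(b, 1)                        # objectness scores
    # feature fusion: concatenate the three score maps along the channel axis
    return np.concatenate([reg, obj, cls], axis=0)

feat = rng.standard_normal((128, 20, 20))      # e.g. the coarsest feature map
out = decoupled_head(feat, num_classes=4)
print(out.shape)                               # (9, 20, 20)
```

With four defect classes, the fused map carries 4 regression + 1 objectness + 4 classification channels per spatial position.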
In one embodiment, the method for training the aircraft skin defect detection network model comprises the following steps:
data acquisition: collecting aircraft skin images, including acquiring aircraft skin surface images under different illumination conditions and at different damage stages;
data classification: classifying the acquired image data according to defect type, selecting within each class the images meeting the definition (clarity) requirement to form a first image dataset, and forming a second image dataset from the remaining images;
image preprocessing: preprocessing the images in the first image dataset, including filtering and denoising;
defect labeling: performing defect labeling on the preprocessed images;
constructing a semi-supervised defect detection network model based on dynamic decoupling attention, wherein the semi-supervised defect detection network model comprises a teacher model and a student model;
network model training: taking the images in the second image dataset and the defect-labeled images in the first image dataset as inputs, performing model training based on the teacher model and the student model in combination with the dynamic attention semi-supervised loss function to obtain a trained student model, and taking the student model as the aircraft skin defect detection network.
In one embodiment, the training of the model by combining the dynamic attention semi-supervised loss function based on the teacher model and the student model by taking the image in the second image data set and the image in the first image data set after the defect labeling as inputs includes:
performing first strong data enhancement processing on the images in the second image data set;
inputting the image subjected to the first strong data enhancement processing, together with the exponential moving average obtained from the student model, into the teacher model for pseudo-label assignment processing;
performing adaptive label assignment processing on the pseudo-label-assigned feature maps to obtain reliable pseudo labels and uncertain pseudo labels;
performing second strong data enhancement processing on the image subjected to the first strong data enhancement processing;
performing weak data enhancement processing on the images in the first image dataset after the defect labeling;
inputting the image subjected to the second strong data enhancement processing, the image subjected to the weak data enhancement processing, and the fed-back loss value into the student model for exponential moving average processing and real-label assignment processing, so as to correspondingly obtain an exponential moving average and a feature map with real labels;
and calculating a loss value based on the loss function, the reliable pseudo labels, the feature parameters of the pseudo-label-assigned feature maps and the feature parameters of the feature map with real labels, and feeding the calculated loss value back to the student model for training.
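Two moving parts of this loop — the exponential moving average that keeps the teacher a slow copy of the student, and the splitting of teacher predictions into reliable and uncertain pseudo labels — can be sketched as follows. This is a hypothetical illustration: the decay alpha and the confidence thresholds 0.7/0.3 are placeholder values, and the patent's adaptive label assignment is reduced here to simple score thresholding.

```python
import numpy as np

def ema_update(teacher, student, alpha=0.999):
    """Exponential moving average: the teacher slowly tracks the student."""
    return {k: alpha * teacher[k] + (1.0 - alpha) * student[k] for k in teacher}

def split_pseudo_labels(scores, hi=0.7, lo=0.3):
    """Toy stand-in for adaptive assignment: confident predictions become
    reliable pseudo labels, middling ones uncertain, the rest are dropped."""
    reliable = [i for i, s in enumerate(scores) if s >= hi]
    uncertain = [i for i, s in enumerate(scores) if lo <= s < hi]
    return reliable, uncertain

teacher = {'w': np.zeros(3)}
student = {'w': np.ones(3)}
teacher = ema_update(teacher, student, alpha=0.9)
print(teacher['w'])                       # [0.1 0.1 0.1]

scores = [0.95, 0.55, 0.10, 0.80]
print(split_pseudo_labels(scores))        # ([0, 3], [1])
```

A large alpha (e.g. 0.999) makes the teacher a stable ensemble of past students, which is what makes its pseudo labels usable as training targets.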
In one embodiment, the calculating the loss value based on the loss function, the reliable pseudo tag, the feature parameter of the feature map after the pseudo tag assignment process, and the feature parameter of the feature map with the real tag includes:
L = L_s + λ·L_u

wherein L denotes the total loss function, L_s the supervised loss function, L_u the semi-supervised loss function, and λ a balance factor between the supervised and semi-supervised loss functions, which is a hyper-parameter. CE denotes the cross-entropy loss function and IoU the regression loss function. X^cls_(h,w), X^reg_(h,w) and X^obj_(h,w) denote the classification score, regression score and objectness score at the label position (h, w) on the feature map obtained by the student model; Y^cls_(h,w), Y^reg_(h,w) and Y^obj_(h,w) denote the classification score, the regression score at the pseudo-label position, and the objectness score at the label position (h, w) on the feature map obtained by the teacher model. The supervised loss L_s is computed analogously on the labeled images using the real labels.

The semi-supervised loss decomposes as

L_u = L_u^cls + L_u^reg + L_u^obj

wherein L_u^cls is the classification loss, L_u^reg the regression loss and L_u^obj the objectness loss:

L_u^cls = Σ_(h,w) CE(X^cls_(h,w), Ŷ^cls_(h,w)),
L_u^reg = Σ_(h,w) IoU(X^reg_(h,w), Ŷ^reg_(h,w)),
L_u^obj = Σ_(h,w) CE(X^obj_(h,w), Ŷ^obj_(h,w)),

where Ŷ^cls_(h,w), Ŷ^reg_(h,w) and Ŷ^obj_(h,w) denote the classification score, regression score and objectness score obtained from the adaptive pseudo-label assignment samples at the positions (h, w) of the reliable pseudo labels on the feature map.
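As a numerical illustration of the total loss L = L_s + λL_u, the sketch below evaluates the three per-position terms (cross-entropy on class scores, an IoU-based loss on boxes, cross-entropy on objectness) at a single (h, w) position. It is a toy example rather than the patent's implementation: real training sums over all label positions, the semi-supervised targets come from the teacher's reliable pseudo labels, and λ = 2.0 is arbitrary.

```python
import numpy as np

def ce(p, y, eps=1e-9):
    """Cross-entropy between predicted probabilities p and one-hot target y."""
    return float(-np.sum(y * np.log(p + eps)))

def iou_loss(box_a, box_b):
    """1 - IoU for axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return 1.0 - inter / (area(box_a) + area(box_b) - inter)

def position_loss(pred, target):
    """CE on class scores + IoU loss on box + CE on objectness at one (h, w)."""
    return (ce(pred['cls'], target['cls'])
            + iou_loss(pred['reg'], target['reg'])
            + ce(pred['obj'], target['obj']))

student_pred = {'cls': np.array([0.8, 0.2]), 'reg': [0, 0, 2, 2],
                'obj': np.array([0.9])}
gt           = {'cls': np.array([1.0, 0.0]), 'reg': [0, 0, 2, 2],
                'obj': np.array([1.0])}
pseudo       = {'cls': np.array([1.0, 0.0]), 'reg': [0, 0, 1, 2],
                'obj': np.array([1.0])}

lam = 2.0
L_s = position_loss(student_pred, gt)       # supervised term (real label)
L_u = position_loss(student_pred, pseudo)   # semi-supervised term (pseudo label)
L = L_s + lam * L_u
print(round(L, 4))
```

Note how the imperfect pseudo box ([0, 0, 1, 2] vs the predicted [0, 0, 2, 2]) contributes an IoU loss of 0.5, making L_u larger than L_s here.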
In one embodiment, the first strong data enhancement process includes a Mixup data enhancement process.
In one embodiment, the second strong data enhancement process includes a Mosaic data enhancement process.
In a second aspect, in one embodiment, a method for training an aircraft skin defect detection network model is provided, including:
data acquisition: collecting aircraft skin images, including acquiring aircraft skin surface images under different illumination conditions and at different damage stages;
data classification: classifying the acquired image data according to defect type, selecting within each class the images meeting the definition (clarity) requirement to form a first image dataset, and forming a second image dataset from the remaining images;
image preprocessing: preprocessing the images in the first image dataset, including filtering and denoising;
defect labeling: performing defect labeling on the preprocessed images;
constructing a semi-supervised defect detection network model based on dynamic decoupling attention, wherein the semi-supervised defect detection network model comprises a teacher model and a student model;
and taking the images in the second image dataset and the defect-labeled images in the first image dataset as inputs, performing model training based on the teacher model and the student model in combination with the dynamic attention semi-supervised loss function to obtain a trained student model, and taking the student model as the aircraft skin defect detection network.
In one embodiment, the training of the model by combining the dynamic attention semi-supervised loss function based on the teacher model and the student model by taking the image in the second image data set and the image in the first image data set after the defect labeling as inputs includes:
performing first strong data enhancement processing on the images in the second image data set;
inputting the image subjected to the first strong data enhancement processing, together with the exponential moving average obtained from the student model, into the teacher model for pseudo-label assignment processing;
performing adaptive label assignment processing on the pseudo-label-assigned feature maps to obtain reliable pseudo labels and uncertain pseudo labels;
performing second strong data enhancement processing on the image subjected to the first strong data enhancement processing;
performing weak data enhancement processing on the images in the first image dataset after the defect labeling;
inputting the image subjected to the second strong data enhancement processing, the image subjected to the weak data enhancement processing, and the fed-back loss value into the student model for exponential moving average processing and real-label assignment processing, so as to correspondingly obtain an exponential moving average and a feature map with real labels;
and calculating a loss value based on the loss function, the reliable pseudo labels, the feature parameters of the pseudo-label-assigned feature maps and the feature parameters of the feature map with real labels, and feeding the calculated loss value back to the student model for training.
In one embodiment, the calculating the loss value based on the loss function, the reliable pseudo tag, the feature parameter of the feature map after the pseudo tag assignment process, and the feature parameter of the feature map with the real tag includes:
L = L_s + λ·L_u

wherein L denotes the total loss function, L_s the supervised loss function, L_u the semi-supervised loss function, and λ a balance factor between the supervised and semi-supervised loss functions, which is a hyper-parameter. CE denotes the cross-entropy loss function and IoU the regression loss function. X^cls_(h,w), X^reg_(h,w) and X^obj_(h,w) denote the classification score, regression score and objectness score at the label position (h, w) on the feature map obtained by the student model; Y^cls_(h,w), Y^reg_(h,w) and Y^obj_(h,w) denote the classification score, the regression score at the pseudo-label position, and the objectness score at the label position (h, w) on the feature map obtained by the teacher model. The supervised loss L_s is computed analogously on the labeled images using the real labels.

The semi-supervised loss decomposes as

L_u = L_u^cls + L_u^reg + L_u^obj

wherein L_u^cls is the classification loss, L_u^reg the regression loss and L_u^obj the objectness loss:

L_u^cls = Σ_(h,w) CE(X^cls_(h,w), Ŷ^cls_(h,w)),
L_u^reg = Σ_(h,w) IoU(X^reg_(h,w), Ŷ^reg_(h,w)),
L_u^obj = Σ_(h,w) CE(X^obj_(h,w), Ŷ^obj_(h,w)),

where Ŷ^cls_(h,w), Ŷ^reg_(h,w) and Ŷ^obj_(h,w) denote the classification score, regression score and objectness score obtained from the adaptive pseudo-label assignment samples at the positions (h, w) of the reliable pseudo labels on the feature map.
The beneficial effects of the invention are as follows:
based on the dynamic decoupling detection process, the feature extraction capability is enhanced, and the detection precision of the small defects is improved. In addition, the aircraft skin defect detection network based on the detection method is obtained by training an aircraft skin defect detection network model based on the constructed dynamic attention semi-supervision loss function, and can be obtained by training the unlabeled image data, so that the cost of manually labeling the data is greatly reduced.
Drawings
FIG. 1 is a flow chart of a method for inspecting aircraft skin defects according to one embodiment of the present application;
FIG. 2 is a schematic illustration of an aircraft skin defect detection network according to one embodiment of the present application;
FIG. 3 is a schematic diagram of the decoupling detection network of FIG. 2 of the present application;
FIG. 4 is a schematic diagram of a defect detection process performed by inputting an object to be detected into an aircraft skin defect detection network according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a teacher model in an aircraft skin defect detection network training model according to one embodiment of the present application;
FIG. 6 is a schematic diagram of a student model structure in an aircraft skin defect detection network training model according to one embodiment of the present application;
FIG. 7 is a schematic structural diagram of an aircraft skin defect detection network training model according to one embodiment of the present application.
Detailed Description
The invention will be described in further detail below with reference to the drawings by means of specific embodiments. Wherein like elements in different embodiments are numbered alike in association. In the following embodiments, numerous specific details are set forth in order to provide a better understanding of the present application. However, one skilled in the art will readily recognize that some of the features may be omitted, or replaced by other elements, materials, or methods in different situations. In some instances, some operations associated with the present application have not been shown or described in the specification to avoid obscuring the core portions of the present application, and may not be necessary for a person skilled in the art to describe in detail the relevant operations based on the description herein and the general knowledge of one skilled in the art.
Furthermore, the described features, operations, or characteristics of the description may be combined in any suitable manner in various embodiments. Also, various steps or acts in the method descriptions may be interchanged or modified in a manner apparent to those of ordinary skill in the art. Thus, the various orders in the description and drawings are for clarity of description of only certain embodiments, and are not meant to be required orders unless otherwise indicated.
The numbering of the components itself, e.g. "first", "second", etc., is used herein merely to distinguish between the described objects and does not have any sequential or technical meaning.
For convenience in explaining the inventive concept of the present application, a brief explanation of the aircraft skin defect detection technique is provided below.
Traditional aircraft skin defect detection relies mainly on visual inspection by inspectors; this approach has low detection efficiency and is strongly affected by the inspectors' subjective judgment. In recent years, with the development of computer vision and deep learning, deep-learning-based aircraft skin defect detection has been widely studied and applied. However, most current mainstream methods rely on fully supervised training, which requires a large amount of labeled data to train a model: on the one hand, it is difficult to collect large numbers of aircraft skin defect images, and on the other hand, labeling the images is very time-consuming.
Based on the above, the application provides an aircraft skin defect detection method, and the aircraft skin defect detection network model based on the method can be obtained by training label-free image data, so that the cost of manually marking the data is greatly reduced, and in addition, the introduced dynamic attention can improve the detection capability of small defects. Referring to fig. 1, the defect detection method includes:
step S10, an aircraft skin image is acquired as an object to be detected.
And S20, inputting the object to be detected into an aircraft skin defect detection network to detect the defects. Referring to fig. 2, the defect detection process includes:
processing the object to be detected by a YOLOV7 network to obtain three feature maps of different scales, comprising: a first feature map, a second feature map, and a third feature map. The YOLOV7 network comprises a backbone feature extraction network and a detection head module; the input object to be detected passes through backbone feature extraction and detection head processing to obtain the first, second and third feature maps at three different scales. In fig. 2, H and W denote the height and width of a feature map.
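YOLOv7-style detectors commonly predict at strides 8, 16 and 32, which is one plausible reading of the three scales above; the patent does not state the strides, so the values here are an assumption used only to illustrate how the three map sizes relate to the input size.

```python
# Assumed YOLOv7-style strides: a 640x640 input would yield 80x80, 40x40
# and 20x20 feature maps (the H and W of Fig. 2 at each scale).
def feature_map_sizes(h, w, strides=(8, 16, 32)):
    return [(h // s, w // s) for s in strides]

print(feature_map_sizes(640, 640))   # [(80, 80), (40, 40), (20, 20)]
```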
And respectively carrying out dynamic attention processing on the first feature map, the second feature map and the third feature map to correspondingly obtain a first dynamic attention feature map, a second dynamic attention feature map and a third dynamic attention feature map.
And respectively performing decoupling detection processing on the first dynamic attention feature map, the second dynamic attention feature map and the third dynamic attention feature map to obtain a corresponding first decoupling feature map, a corresponding second decoupling feature map and a corresponding third decoupling feature map.
In one embodiment, referring to fig. 3, for any dynamic attention profile, the decoupling detection process includes:
The input feature map is first subjected to a 3×3 convolution and then split into two branches; in one branch, two further 3×3 convolutions are followed by one 1×1 convolution to obtain a feature map with classification scores; in the other branch, two further 3×3 convolutions are followed by a split into two sub-branches, each processed by one 1×1 convolution, so as to obtain a feature map with regression scores and a feature map with objectness scores respectively.
The decoupling detection process combines scale-aware attention, spatial-aware attention and task-aware attention to form a dynamic decoupling detection process, which enhances the feature extraction capability and improves the detection precision for small defects.
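In the spirit of such unified dynamic heads, the three attentions can be sketched as sequential gates over the level, position and channel axes of a feature tensor. This is a schematic stand-in only: the gates below are simple pooled sigmoids, not the learned modulations an actual dynamic attention head would use.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dynamic_attention(F):
    """Apply scale-, spatial- and task-aware gating in sequence to F of
    shape (levels L, positions S, channels C). Each gate is a pooled
    sigmoid here, standing in for a learned attention module."""
    pi_scale = sigmoid(F.mean(axis=(1, 2)))[:, None, None]    # over levels
    F = F * pi_scale
    pi_spatial = sigmoid(F.mean(axis=(0, 2)))[None, :, None]  # over positions
    F = F * pi_spatial
    pi_task = sigmoid(F.mean(axis=(0, 1)))[None, None, :]     # over channels
    return F * pi_task

F = np.random.default_rng(1).standard_normal((3, 16, 8))
out = dynamic_attention(F)
print(out.shape)   # gating preserves the shape: (3, 16, 8)
```

The point of the sequential design is that each gate reweights the tensor along one axis while leaving its shape unchanged, so the three attentions compose freely.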
Feature fusion is then performed on the feature map with classification scores, the feature map with regression scores and the feature map with objectness scores to obtain a decoupled feature map; correspondingly, the obtained first, second and third decoupled feature maps are fused to obtain an image with detection boxes and defect labels.
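Turning fused score maps into a final set of detection boxes typically involves confidence thresholding followed by non-maximum suppression. The patent does not spell this step out, so the following is a standard, assumed post-processing sketch:

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression: keep the best box, drop overlaps."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        ious = np.array([iou(boxes[i], boxes[j]) for j in rest])
        order = rest[ious <= iou_thr]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # [0, 2]
```

The second box overlaps the first with IoU 0.81 and is suppressed; the disjoint third box survives.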
And step S30, taking the image with the detection frame and the defect label as a result of aircraft skin defect detection.
The aircraft skin defect detection network is obtained by training an aircraft skin defect detection network model based on the constructed dynamic attention semi-supervision loss function.
Based on the above aircraft skin defect detection method, the dynamic decoupling detection process formed by combining scale-aware attention, spatial-aware attention and task-aware attention enhances the feature extraction capability and improves the detection precision for small defects. In addition, the aircraft skin defect detection network used by the detection method is obtained by training an aircraft skin defect detection network model based on the constructed dynamic attention semi-supervised loss function, and can be trained with unlabeled image data, so that the cost of manually labeling data is greatly reduced.
In one embodiment, a method for training an aircraft skin defect detection network model is provided, please refer to fig. 4, which includes:
step S201, data acquisition, acquiring an aircraft skin image, including: and acquiring the surface images of the aircraft skin at different damage stages under different illumination conditions.
Because the aircraft skin surface images at different damage stages are acquired under different illumination conditions, the accuracy of the trained network model in detecting the aircraft skin images under different illumination conditions can be improved.
Step S202, classifying the data, classifying the collected image data according to the defect types, and forming a first image data set by selecting partial images meeting the defect definition requirement in each classification, and forming a second image data set by the rest images.
The defect types of the aircraft skin can be classified into scratch, rivet damage, paint peeling, rust and the like. The acquired defect images are sorted by defect class; within each class, a part of the images with higher definition forms the first image dataset, and the remaining images form the second image dataset. The definition requirement may be set according to actual needs, or according to a required ratio between the sizes of the first and second image datasets.
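As an illustrative sketch (not part of the claimed embodiment), the per-class split into a labeled first dataset and an unlabeled second dataset can be expressed as follows; the file names, sharpness scores and split ratio are hypothetical stand-ins for the actual definition requirement:

```python
# Hypothetical sketch: split classified defect images into a first (to-be-labeled)
# dataset of the sharpest images per class and a second (unlabeled) dataset of the rest.
# File names and the sharpness metric are illustrative assumptions.

def split_datasets(images_by_class, labeled_ratio=0.3):
    """images_by_class: {defect_class: [(filename, sharpness_score), ...]}"""
    first, second = [], []
    for defect_class, images in images_by_class.items():
        # Sort each class by descending sharpness; the clearest images get labeled.
        ranked = sorted(images, key=lambda item: item[1], reverse=True)
        n_labeled = max(1, int(len(ranked) * labeled_ratio))
        first += [(name, defect_class) for name, _ in ranked[:n_labeled]]
        second += [name for name, _ in ranked[n_labeled:]]
    return first, second

images = {
    "scratch": [("s1.jpg", 0.9), ("s2.jpg", 0.4), ("s3.jpg", 0.7)],
    "rust": [("r1.jpg", 0.8), ("r2.jpg", 0.3)],
}
first_set, second_set = split_datasets(images, labeled_ratio=0.4)
print(first_set)   # sharpest images per class, kept with their class name
print(second_set)  # remaining images, used without labels
```

Setting `labeled_ratio` directly realizes the alternative mentioned above of choosing the definition requirement via the ratio between the two datasets.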
Step S203, image preprocessing: preprocessing the images in the first image dataset, including filtering and denoising.
Through preprocessing such as filtering and denoising, aircraft skin images meeting the requirements are obtained.
Step S204, defect labeling: performing defect labeling on the preprocessed images.
In one embodiment, the aircraft skin surface images in the preprocessed first image dataset may be annotated using the LabelImg image labeling tool, with the annotations stored in files with a .txt suffix. During defect labeling, bounding boxes (such as rectangular boxes) can be used to select the aircraft skin surface defects, and labels are assigned according to the type of each surface defect.
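The embodiment stores annotations in .txt files but does not fix their internal format; assuming the common YOLO-style layout that LabelImg can emit (one line per box: class index plus normalized center coordinates and size), a label line could be produced as sketched below. The class list and helper function are illustrative assumptions:

```python
# Sketch of a YOLO-format .txt annotation line: "class_id x_center y_center width height",
# with all coordinates normalized to [0, 1]. Class indices are illustrative assumptions.
DEFECT_CLASSES = ["scratch", "rivet_damage", "paint_peeling", "rust"]

def to_yolo_line(class_name, box, img_w, img_h):
    """box = (x_min, y_min, x_max, y_max) in pixels."""
    x_min, y_min, x_max, y_max = box
    x_c = (x_min + x_max) / 2 / img_w   # normalized box center
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w          # normalized box size
    h = (y_max - y_min) / img_h
    return f"{DEFECT_CLASSES.index(class_name)} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

line = to_yolo_line("rust", (100, 200, 300, 400), img_w=1000, img_h=1000)
print(line)  # "3 0.200000 0.300000 0.200000 0.200000"
```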
In one embodiment, the method further comprises using data enhancement methods to expand and enrich the defect-labeled images. The data enhancement methods include any one or more of flipping, rotating, scaling, Mosaic enhancement and MixUp enhancement.
In step S205, a semi-supervised defect detection network model based on dynamic decoupling attention is constructed, and the semi-supervised defect detection network model includes a teacher model and a student model.
The constructed semi-supervised defect detection network model based on dynamic decoupling attention allows the network model to utilize unlabeled image data. The introduced dynamic decoupling detector can improve the detection capability of the model on small defects, so that the detection accuracy of the model is further improved.
Step S206, training a network model, namely taking the image in the second image data set and the image in the first image data set after the defect labeling as inputs, carrying out model training by combining a dynamic attention semi-supervision loss function based on the teacher model and the student model to obtain a trained student model, and taking the student model as an aircraft skin defect detection network.
In one embodiment, referring to fig. 5, 6 and 7, the specific method of step S206 may include:
The images in the second image dataset are subjected to a first strong data enhancement processing. In one embodiment, the first strong data enhancement processing comprises MixUp data enhancement.
The image subjected to the first strong data enhancement processing and the exponential moving average value obtained based on the student model are input into the teacher model for pseudo-label assignment processing.
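MixUp data enhancement forms a convex combination of two images. A minimal pure-Python sketch on toy pixel grids (stand-ins for real image arrays; the Beta-distribution parameter is an assumed typical choice, not specified by the embodiment):

```python
import random

def mixup(img_a, img_b, alpha=8.0):
    """MixUp: pixel-wise combination lam*a + (1-lam)*b with lam ~ Beta(alpha, alpha).
    Images are nested lists of equal shape, standing in for HxW arrays."""
    lam = random.betavariate(alpha, alpha)
    mixed = [[lam * a + (1 - lam) * b for a, b in zip(row_a, row_b)]
             for row_a, row_b in zip(img_a, img_b)]
    return mixed, lam

img_a = [[0.0, 1.0], [1.0, 0.0]]
img_b = [[1.0, 0.0], [0.0, 1.0]]
mixed, lam = mixup(img_a, img_b)
print(lam, mixed)  # every mixed pixel lies between the two source pixels
```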
In one embodiment, referring to the teacher model of fig. 5, the processing of the teacher model includes taking as input the MixUp-enhanced images of the second image dataset and the exponential moving average of the student model parameters, and obtaining three feature maps of different scales after processing by the YOLOV7 network: a first feature map, a second feature map and a third feature map. Dynamic attention processing is performed on the first, second and third feature maps to correspondingly obtain a first, second and third dynamic attention feature map. Decoupled detection processing is then performed on each of the three dynamic attention feature maps.
In the decoupled detection processing, referring to fig. 2, the input feature map is first subjected to a 3×3 convolution and then split into two branches. In one branch, two further 3×3 convolutions are followed by a 1×1 convolution to obtain a feature map with classification scores. In the other branch, two further 3×3 convolutions are followed by a split into two sub-branches, each processed by a 1×1 convolution, to obtain a feature map with regression scores and a feature map with objectness scores, respectively.
Feature fusion is performed on the feature maps with classification scores obtained by the decoupled detection processing to obtain the classification score \(X^{cls}_{t,(h,w)}\) at the position (h, w) of a pseudo label on the feature map; feature fusion is performed on the feature maps with regression scores to obtain the regression score \(X^{reg}_{t,(h,w)}\); and feature fusion is performed on the feature maps with objectness scores to obtain the objectness score \(X^{obj}_{t,(h,w)}\) at the position (h, w) of a pseudo label on the feature map.
The pseudo-label-assigned feature map is then subjected to adaptive label assignment processing to obtain reliable pseudo labels and uncertain pseudo labels. The adaptive label assignment processing may be implemented with existing methods, and the obtained reliable pseudo labels can be used for calculating the loss function: based on the reliable pseudo labels, the classification score \(\hat{X}^{cls}_{t,(h,w)}\), the regression score \(\hat{X}^{reg}_{t,(h,w)}\) and the objectness score \(\hat{X}^{obj}_{t,(h,w)}\) sampled from adaptive pseudo-label assignment at the position (h, w) of a reliable pseudo label on the feature map can be obtained.
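The adaptive label assignment itself is delegated to existing methods. Purely as an illustrative stand-in (the threshold and the confidence rule below are assumptions, not the embodiment's actual assigner), pseudo labels might be partitioned into reliable and uncertain sets as follows:

```python
# Illustrative sketch only: partition teacher pseudo labels into reliable and
# uncertain sets by an assumed confidence rule (score = cls_score * obj_score).
# The threshold and scoring are hypothetical, not the patent's actual assigner.

def assign_pseudo_labels(pseudo_labels, reliable_thresh=0.7):
    reliable, uncertain = [], []
    for label in pseudo_labels:
        confidence = label["cls_score"] * label["obj_score"]
        (reliable if confidence >= reliable_thresh else uncertain).append(label)
    return reliable, uncertain

pseudo_labels = [
    {"pos": (3, 5), "cls_score": 0.95, "obj_score": 0.9},  # confident detection
    {"pos": (7, 2), "cls_score": 0.60, "obj_score": 0.5},  # ambiguous detection
]
reliable, uncertain = assign_pseudo_labels(pseudo_labels)
print(len(reliable), len(uncertain))  # 1 1
```

Only the reliable set would then enter the semi-supervised loss, matching the role of the reliable pseudo labels described above.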
A second strong data enhancement processing is performed on the images already subjected to the first strong data enhancement processing. In one embodiment, the second strong data enhancement processing comprises Mosaic data enhancement.
And carrying out weak data enhancement processing on the images in the first image data set after the defect labeling. In one embodiment, the weak data enhancement process includes any one or more of flipping, rotating, and scaling.
The images subjected to the second strong data enhancement processing, the weak-data-enhanced images and the fed-back loss value are input into the student model for exponential moving average processing and real-label assignment processing, so as to correspondingly obtain an exponential moving average value and a feature map with real labels.
Referring to the student model of fig. 6, the processing of the student model includes taking as input the images of the second image dataset after the second strong data enhancement processing, the images of the first image dataset after the weak data enhancement processing, and the fed-back loss value, and obtaining three feature maps of different scales after processing by the YOLOV7 network: a first feature map, a second feature map and a third feature map. Dynamic attention processing is performed on the first, second and third feature maps to correspondingly obtain a first, second and third dynamic attention feature map. Decoupled detection processing is then performed on each of the three dynamic attention feature maps.
In the decoupled detection processing, referring to fig. 2, the input feature map is first subjected to a 3×3 convolution and then split into two branches. In one branch, two further 3×3 convolutions are followed by a 1×1 convolution to obtain a feature map with classification scores. In the other branch, two further 3×3 convolutions are followed by a split into two sub-branches, each processed by a 1×1 convolution, to obtain a feature map with regression scores and a feature map with objectness scores, respectively.
Feature fusion is performed on the feature maps with classification scores obtained by the decoupled detection processing to obtain the classification score \(X^{cls}_{s,(h,w)}\) at the position (h, w) of a label on the feature map; feature fusion is performed on the feature maps with regression scores to obtain the regression score \(X^{reg}_{s,(h,w)}\); and feature fusion is performed on the feature maps with objectness scores to obtain the objectness score \(X^{obj}_{s,(h,w)}\) at the position (h, w) of a label on the feature map. The obtained scores are used, together with other parameters, to calculate the exponential moving average for updating the teacher model and to calculate the loss value.
The exponential moving average is a parameter smoothing technique that reduces the fluctuation noise of the optimized parameters and makes the parameters more likely to approach a local minimum. In one embodiment, the exponential moving average is calculated using existing methods.
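The exponential moving average update by which the student's weights smooth the teacher's parameters can be sketched as follows; the decay value and parameter name are illustrative assumptions:

```python
# Sketch of the EMA update: teacher parameters track a smoothed copy of the
# student's parameters, damping the optimization noise mentioned above.
# decay=0.99 is an assumed typical value; the parameter name is illustrative.

def ema_update(teacher_params, student_params, decay=0.99):
    for name, student_value in student_params.items():
        teacher_params[name] = decay * teacher_params[name] + (1 - decay) * student_value
    return teacher_params

teacher = {"conv1.weight": 1.0}
student = {"conv1.weight": 2.0}
teacher = ema_update(teacher, student, decay=0.9)
print(teacher["conv1.weight"])  # 0.9*1.0 + 0.1*2.0 = 1.1
```

A decay close to 1 keeps the teacher stable across noisy student updates, which is why the teacher's pseudo labels are treated as the more trustworthy supervision signal.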
A loss value is calculated based on the loss function, the reliable pseudo labels, the feature parameters of the pseudo-label-assigned feature map, and the feature parameters of the feature map with real labels, and the calculated loss value is fed back to the student model for training. Calculating the loss value comprises:
\(L = L_s + \lambda L_u\)
wherein L represents the total loss function, \(L_s\) represents the supervised loss function, \(L_u\) represents the semi-supervised loss function, and λ is a balance factor between the supervised and semi-supervised loss functions, which is a hyper-parameter; CE is the cross-entropy loss function and IoU is the regression loss function; \(X^{cls}_{s,(h,w)}\), \(X^{reg}_{s,(h,w)}\) and \(X^{obj}_{s,(h,w)}\) respectively represent the classification score, the regression score and the objectness score at the position (h, w) of a label on the feature map obtained by the student model; \(X^{cls}_{t,(h,w)}\), \(X^{reg}_{t,(h,w)}\) and \(X^{obj}_{t,(h,w)}\) respectively represent the classification score, the regression score and the objectness score at the position (h, w) of a pseudo label on the feature map obtained by the teacher model;
\(L_u = L^{cls}_u + L^{reg}_u + L^{obj}_u\)
wherein \(L^{cls}_u = CE\big(X^{cls}_{s,(h,w)},\, \hat{X}^{cls}_{t,(h,w)}\big)\) is the classification loss, \(L^{reg}_u = IoU\big(X^{reg}_{s,(h,w)},\, \hat{X}^{reg}_{t,(h,w)}\big)\) is the regression loss, and \(L^{obj}_u = CE\big(X^{obj}_{s,(h,w)},\, \hat{X}^{obj}_{t,(h,w)}\big)\) is the objectness loss; \(\hat{X}^{cls}_{t,(h,w)}\), \(\hat{X}^{reg}_{t,(h,w)}\) and \(\hat{X}^{obj}_{t,(h,w)}\) respectively represent the classification score, the regression score and the objectness score sampled from adaptive pseudo-label assignment at the positions (h, w) of reliable pseudo labels on the feature map.
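Numerically, the total loss L = L_s + λ·L_u is the supervised branch plus λ times the semi-supervised branch, each branch summing its classification, regression and objectness terms. A toy sketch with hypothetical scalar scores (a binary cross-entropy stands in for CE, and the regression terms are given as fixed values):

```python
import math

def cross_entropy(pred, target):
    """Binary cross-entropy for a single scalar score (illustrative stand-in for CE)."""
    eps = 1e-12
    return -(target * math.log(pred + eps) + (1 - target) * math.log(1 - pred + eps))

def total_loss(supervised_terms, unsupervised_terms, lam=2.0):
    """L = L_s + lam * L_u, each branch being the sum of its cls/reg/obj terms."""
    return sum(supervised_terms) + lam * sum(unsupervised_terms)

# Hypothetical term values: (cls, reg, obj) for the supervised branch against
# real labels, and for the semi-supervised branch against reliable pseudo labels.
l_s_terms = (cross_entropy(0.9, 1.0), 0.2, cross_entropy(0.8, 1.0))
l_u_terms = (cross_entropy(0.7, 1.0), 0.3, cross_entropy(0.6, 1.0))
print(total_loss(l_s_terms, l_u_terms, lam=0.5))
```

The balance factor λ simply rescales how strongly the pseudo-label supervision contributes relative to the ground-truth supervision.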
In the above process, the semi-supervised aircraft skin defect detection model framework based on dynamic attention first takes unlabeled skin image data and labeled skin image data as simultaneous inputs. The two kinds of image data are then enhanced with different strategies: the unlabeled skin image data undergoes MixUp data enhancement, and the enhanced dataset is input directly into the teacher model (a dynamic decoupled detector based on dynamic attention); the unlabeled data also undergoes strong data enhancement, while the labeled skin image data undergoes weak data enhancement, and both are input together into the student model (likewise a dynamic decoupled detector based on dynamic attention). Pseudo labels are assigned to the unlabeled skin image data according to the training of the teacher model, and after generation the pseudo labels are divided into reliable pseudo labels and uncertain pseudo labels by a pseudo label assigner (Pseudo Label Assigner). The loss between the two kinds of pseudo labels and the real labels of the labeled skin image data is then calculated in the student model to optimize its training; during training, the student model updates the parameters of the teacher model through the exponential moving average.
The dynamic decoupled detector based on dynamic attention is mainly used to construct the teacher model and the student model in the semi-supervised aircraft skin defect detection model. The network of the teacher model introduces dynamic attention and a decoupled detection head on the basis of the YOLOV7 model. The inputs of the teacher model are the data-enhanced unlabeled skin image data and the exponential moving average of the student model weights; the exponential moving average of the student model weights is used to update and optimize the parameters of the teacher model. After a series of feature extraction operations, the unlabeled skin image data is fed into the decoupled detection head, whose output is three feature maps of different scales carrying pseudo labels. The network structure of the student model is consistent with that of the teacher model; its inputs are the unlabeled skin image data after strong data enhancement, the labeled skin image data after weak data enhancement, and the gradient of the loss function, which is used to optimize the parameters of the student model for better training. After several feature extraction operations, the unlabeled and labeled skin images are fed into the decoupled detection head, whose output is again three feature maps of different scales. Because labeled skin images are input into the student model, the output feature maps contain the category and position information of the defects; this information is extracted, the loss against the category and position information of the pseudo labels output by the teacher model is calculated, and the gradient of the loss function is back-propagated into the student model to optimize it. The other output of the student model is the weights of the network parameters during training; the exponential moving average of these weights is passed into the teacher model to update and optimize its parameters.
An embodiment of the present application provides a computer-readable storage medium on which a program is stored, wherein the stored program, when loaded and executed by a processor, implements the method of any of the above embodiments.
Those skilled in the art will appreciate that all or part of the functions of the various methods in the above embodiments may be implemented by hardware or by a computer program. When all or part of the functions in the above embodiments are implemented by means of a computer program, the program may be stored in a computer-readable storage medium, which may include read-only memory, random access memory, a magnetic disk, an optical disk, a hard disk and the like; the above functions are realized when the program is executed by a computer. For example, the program may be stored in the memory of a device, and all or part of the functions described above can be realized when the program in the memory is executed by a processor. In addition, the program may also be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk or a removable hard disk, and downloaded or copied into the memory of a local device, or used to update the version of the local device's system; all or part of the functions described above can likewise be realized when the program in the memory is executed by a processor.
The foregoing description of the invention has been presented for purposes of illustration and description, and is not intended to be limiting. Several simple deductions, modifications or substitutions may also be made by a person skilled in the art to which the invention pertains, based on the idea of the invention.
Claims (10)
1. The method for detecting the defects of the aircraft skin is characterized by comprising the following steps of:
acquiring an aircraft skin image as an object to be detected;
inputting an object to be detected into an aircraft skin defect detection network for defect detection, wherein the method comprises the following steps:
processing an object to be detected by a YOLOV7 network to obtain three feature graphs with different dimensions, wherein the feature graphs comprise: a first feature map, a second feature map, and a third feature map;
respectively carrying out dynamic attention processing on the first feature map, the second feature map and the third feature map to correspondingly obtain a first dynamic attention feature map, a second dynamic attention feature map and a third dynamic attention feature map;
respectively performing decoupling detection processing on the first dynamic attention feature map, the second dynamic attention feature map and the third dynamic attention feature map to obtain a corresponding first decoupling feature map, a corresponding second decoupling feature map and a corresponding third decoupling feature map;
performing feature fusion on the first decoupling feature map, the second decoupling feature map and the third decoupling feature map to obtain an image with a detection frame and a defect label;
taking the image with the detection frame and the defect label as a result of aircraft skin defect detection;
and performing model training on the aircraft skin defect detection network based on the constructed dynamic attention semi-supervision loss function to obtain the aircraft skin defect detection network.
2. The aircraft skin defect detection method of claim 1, wherein the decoupling detection process comprises, for any one of the dynamic attention profile:
the input feature map is first subjected to a 3×3 convolution and then split into two branches; in one branch, two further 3×3 convolutions are followed by a 1×1 convolution to obtain a feature map with classification scores; in the other branch, two further 3×3 convolutions are followed by a split into two sub-branches, each processed by a 1×1 convolution, to obtain a feature map with regression scores and a feature map with objectness scores, respectively;
and performing feature fusion on the feature map with classification scores, the feature map with regression scores and the feature map with objectness scores to obtain a decoupling feature map.
3. The aircraft skin defect detection method of claim 1, wherein the training method of the aircraft skin defect detection network model comprises:
data acquisition, acquiring aircraft skin images, comprising: acquiring aircraft skin surface images under different illumination conditions and at different damage stages;
data classification, classifying the acquired image data according to defect type, selecting, in each class, part of the images meeting a definition requirement to form a first image dataset, and forming a second image dataset from the remaining images;
image preprocessing, namely preprocessing an image in a first image data set, including filtering and denoising preprocessing;
performing defect marking, namely performing defect marking on the preprocessed image;
constructing a semi-supervised defect detection network model based on dynamic decoupling attention, wherein the semi-supervised defect detection network model comprises a teacher model and a student model;
and training a network model, namely taking the image in the second image data set and the image in the first image data set after the defect labeling as inputs, carrying out model training by combining a dynamic attention semi-supervision loss function based on the teacher model and the student model so as to obtain a trained student model, and taking the student model as an aircraft skin defect detection network.
4. A method for inspecting aircraft skin defects according to claim 3, wherein said model training with dynamic attention semi-supervised loss function based on said teacher model and student model using as input the images in the second image dataset and the images in the first image dataset after defect labeling comprises:
performing first strong data enhancement processing on the images in the second image data set;
inputting the image subjected to the first strong data enhancement processing and the exponential moving average value obtained based on the student model into the teacher model for pseudo-label assignment processing;
performing adaptive label assignment processing on the pseudo-label-assigned feature map to obtain reliable pseudo labels and uncertain pseudo labels;
performing second strong data enhancement processing on the image subjected to the first strong data enhancement processing;
performing weak data enhancement processing on the images in the first image dataset after the defect labeling;
inputting the image subjected to the second strong data enhancement processing, the image subjected to the weak data enhancement processing and the fed-back loss value into the student model for exponential moving average processing and real-label assignment processing, so as to correspondingly obtain an exponential moving average value and a feature map with real labels;
and calculating a loss value based on the loss function, the reliable pseudo labels, the feature parameters of the pseudo-label-assigned feature map and the feature parameters of the feature map with real labels, and feeding the calculated loss value back to the student model for training.
5. The aircraft skin defect detection method according to claim 4, wherein calculating the loss value based on the loss function, the reliable pseudo labels, the feature parameters of the pseudo-label-assigned feature map and the feature parameters of the feature map with real labels comprises:
\(L = L_s + \lambda L_u\)
wherein L represents the total loss function, \(L_s\) represents the supervised loss function, \(L_u\) represents the semi-supervised loss function, and λ is a balance factor between the supervised and semi-supervised loss functions, which is a hyper-parameter; CE is the cross-entropy loss function and IoU is the regression loss function; \(X^{cls}_{s,(h,w)}\), \(X^{reg}_{s,(h,w)}\) and \(X^{obj}_{s,(h,w)}\) respectively represent the classification score, the regression score and the objectness score at the position (h, w) of a label on the feature map obtained by the student model; \(X^{cls}_{t,(h,w)}\), \(X^{reg}_{t,(h,w)}\) and \(X^{obj}_{t,(h,w)}\) respectively represent the classification score, the regression score and the objectness score at the position (h, w) of a pseudo label on the feature map obtained by the teacher model;
\(L_u = L^{cls}_u + L^{reg}_u + L^{obj}_u\)
wherein \(L^{cls}_u = CE\big(X^{cls}_{s,(h,w)},\, \hat{X}^{cls}_{t,(h,w)}\big)\) is the classification loss, \(L^{reg}_u = IoU\big(X^{reg}_{s,(h,w)},\, \hat{X}^{reg}_{t,(h,w)}\big)\) is the regression loss, and \(L^{obj}_u = CE\big(X^{obj}_{s,(h,w)},\, \hat{X}^{obj}_{t,(h,w)}\big)\) is the objectness loss; \(\hat{X}^{cls}_{t,(h,w)}\), \(\hat{X}^{reg}_{t,(h,w)}\) and \(\hat{X}^{obj}_{t,(h,w)}\) respectively represent the classification score, the regression score and the objectness score sampled from adaptive pseudo-label assignment at the positions (h, w) of reliable pseudo labels on the feature map.
6. The aircraft skin defect detection method of claim 4, wherein said first strong data enhancement processing comprises MixUp data enhancement processing.
7. The aircraft skin defect detection method of claim 4, wherein said second strong data enhancement processing comprises Mosaic data enhancement processing.
8. The training method of the aircraft skin defect detection network model is characterized by comprising the following steps of:
data acquisition, acquiring aircraft skin images, comprising: acquiring aircraft skin surface images under different illumination conditions and at different damage stages;
data classification, classifying the acquired image data according to defect type, selecting, in each class, part of the images meeting a definition requirement to form a first image dataset, and forming a second image dataset from the remaining images;
image preprocessing, namely preprocessing an image in a first image data set, including filtering and denoising preprocessing;
performing defect marking, namely performing defect marking on the preprocessed image;
constructing a semi-supervised defect detection network model based on dynamic decoupling attention, wherein the semi-supervised defect detection network model comprises a teacher model and a student model;
taking the images in the second image data set and the images in the first image data set after the defect labeling as inputs, carrying out model training by combining a dynamic attention semi-supervision loss function based on the teacher model and the student model so as to obtain a trained student model, and taking the student model as an aircraft skin defect detection network.
9. The method for detecting aircraft skin defects according to claim 8, wherein said model training with a dynamic attention semi-supervised loss function based on the teacher model and the student model using the images in the second image dataset and the images in the first image dataset after defect labeling as inputs comprises:
performing first strong data enhancement processing on the images in the second image data set;
inputting the image subjected to the first strong data enhancement processing and the exponential moving average value obtained based on the student model into the teacher model for pseudo-label assignment processing;
performing adaptive label assignment processing on the pseudo-label-assigned feature map to obtain reliable pseudo labels and uncertain pseudo labels;
performing second strong data enhancement processing on the image subjected to the first strong data enhancement processing;
performing weak data enhancement processing on the images in the first image dataset after the defect labeling;
inputting the image subjected to the second strong data enhancement processing, the image subjected to the weak data enhancement processing and the fed-back loss value into the student model for exponential moving average processing and real-label assignment processing, so as to correspondingly obtain an exponential moving average value and a feature map with real labels;
and calculating a loss value based on the loss function, the reliable pseudo labels, the feature parameters of the pseudo-label-assigned feature map and the feature parameters of the feature map with real labels, and feeding the calculated loss value back to the student model for training.
10. The training method of the aircraft skin defect detection network model according to claim 9, wherein calculating the loss value based on the loss function, the reliable pseudo labels, the feature parameters of the pseudo-label-assigned feature map and the feature parameters of the feature map with real labels comprises:
\(L = L_s + \lambda L_u\)
wherein L represents the total loss function, \(L_s\) represents the supervised loss function, \(L_u\) represents the semi-supervised loss function, and λ is a balance factor between the supervised and semi-supervised loss functions, which is a hyper-parameter; CE is the cross-entropy loss function and IoU is the regression loss function; \(X^{cls}_{s,(h,w)}\), \(X^{reg}_{s,(h,w)}\) and \(X^{obj}_{s,(h,w)}\) respectively represent the classification score, the regression score and the objectness score at the position (h, w) of a label on the feature map obtained by the student model; \(X^{cls}_{t,(h,w)}\), \(X^{reg}_{t,(h,w)}\) and \(X^{obj}_{t,(h,w)}\) respectively represent the classification score, the regression score and the objectness score at the position (h, w) of a pseudo label on the feature map obtained by the teacher model;
\(L_u = L^{cls}_u + L^{reg}_u + L^{obj}_u\)
wherein \(L^{cls}_u = CE\big(X^{cls}_{s,(h,w)},\, \hat{X}^{cls}_{t,(h,w)}\big)\) is the classification loss, \(L^{reg}_u = IoU\big(X^{reg}_{s,(h,w)},\, \hat{X}^{reg}_{t,(h,w)}\big)\) is the regression loss, and \(L^{obj}_u = CE\big(X^{obj}_{s,(h,w)},\, \hat{X}^{obj}_{t,(h,w)}\big)\) is the objectness loss; \(\hat{X}^{cls}_{t,(h,w)}\), \(\hat{X}^{reg}_{t,(h,w)}\) and \(\hat{X}^{obj}_{t,(h,w)}\) respectively represent the classification score, the regression score and the objectness score sampled from adaptive pseudo-label assignment at the positions (h, w) of reliable pseudo labels on the feature map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311259366.6A CN117252842A (en) | 2023-09-27 | 2023-09-27 | Aircraft skin defect detection and network model training method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117252842A true CN117252842A (en) | 2023-12-19 |
Family
ID=89132751
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311259366.6A Pending CN117252842A (en) | 2023-09-27 | 2023-09-27 | Aircraft skin defect detection and network model training method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117252842A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117726628A (en) * | 2024-02-18 | 2024-03-19 | 青岛理工大学 | Steel surface defect detection method based on semi-supervised target detection algorithm |
CN117726628B (en) * | 2024-02-18 | 2024-04-19 | 青岛理工大学 | Steel surface defect detection method based on semi-supervised target detection algorithm |
CN117788471A (en) * | 2024-02-27 | 2024-03-29 | 南京航空航天大学 | Method for detecting and classifying aircraft skin defects based on YOLOv5 |
CN117788471B (en) * | 2024-02-27 | 2024-04-26 | 南京航空航天大学 | YOLOv 5-based method for detecting and classifying aircraft skin defects |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109857889B (en) | | Image retrieval method, device and equipment and readable storage medium |
US11010838B2 (en) | | System and method for optimizing damage detection results |
KR101856120B1 (en) | | Discovery of merchants from images |
Li et al. | | Localizing and quantifying damage in social media images |
CN117252842A (en) | | Aircraft skin defect detection and network model training method |
WO2020238256A1 (en) | | Weak segmentation-based damage detection method and device |
CN108550054B (en) | | Content quality evaluation method, device, equipment and medium |
CN108711148A (en) | | Intelligent wheel tire defect detection method based on deep learning |
CN112149663A (en) | | RPA and AI combined image character extraction method and device and electronic equipment |
Ghosh et al. | | Automated detection and classification of pavement distresses using 3D pavement surface images and deep learning |
CN111694957B (en) | | Method, equipment and storage medium for classifying problem sheets based on graph neural network |
CN117392042A (en) | | Defect detection method, defect detection apparatus, and storage medium |
CN115019133A (en) | | Method and system for detecting weak targets in images based on self-training and label anti-noise |
CN108805181B (en) | | Image classification device and method based on multi-classification model |
CN111898528B (en) | | Data processing method, device, computer-readable medium and electronic equipment |
CN116468690B (en) | | Deep-learning-based subtype analysis system for invasive non-mucinous lung adenocarcinoma |
Artan et al. | | Car damage analysis for insurance market using convolutional neural networks |
CN113780335B (en) | | Small-sample commodity image classification method, device, equipment and storage medium |
CN111612890A (en) | | Method and device for automatically generating a three-dimensional model from a two-dimensional floor plan, and electronic equipment |
CN112131418A (en) | | Target labeling method, target labeling device and computer-readable storage medium |
CN113887567B (en) | | Vegetable quality detection method, system, medium and equipment |
Ngo et al. | | Designing image processing tools for testing concrete bridges by a drone based on deep learning |
CN117474457B (en) | | Intelligent auxiliary system for emergency-management law-enforcement inspection of hazardous chemicals and industrial and trade equipment |
Lystbæk et al. | | Removing Unwanted Text from Architectural Images with Multi-Scale Deformable Attention-Based Machine Learning |
Chen | | Development of image recognition system for steel defects detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||