CN116452549A - Simplified and unified contact net defect detection method and system - Google Patents
- Publication number
- CN116452549A (application number CN202310430316.3A)
- Authority
- CN
- China
- Prior art keywords
- simplified
- image
- defect detection
- encoder
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0004 — Industrial image inspection
- G06T7/12 — Edge-based segmentation
- G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections
- G06V10/763 — Non-hierarchical clustering techniques, e.g. based on statistics of modelling distributions
- G06V10/764 — Recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G06V10/774 — Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06T2207/10004 — Still image; Photographic image
- G06T2207/20021 — Dividing image into blocks, subimages or windows
- G06T2207/30184 — Infrastructure
- Y02P90/30 — Computing systems specially adapted for manufacturing
Abstract
The invention discloses a simplified and unified contact network defect detection method and system. A mixed cropping scheme is used to augment simplified and unified contact network images; multi-scale features are extracted from the images; an upstream self-supervised model is trained with a multi-scale feature contrast loss and a dense contrast loss; the trained self-supervised model is converted into a downstream defect detection model, which is then trained; and the trained defect detection model detects the contact network images under test. The method yields high-precision detection results and is suited to situations where target samples are scarce and defect categories are numerous.
Description
Technical Field
The invention relates to a simplified and unified contact net defect detection method and system, and belongs to the technical field of image processing.
Background
With the continued advance of modern technology, high-speed rail construction has developed rapidly, and traditional railway overhead contact systems are now being upgraded to the simplified and unified contact network. However, as high-speed rail lines are extended they grow more complex, maintenance becomes very difficult, and defects and faults become more frequent. Detecting defects such as bird nests, foreign matter and missing parts, and promptly locating where the contact network has failed, is therefore essential to ensuring safe operation.
In existing defect detection for the high-speed rail simplified contact network, camera equipment mounted on the contact network collects 2C or 4C images, an analysis room examines the images and manually flags possible defects or faults, the results are forwarded to the maintenance department for recording, and maintenance personnel are dispatched to check each item one by one. This process depends heavily on experienced staff, has a long response time, and is prone to false and missed detections.
In recent years many researchers have studied defect detection for the high-speed rail simplified contact network, but because railway operating environments vary widely, the defect categories involved are very complex, and labelling defect samples is time-consuming and labour-intensive, it has been difficult to build an effective object-detection defect library and detection pipeline. For contact network defect detection it is therefore highly desirable to design a method that maintains good detection performance even when target samples are scarce and defect categories are numerous.
Disclosure of Invention
The invention provides a simplified and unified contact network defect detection method and system that solve the problems identified in the background section.
In order to solve the technical problems, the invention adopts the following technical scheme:
a simplified contact net defect detection method comprises the following steps:
acquiring a simplified and unified contact net image;
inputting the simplified and unified contact net image into a pre-trained defect detection model to obtain a simplified and unified contact net defect detection result; the defect detection model is formed by converting a trained self-supervision model, a mixed cutting mode is adopted to amplify a simplified and unified contact network image in a training set when the self-supervision model is trained, multi-scale feature extraction is carried out on the simplified and unified contact network image, and the training is carried out by utilizing multi-scale feature contrast loss and dense contrast loss according to the extracted multi-scale features.
The method for amplifying the simplified and unified contact net image in the training set by adopting the mixed cutting mode comprises the following steps:
in the preset iteration times, cutting out simplified and unified contact network images in the training set by adopting a random cutting method, and amplifying the simplified and unified contact network images in the training set by using the cut-out images;
and updating parameters of the self-supervision model according to the weight of the self-supervision model outside the preset iteration times, adopting a comparison cutting method to correct the position judgment of the cutting frame, cutting the simplified and unified contact network image in the training set according to the cutting frame after the position correction, and using the cut image to augment the simplified and unified contact network image in the training set.
The loss function of the self-supervision model is:

$$L=\frac{1}{N_{L}}\left(\sum_{i=1}^{N_{m}}W_{i}L_{I_{q}I_{k}}^{(i)}+\sum_{j=1}^{N_{m}}W_{j}L_{P_{q}I_{k}}^{(j)}+\sum_{k=1}^{N_{m}}W_{k}L_{P_{q}P_{k}}^{(k)}+L_{dc}\right)$$

wherein L is the self-supervision model loss value; N_M is the number of multi-scale feature contrast losses and N_L the total number of loss terms; the three weighted sums are the multi-scale feature contrast loss values L_mfc; L_dc is the dense contrast loss between the global images I_q and I_k; and I_q, I_k are different views generated from the same simplified contact network image by mixed cropping.

W_i is the weight of the i-th scale feature contrast loss, W_j the weight of the j-th scale feature contrast loss, W_k the weight of the k-th scale feature contrast loss, and N_m the number of multi-scale features.

L^{(i)}_{I_qI_k} is the contrast loss value between I_q and I_k:

$$L_{I_{q}I_{k}}^{(i)}=-\log\frac{\exp\!\left(f_{q}^{i}(I_{q})\cdot f_{k}^{i}(I_{k}^{+})/T\right)}{\exp\!\left(f_{q}^{i}(I_{q})\cdot f_{k}^{i}(I_{k}^{+})/T\right)+\sum_{n=1}^{K}\exp\!\left(f_{q}^{i}(I_{q})\cdot f_{k}^{i}(I_{k,n}^{-})/T\right)}$$

where f_q^i(I_q) is I_q's i-th scale feature through the encoder; f_k^i(I_k^+) is the i-th scale feature of the I_k forming a positive pair with I_q; f_k^i(I_{k,n}^-) is the i-th scale feature of the n-th I_k forming a negative pair with I_q; T is the temperature hyper-parameter; and K is the negative-sample queue length.

L^{(j)}_{P_qI_k} is the contrast loss value between P_q and I_k, defined in the same form with f_q^j(P_q) as the query. P_q is a local image of I_q: I_q is split into several image blocks jigsaw-style, and each block is randomly cropped and then resized to the size of I_q to obtain a local image of I_q.

L^{(k)}_{P_qP_k} is the contrast loss value between P_q and P_k, defined in the same form with f_q^k(P_q) as the query and features of P_k as keys. P_k is a local image of I_k, obtained from I_k in the same jigsaw manner.

The dense contrast loss L_dc is computed on the minimum-scale features: the query is I_q's minimum-scale feature through the encoder, the positive key is the minimum-scale feature of the I_k forming a positive pair with I_q, and the negative keys are the minimum-scale features of the n-th I_k forming negative pairs with I_q.
The rules for converting the trained self-supervision model into the defect detection model are as follows:

the key set and the value set of the trained self-supervision model and of the defect detection model are converted according to a fuzzy matching principle;

the fuzzy matching principle is: take the defect detection model's key set C_dk as the standard key set and the trained self-supervision model's key set C_uk as the matching key set; remove the initial structure name from C_uk, perform full-character matching between the remaining key names of C_uk and the key names in C_dk to obtain key-name matching pairs, and store the value v_u corresponding to k_u under the matched k_d in the defect detection model as a key-value pair;

k_d and k_u form a key-name matching pair, where k_d is a key name of C_dk, k_u is a key name of C_uk, and v_u is a value in the trained self-supervision model's value set.
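The fuzzy-matching conversion described above can be sketched in Python. This is a minimal illustration under stated assumptions, not the patent's actual code: the prefix-stripping rule and the example key names (`encoder_q.` etc.) are assumptions.

```python
def fuzzy_match_state_dicts(detector_keys, pretrained_state):
    """Map trained upstream weights onto detector keys by fuzzy name matching.

    detector_keys plays the role of the standard key set C_dk; the keys of
    pretrained_state play the role of the matching key set C_uk. The initial
    structure name of each k_u is removed, the remainder is fully matched
    against k_d, and the matched value v_u is stored under k_d.
    """
    matched = {}
    for k_d in detector_keys:
        for k_u, v_u in pretrained_state.items():
            stripped = k_u.split(".", 1)[-1]   # drop the initial structure name
            if k_d == stripped or k_d.endswith("." + stripped):
                matched[k_d] = v_u             # store as a key-value pair
                break
    return matched
```

The resulting dictionary could then be loaded into the downstream model non-strictly (e.g. with `strict=False` in frameworks such as PyTorch), leaving unmatched detector keys at their fresh initialization.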
The loss function of the defect detection model is:
L_s = BCE(obj_y, obj_s) + BCE(cls_y, cls_s) + IOU(box_y, box_s)
wherein L_s is the defect detection model loss value; BCE denotes the BCEWithLogitsLoss binary cross-entropy loss; obj_y indicates whether the defect detection model predicts that the input image contains a defect, and obj_s whether the input image actually does; cls_y is the defect class predicted by the model and cls_s the actually present defect class; and IOU denotes the IoU loss measuring the overlap between the predicted box box_y and the ground-truth box box_s.
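The detection loss above can be sketched with numpy. This is a hedged illustration: the exact IoU-loss variant used by the patent is not specified, so a plain 1 − IoU term is assumed, and all function names here are hypothetical.

```python
import numpy as np

def bce_with_logits(logit, target):
    # numerically stable binary cross-entropy on a raw logit
    return np.maximum(logit, 0) - logit * target + np.log1p(np.exp(-np.abs(logit)))

def iou_loss(box_pred, box_true):
    # boxes as (x1, y1, x2, y2); assumed plain 1 - IoU penalty on overlap
    ix1, iy1 = max(box_pred[0], box_true[0]), max(box_pred[1], box_true[1])
    ix2, iy2 = min(box_pred[2], box_true[2]), min(box_pred[3], box_true[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    a = (box_pred[2] - box_pred[0]) * (box_pred[3] - box_pred[1])
    b = (box_true[2] - box_true[0]) * (box_true[3] - box_true[1])
    return 1.0 - inter / (a + b - inter)

def detection_loss(obj_logit, obj_true, cls_logit, cls_true, box_pred, box_true):
    # L_s = BCE(obj) + BCE(cls) + IoU-loss(box), mirroring the formula above
    return (bce_with_logits(obj_logit, obj_true)
            + bce_with_logits(cls_logit, cls_true)
            + iou_loss(box_pred, box_true))
```

A perfect prediction (confident correct logits, exactly overlapping boxes) drives all three terms toward zero.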
A simplified and unified catenary defect detection system, comprising:
the acquisition module is used for acquiring a simplified and unified contact net image;
the detection module inputs the simplified and unified contact net image into a pre-trained defect detection model to obtain a simplified and unified contact net defect detection result; the defect detection model is formed by converting a trained self-supervision model, a mixed cutting mode is adopted to amplify a simplified and unified contact network image in a training set when the self-supervision model is trained, multi-scale feature extraction is carried out on the simplified and unified contact network image, and the training is carried out by utilizing multi-scale feature contrast loss and dense contrast loss according to the extracted multi-scale features.
In the detection module, the simplified and unified contact network image in the training set is enhanced by adopting a mixed cutting mode, and the method comprises the following steps:
in the preset iteration times, cutting out simplified and unified contact network images in the training set by adopting a random cutting method, and amplifying the simplified and unified contact network images in the training set by using the cut-out images;
and updating parameters of the self-supervision model according to the weight of the self-supervision model outside the preset iteration times, adopting a comparison cutting method to correct the position judgment of the cutting frame, cutting the simplified and unified contact network image in the training set according to the cutting frame after the position correction, and using the cut image to augment the simplified and unified contact network image in the training set.
In the detection module, the loss function of the self-supervision model is:

$$L=\frac{1}{N_{L}}\left(\sum_{i=1}^{N_{m}}W_{i}L_{I_{q}I_{k}}^{(i)}+\sum_{j=1}^{N_{m}}W_{j}L_{P_{q}I_{k}}^{(j)}+\sum_{k=1}^{N_{m}}W_{k}L_{P_{q}P_{k}}^{(k)}+L_{dc}\right)$$

wherein L is the self-supervision model loss value; N_M is the number of multi-scale feature contrast losses and N_L the total number of loss terms; the three weighted sums are the multi-scale feature contrast loss values L_mfc; L_dc is the dense contrast loss between the global images I_q and I_k; and I_q, I_k are different views generated from the same simplified contact network image by mixed cropping.

W_i is the weight of the i-th scale feature contrast loss, W_j the weight of the j-th scale feature contrast loss, W_k the weight of the k-th scale feature contrast loss, and N_m the number of multi-scale features.

L^{(i)}_{I_qI_k} is the contrast loss value between I_q and I_k:

$$L_{I_{q}I_{k}}^{(i)}=-\log\frac{\exp\!\left(f_{q}^{i}(I_{q})\cdot f_{k}^{i}(I_{k}^{+})/T\right)}{\exp\!\left(f_{q}^{i}(I_{q})\cdot f_{k}^{i}(I_{k}^{+})/T\right)+\sum_{n=1}^{K}\exp\!\left(f_{q}^{i}(I_{q})\cdot f_{k}^{i}(I_{k,n}^{-})/T\right)}$$

where f_q^i(I_q) is I_q's i-th scale feature through the encoder; f_k^i(I_k^+) is the i-th scale feature of the I_k forming a positive pair with I_q; f_k^i(I_{k,n}^-) is the i-th scale feature of the n-th I_k forming a negative pair with I_q; T is the temperature hyper-parameter; and K is the negative-sample queue length.

L^{(j)}_{P_qI_k} is the contrast loss value between P_q and I_k, defined in the same form with f_q^j(P_q) as the query. P_q is a local image of I_q: I_q is split into several image blocks jigsaw-style, and each block is randomly cropped and then resized to the size of I_q to obtain a local image of I_q.

L^{(k)}_{P_qP_k} is the contrast loss value between P_q and P_k, defined in the same form with f_q^k(P_q) as the query and features of P_k as keys. P_k is a local image of I_k, obtained from I_k in the same jigsaw manner.

The dense contrast loss L_dc is computed on the minimum-scale features: the query is I_q's minimum-scale feature through the encoder, the positive key is the minimum-scale feature of the I_k forming a positive pair with I_q, and the negative keys are the minimum-scale features of the n-th I_k forming negative pairs with I_q.
In the detection module, the rules for converting the trained self-supervision model into the defect detection model are as follows:

the key set and the value set of the trained self-supervision model and of the defect detection model are converted according to a fuzzy matching principle;

the fuzzy matching principle is: take the defect detection model's key set C_dk as the standard key set and the trained self-supervision model's key set C_uk as the matching key set; remove the initial structure name from C_uk, perform full-character matching between the remaining key names of C_uk and the key names in C_dk to obtain key-name matching pairs, and store the value v_u corresponding to k_u under the matched k_d in the defect detection model as a key-value pair;

k_d and k_u form a key-name matching pair, where k_d is a key name of C_dk, k_u is a key name of C_uk, and v_u is a value in the trained self-supervision model's value set.
A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform a simplified and generalized catenary defect detection method.
The invention has the following beneficial effects: it augments simplified and unified contact network images by mixed cropping, extracts multi-scale features from them, trains the upstream self-supervised model with the multi-scale feature contrast loss and the dense contrast loss, converts the trained self-supervised model into a downstream defect detection model to be trained, and uses the trained defect detection model on the contact network images under test. It obtains high-precision detection results and is suited to situations where target samples are scarce and defect categories are numerous.
Drawings
Fig. 1 is a schematic diagram of a simplified catenary defect detection method.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
A simplified contact net defect detection method comprises the following steps:
step 1, acquiring a simplified and unified contact net image;
step 2, inputting the simplified and unified contact net image into a pre-trained defect detection model to obtain a simplified and unified contact net defect detection result; the defect detection model is formed by converting a trained self-supervision model, a mixed cutting mode is adopted to amplify a simplified and unified contact network image in a training set when the self-supervision model is trained, multi-scale feature extraction is carried out on the simplified and unified contact network image, and the training is carried out by utilizing multi-scale feature contrast loss and dense contrast loss according to the extracted multi-scale features.
The method augments simplified and unified contact network images by mixed cropping, extracts multi-scale features from them, trains the upstream self-supervised model with the multi-scale feature contrast loss and the dense contrast loss, converts the trained self-supervised model into a downstream defect detection model to be trained, and then applies the trained defect detection model to the contact network images under test, obtaining high-precision detection results. Because the images are augmented in this way, the method copes with scarce target samples; and because the multi-scale feature extraction captures the feature information contained in the image more completely, it can handle more complex detection problems and is therefore suited to cases with many defect categories.
In this method, the self-supervision model is a multi-scale dense-contrast-enhanced self-supervised network, and the defect detection model is an object detection network compatible with the upstream self-supervised network. The self-supervision model is trained first; the trained self-supervision model is then converted by transfer learning into the downstream defect detection model to be trained; the defect detection model is trained; and the trained defect detection model performs simplified contact network defect detection.
As shown in fig. 1, the self-supervision model is trained on ordinary, unlabeled simplified and unified contact network images. Features are extracted by principal component analysis (PCA) according to the feature similarity of the images, the features are clustered with the K-means method, the images are grouped by cluster, and each group is divided into a training set and a verification set according to a preset ratio.
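The PCA-plus-K-means grouping and split described above can be sketched as follows. This is a minimal numpy-only illustration under stated assumptions: the component count, cluster count, split ratio and all function names are hypothetical, not the patent's actual values.

```python
import numpy as np

def pca_features(X, n_components=2):
    # project flattened image vectors onto the top principal components
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def kmeans(X, k, iters=20, seed=0):
    # tiny Lloyd's algorithm stand-in for a library K-means
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def split_by_cluster(images, k=2, train_ratio=0.8, seed=0):
    # cluster PCA-reduced images, then split each cluster by the preset ratio
    feats = pca_features(np.asarray([im.ravel() for im in images]))
    labels = kmeans(feats, k, seed=seed)
    train, val = [], []
    for j in range(k):
        idx = np.flatnonzero(labels == j)
        cut = int(round(train_ratio * len(idx)))
        train.extend(idx[:cut])
        val.extend(idx[cut:])
    return sorted(train), sorted(val)
```

Splitting per cluster, rather than over the whole pool, keeps the training and verification sets similarly distributed across the discovered image groups.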
During this training, the simplified and unified contact network images in the training set are augmented by mixed cropping, specifically as follows:
1) In the preset iteration times, cutting out simplified and unified contact network images in the training set by adopting a random cutting method, and amplifying the simplified and unified contact network images in the training set by using the cut-out images;
Specifically, the original image is cropped with a random size and a random aspect ratio, the crop is resized to the set pixel size (800, 800), and the brightness, contrast, grayscale and flipping of the image are adjusted with certain probabilities; the result serves as an augmented version of the original image.
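The random-cropping step above can be sketched with numpy. This is a hedged illustration: the scale/aspect-ratio ranges and jitter probabilities are assumptions (the patent only fixes the 800×800 output size), and nearest-neighbour resizing stands in for whatever interpolation the real pipeline uses.

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    # nearest-neighbour resize via index mapping
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def random_crop_augment(img, out_size=800, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    # random area and random aspect ratio (assumed ranges)
    scale = rng.uniform(0.2, 1.0)
    ratio = rng.uniform(0.75, 1.333)
    ch = max(1, min(h, int(round(np.sqrt(scale * h * w / ratio)))))
    cw = max(1, min(w, int(round(np.sqrt(scale * h * w * ratio)))))
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    crop = img[y:y + ch, x:x + cw]
    out = resize_nn(crop, out_size, out_size).astype(np.float32)
    # photometric jitter and flipping, each applied with some probability
    if rng.random() < 0.5:
        out = np.clip(out * rng.uniform(0.8, 1.2), 0, 255)
    if rng.random() < 0.5:
        out = out[:, ::-1]
    return out
```

In a real pipeline this corresponds to a random-resized-crop transform followed by colour jitter and random flips.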
2) Updating parameters of the self-supervision model according to the weight of the self-supervision model outside the preset iteration times, adopting a comparison cutting method to correct position judgment of the cutting frame, cutting the simplified and unified contact network image in the training set according to the cutting frame after the position correction, and using the cut image to augment the simplified and unified contact network image in the training set;
Within the preset number of iterations, the rough position of the object in the original image is learned through random cropping. Beyond the preset iterations, this position is taken as the center, and sensitivity is assigned to the central neighborhood through histogram change: neighborhoods that tend to agree with the central pixel receive higher sensitivity, and others receive lower sensitivity. The cropping box is biased toward the center and the higher-sensitivity side, the crop is then resized to the set pixel size (800, 800), and the brightness, contrast, grayscale and flipping of the image are adjusted with certain probabilities to produce an augmented version of the original image.
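The sensitivity-biased contrast cropping is only loosely specified above, so the following is a rough numpy sketch of one plausible reading: intensity similarity to the learned center pixel stands in for the histogram-based weighting, and the crop box is pulled toward the centroid of the high-sensitivity region. All thresholds and function names are assumptions.

```python
import numpy as np

def sensitivity_map(img, center, tol=16):
    # pixels close in intensity to the learned center pixel get high
    # sensitivity; dissimilar pixels get low sensitivity (assumed rule)
    cy, cx = center
    return (np.abs(img.astype(int) - int(img[cy, cx])) < tol).astype(float)

def biased_crop_box(img, center, size):
    # shift the crop window from the center toward the high-sensitivity side
    sens = sensitivity_map(img, center)
    h, w = img.shape
    cy, cx = center
    ys, xs = np.nonzero(sens)
    # centroid of the sensitive region pulls the box off-center
    by = int(round((ys.mean() + cy) / 2)) if len(ys) else cy
    bx = int(round((xs.mean() + cx) / 2)) if len(xs) else cx
    y0 = min(max(0, by - size // 2), h - size)
    x0 = min(max(0, bx - size // 2), w - size)
    return y0, x0, y0 + size, x0 + size
```

On an image whose right half matches the center pixel, the crop box shifts rightward from the naive centered position, as the description intends.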
After augmentation by mixed cropping, the same image yields two global images I_q and I_k, which are two different views. Taking I_q and I_k as input, multi-scale features are extracted, and the contrast loss function between I_q and I_k is obtained by the following processing:
A1) Feed I_q and I_k into the encoder f_q to obtain N_m = 3 feature maps of different scales; feed the feature maps of different scales into multi-layer perceptrons (MLPs) whose parameters are not shared; and concatenate the outputs of the MLPs along the feature dimension.
A2) Put the multi-scale features of I_q and I_k from different images into a negative-sample queue. The key encoder f_k is initialized with the same parameters as f_q, and its parameters do not participate in back-propagation; the key encoder is run for consistency, i.e. f_q should be able to obtain sample features from a similar or identical f_k, which guarantees the validity of the comparison.
A3) Use the encoder features of I_q as the query representation, the matching features of I_k as the similar (positive) sample representation, and the queued features as the dissimilar (negative) sample representations: if I_q and I_k come from the same image they form a positive pair, otherwise a negative pair.
To make the similarity between the query and its positive key as large as possible while keeping its similarity to the other, negative keys as small as possible, the contrast loss function between the global images I_q and I_k of the same image is:

$$L_{I_{q}I_{k}}^{(i)}=-\log\frac{\exp\!\left(f_{q}^{i}(I_{q})\cdot f_{k}^{i}(I_{k}^{+})/T\right)}{\exp\!\left(f_{q}^{i}(I_{q})\cdot f_{k}^{i}(I_{k}^{+})/T\right)+\sum_{n=1}^{K}\exp\!\left(f_{q}^{i}(I_{q})\cdot f_{k}^{i}(I_{k,n}^{-})/T\right)}$$

where L^{(i)}_{I_qI_k} is the contrast loss value between I_q and I_k at scale i; f_q^i(I_q) is I_q's i-th scale feature through encoder f_q; f_k^i(I_k^+) is the i-th scale feature, through the key encoder, of the I_k forming a positive pair with I_q; f_k^i(I_{k,n}^-) is the i-th scale feature of the n-th I_k forming a negative pair with I_q; K is the negative-sample queue length; N_m is the number of multi-scale features; and T is the temperature hyper-parameter, which pushes highly similar negative samples apart and makes the learned representation space more uniform.
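The per-scale contrast loss above, and its weighted combination across scales, can be sketched with numpy. This is an illustrative InfoNCE implementation consistent with the formula, not the patent's code; the L2-normalization of features is an assumption.

```python
import numpy as np

def info_nce(q, k_pos, k_negs, T=0.07):
    """Contrast loss for one scale, matching the formula above.

    q      : (d,)   query feature, e.g. f_q^i(I_q)
    k_pos  : (d,)   positive key,  e.g. f_k^i(I_k+) from the same image
    k_negs : (K, d) negative keys from the queue
    """
    def norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    q, k_pos, k_negs = norm(q), norm(k_pos), norm(k_negs)
    logits = np.concatenate(([q @ k_pos], k_negs @ q)) / T
    logits -= logits.max()                       # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[0])                         # positive sits at index 0

def multi_scale_loss(qs, k_poss, k_negs_list, weights, T=0.07):
    # weighted sum of per-scale contrast losses (the W_i in the text)
    return sum(w * info_nce(q, kp, kn, T)
               for w, q, kp, kn in zip(weights, qs, k_poss, k_negs_list))
```

A query aligned with its positive key and orthogonal to the queued negatives gives a near-zero loss; a query aligned with a negative instead gives a large one.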
Taking I_q and I_k as input, local images P_q and P_k are obtained by tile (jigsaw) augmentation, specifically as follows: I_k is split into several image blocks jigsaw-style, and each block is randomly cropped and then resized to the size of I_k, giving I_k's local image P_k; I_q is split into several image blocks jigsaw-style, and each block is randomly cropped and then resized to the size of I_q, giving I_q's local image P_q. Multi-scale features are then extracted further, and the contrast losses involving P_q and P_k are obtained by the following processing:
B1) Split I_q and I_k into several local images jigsaw-style and feed the local images into the encoder f_q to obtain N_m = 3 feature maps of different scales; feed the feature maps of different scales into mutually non-shared MLPs and concatenate the outputs of the MLPs along the feature dimension.
B2 P) to be different pictures q 、I k Is a multi-scale feature of (2)Into a negative sample queue, key encoder f k Parameter initialization time and f q Is kept consistent and its parameters do not participate in the back propagation.
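The negative-sample queue is a simple FIFO buffer of key features; a sketch under the assumption that keys are fixed-length feature vectors:

```python
from collections import deque
import numpy as np

class NegativeQueue:
    """FIFO queue of key features: new keys enter at the tail and the
    oldest keys are dropped once the queue length K is reached."""
    def __init__(self, K):
        self.buf = deque(maxlen=K)

    def enqueue(self, keys):
        """keys: iterable of (D,) feature vectors."""
        self.buf.extend(keys)

    def negatives(self):
        """Return the current negatives as a (<=K, D) array for the loss."""
        return np.stack(list(self.buf))
```

Because `deque(maxlen=K)` discards from the head automatically, the "remove the head when the upper limit is reached" behaviour described later in the text comes for free.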
B3) Take $p_j$ (the $j$-th scale feature of $P_q$) as the query representation, $k_j^+$ as its similar-sample representation, and $k_{j,n}^-$ as dissimilar-sample representations: if $P_q$ and $I_k$ come from the same image, the pair is positive, otherwise it is negative.

To make the similarity between $p_j$ and $k_j^+$ as large as possible while keeping its similarity to every other $k_{j,n}^-$ as small as possible, the contrastive loss function between the local image $P_q$ and the global image $I_k$ of the same image is

$$L_{mfc}^{lg} = -\sum_{j=1}^{N_m} \log \frac{\exp(p_j \cdot k_j^+ / T)}{\exp(p_j \cdot k_j^+ / T) + \sum_{n=1}^{K} \exp(p_j \cdot k_{j,n}^- / T)}$$

wherein $L_{mfc}^{lg}$ is the contrastive loss value between $P_q$ and $I_k$; $p_j$ is the $j$-th scale feature of $P_q$ through the encoder; $k_j^+$ is the $j$-th scale feature of the $I_k$ paired positively with $P_q$; and $k_{j,n}^-$ is the $j$-th scale feature of the $n$-th $I_k$ paired negatively with $P_q$.
B4) The local images $P_q$, $P_k$ of the same image are fed through the encoders, while the multi-scale features of $P_q$, $P_k$ from different images are placed into the negative-sample queue. The key encoder $f_k$ is initialised with the same parameters as $f_q$, and its parameters do not participate in back-propagation.

B5) Take $u_k$ (the $k$-th scale feature of $P_q$ in $f_q$) as the query representation, $u_k^+$ as its similar-sample representation, and $u_{k,n}^-$ as dissimilar-sample representations: if $P_q$ and $P_k$ come from the same image, the pair is positive, otherwise it is negative.
To make the similarity between $u_k$ and $u_k^+$ as large as possible while keeping its similarity to every other $u_{k,n}^-$ as small as possible, the contrastive loss function between the local images $P_q$, $P_k$ of the same image is

$$L_{mfc}^{ll} = -\sum_{k=1}^{N_m} \log \frac{\exp(u_k \cdot u_k^+ / T)}{\exp(u_k \cdot u_k^+ / T) + \sum_{n=1}^{K} \exp(u_k \cdot u_{k,n}^- / T)}$$

wherein $L_{mfc}^{ll}$ is the contrastive loss value between $P_q$ and $P_k$; $u_k$ is the $k$-th scale feature of $P_q$ through the encoder; $u_k^+$ is the $k$-th scale feature of the $P_k$ paired positively with $P_q$; and $u_{k,n}^-$ is the $k$-th scale feature of the $n$-th $P_k$ paired negatively with $P_q$.
Taking $I_q$, $I_k$ as input, multi-scale features are extracted and the smallest-scale feature map (the general feature map) is fed into the Dense perceptron Dense MLP to obtain the dense contrastive loss function between the global images $I_q$, $I_k$. This dense loss is combined with the multi-scale feature contrastive losses to give the loss function of the contrastive-loss module, which serves as the total loss function of the model; after iteration, the self-supervision model used for transfer learning is obtained. The procedure can be as follows:
C1) $I_q$ and $I_k$ are each fed into encoder $f_q$, obtaining feature maps at $N_m$ different scales; the smallest-scale feature map is sent into the dense perceptron, which uses a convolutional layer in place of a linear layer so as to output dense feature information.

C2) The corresponding features are put into the key encoder $f_k$; the parameters of $f_k$ are initialised by directly copying those of $f_q$, and they do not participate in back-propagation.
C3) Take $d_q$ (the smallest-scale dense feature of $I_q$) as the query representation, $d^+$ as its similar-sample representation, and $d_n^-$ as dissimilar-sample representations: if the two features come from the same image, the pair is positive, otherwise it is negative.

To make the similarity between $d_q$ and $d^+$ as large as possible while keeping its similarity to every other $d_n^-$ as small as possible, the dense contrastive loss function between $I_q$, $I_k$ of the same image is

$$L_{dc} = -\log \frac{\exp(d_q \cdot d^+ / T)}{\exp(d_q \cdot d^+ / T) + \sum_{n=1}^{K} \exp(d_q \cdot d_n^- / T)}$$

wherein $L_{dc}$ is the dense contrastive loss value between the global images $I_q$, $I_k$; $d_q$ is the smallest-scale feature of $I_q$ through the encoder; $d^+$ is the smallest-scale feature of the $I_k$ paired positively with $I_q$; and $d_n^-$ is the smallest-scale feature of the $n$-th $I_k$ paired negatively with $I_q$.
After the images in a batch enter $f_q$, their negative-sample multi-scale features are appended to the tail of the negative-sample queue; when the queue reaches its length limit, the head of the queue is removed. Updating the queue in this way lets the model read in more negative samples. The parameters of $f_k$ are a momentum update of those of $f_q$, with update formula:

$$\theta_k \leftarrow m\,\theta_k + (1 - m)\,\theta_q$$

wherein $m$ is the momentum coefficient, $\theta_q$ are the network structure parameters of $f_q$, and $\theta_k$ are the network structure parameters of $f_k$.

The encoder is the concrete network architecture of the multi-scale feature extraction module; it consists of a 53-layer backbone network, three multi-layer perceptron MLPs, and a Dense perceptron Dense MLP. The key encoder $f_k$ has the same architecture as encoder $f_q$, its parameters are momentum-updated from those of $f_q$ according to $\theta_k \leftarrow m\,\theta_k + (1 - m)\,\theta_q$, and the query and key representations differ only in which perceptron they pass through.
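The momentum update of the key encoder can be sketched parameter-wise; representing each network's parameters as a name-to-array dict is our own simplification:

```python
import numpy as np

def momentum_update(theta_q, theta_k, m=0.999):
    """theta_k <- m * theta_k + (1 - m) * theta_q, applied per parameter.

    theta_q, theta_k: dicts mapping parameter names to numpy arrays
    m: momentum coefficient in [0, 1); large m makes f_k evolve slowly
    """
    return {name: m * theta_k[name] + (1.0 - m) * theta_q[name]
            for name in theta_k}
```

With m close to 1, the key encoder drifts slowly toward the query encoder, which keeps the keys stored in the negative-sample queue consistent across iterations.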
C4 Combining the multi-scale characteristic contrast loss function and the dense contrast loss function to obtain a total loss function, and obtaining a self-supervision model for transfer learning after iteration.
The loss function of the self-supervision model is

$$L = \sum_{i=1}^{N_m} W_i L_{mfc}^{g,i} + \sum_{j=1}^{N_m} W_j L_{mfc}^{lg,j} + \sum_{k=1}^{N_m} W_k L_{mfc}^{ll,k} + L_{dc}$$

wherein $L$ is the self-supervision model loss value; $N_M = 9$ is the number of multi-scale feature contrastive losses (three pairings at $N_m = 3$ scales each); $N_L = 10$ is the total number of loss terms; $L_{mfc}$ denotes a multi-scale feature contrastive loss value; $W_i$ is the weight of the $i$-th scale feature contrastive loss, $W_j$ the weight of the $j$-th, and $W_k$ the weight of the $k$-th.
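One reading of this combination (three pairings at $N_m$ scales plus the dense term, giving $N_L$ terms in total) is the following sketch; whether the patent averages or merely sums the weighted terms is not fully specified, so the normalisation here is an assumption:

```python
def model_loss(gg, lg, ll, dense, w_gg, w_lg, w_ll):
    """Total self-supervised loss: weighted multi-scale contrastive terms
    (3 pairings x N_m scales = N_M terms) plus the dense contrastive term,
    averaged over the N_L = N_M + 1 loss terms.

    gg, lg, ll: per-scale loss values for the global-global, local-global
                and local-local pairings; w_*: matching per-scale weights.
    """
    terms = ([w * l for w, l in zip(w_gg, gg)]
             + [w * l for w, l in zip(w_lg, lg)]
             + [w * l for w, l in zip(w_ll, ll)]
             + [dense])
    return sum(terms) / len(terms)
```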
Migration (transfer) learning is performed with the trained upstream self-supervision model, which is converted into the downstream defect detection model to be trained. The conversion rule between the two models is as follows:

The key set and value set of the trained self-supervision model are converted into those of the defect detection model according to the fuzzy matching principle. The fuzzy matching principle is: take the key set $C_{dk}$ of the defect detection model as the standard key set and the trained self-supervision model key set $C_{uk}$ as the matching key set; remove the discarded initial structure names from $C_{uk}$, perform full-character matching between the remaining key names of $C_{uk}$ and the key names in $C_{dk}$ to obtain key-name matching pairs, and for each matching pair $(k_d, k_u)$ store the value $v_u$ corresponding to $k_u$ into the defect detection model as a key-value pair. Here $k_d$ is a key name of $C_{dk}$, $k_u$ is a key name of $C_{uk}$, and $v_u$ is a value in the trained self-supervision model's value set.

Specifically: among $k_u \in C_{uk}$, the layer structure names beginning with 'encoder_k' are removed, the layer structure names containing 'mlp' are removed, and the remaining layer structure names beginning with 'encoder_q' are full-character matched against the layer structure names of $k_d \in C_{dk}$; each matched $k_u$ is changed to its $k_d$ and the corresponding $v_u$ is stored into the pre-training model as a key-value pair; unmatched entries are skipped.
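The fuzzy key matching could be sketched as a plain dictionary filter; the prefix and substring rules follow the text above, while the function name and argument types are illustrative:

```python
def fuzzy_match_state(downstream_keys, upstream_state):
    """Transfer upstream (self-supervised) weights into the downstream
    defect-detection model by fuzzy key matching:
      - keys beginning with 'encoder_k' (the key encoder) are dropped,
      - keys containing 'mlp' (the projection heads) are dropped,
      - the 'encoder_q.' prefix is stripped and the remainder is matched
        character-for-character against the downstream key names.
    Unmatched keys are simply skipped."""
    out = {}
    for k_u, v_u in upstream_state.items():
        if k_u.startswith('encoder_k') or 'mlp' in k_u:
            continue
        if k_u.startswith('encoder_q.'):
            k_u = k_u[len('encoder_q.'):]
        if k_u in downstream_keys:
            out[k_u] = v_u
    return out
```

In a PyTorch setting the returned dict would be loaded with something like `model.load_state_dict(out, strict=False)`, so the unmatched detection-head layers keep their fresh initialisation.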
The defect detection model is trained with simplified and unified contact net defect images, specifically:

D1) Produce a training set and a verification set covering the tube-cap defects, insulator contamination, and backup and parent defects of the simplified and unified overhead contact system.

D2) After the defect images are resized, they are fed into the defect detection module, and the defect detection model is trained iteratively.
The loss function of the defect detection model is:
$$L_s = \mathrm{BCE}(obj_y, obj_s) + \mathrm{BCE}(cls_y, cls_s) + \mathrm{IoU}(box_y, box_s)$$

wherein $L_s$ is the defect detection model loss value; BCE denotes the BCEWithLogitsLoss binary cross-entropy loss function; $obj_y$ indicates whether the defect detection model predicts a defect in the input image, and $obj_s$ indicates whether the input image actually contains a defect; $cls_y$ is the defect class predicted by the defect detection model, and $cls_s$ is the actually present defect class; IoU denotes an IoU loss measuring the overlap between the predicted box $box_y$ and the ground-truth box $box_s$.
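The detection loss combines a logit-based binary cross-entropy with an IoU box loss. A NumPy sketch for a single prediction follows; the single-box, per-class-BCE layout and all names are our own assumptions:

```python
import numpy as np

def bce_with_logits(logit, target):
    """Numerically stable binary cross-entropy on a raw logit."""
    return np.maximum(logit, 0) - logit * target + np.log1p(np.exp(-abs(logit)))

def iou_loss(box_p, box_t):
    """1 - IoU for axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_p[0], box_t[0]); y1 = max(box_p[1], box_t[1])
    x2 = min(box_p[2], box_t[2]); y2 = min(box_p[3], box_t[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_p) + area(box_t) - inter
    return 1.0 - inter / union

def detection_loss(obj_logit, obj_gt, cls_logits, cls_gt, box_p, box_t):
    """L_s = BCE(obj) + BCE(cls) + IoU-loss(box), per the formula above."""
    l_obj = bce_with_logits(obj_logit, obj_gt)
    l_cls = sum(bce_with_logits(l, t) for l, t in zip(cls_logits, cls_gt))
    return float(l_obj + l_cls + iou_loss(box_p, box_t))
```

A perfectly overlapping box contributes zero IoU loss, so the remaining loss comes entirely from the objectness and class terms.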
After training, defect detection can be performed directly: a simplified and unified contact network image is acquired and input into the trained defect detection model, yielding the simplified and unified contact net defect detection result, specifically the defect class, the confidence, and the prediction box.

The method achieves high-precision detection results and is suited to situations where target samples are scarce and defect types are numerous.
Based on the same technical scheme, the invention also discloses a software system of the method, a simplified and unified contact net defect detection system, which comprises:
and the acquisition module acquires a simplified and unified contact network image.
The detection module inputs the simplified and unified contact net image into a pre-trained defect detection model to obtain a simplified and unified contact net defect detection result; the defect detection model is formed by converting a trained self-supervision model, a mixed cutting mode is adopted to amplify a simplified and unified contact network image in a training set when the self-supervision model is trained, multi-scale feature extraction is carried out on the simplified and unified contact network image, and the training is carried out by utilizing multi-scale feature contrast loss and dense contrast loss according to the extracted multi-scale features.
In the detection module, the simplified and unified contact network image in the training set is enhanced by adopting a mixed cutting mode, and the method comprises the following steps:
in the preset iteration times, cutting out simplified and unified contact network images in the training set by adopting a random cutting method, and amplifying the simplified and unified contact network images in the training set by using the cut-out images;
and updating parameters of the self-supervision model according to the weight of the self-supervision model outside the preset iteration times, adopting a comparison cutting method to correct the position judgment of the cutting frame, cutting the simplified and unified contact network image in the training set according to the cutting frame after the position correction, and using the cut image to augment the simplified and unified contact network image in the training set.
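The hybrid cropping schedule (plain random crops during warm-up, then crop boxes corrected toward salient regions) might be sketched like this; how the attention heatmap is derived from the model weights is left unspecified by the text, so it is passed in as an input here:

```python
import numpy as np

def random_crop_box(H, W, scale=0.6, rng=None):
    """A uniformly placed crop box of side ratio `scale`, as (x1, y1, x2, y2)."""
    if rng is None:
        rng = np.random.default_rng()
    ch, cw = int(H * scale), int(W * scale)
    y = int(rng.integers(0, H - ch + 1))
    x = int(rng.integers(0, W - cw + 1))
    return (x, y, x + cw, y + ch)

def mixed_crop_box(H, W, step, warmup_steps, heatmap=None, scale=0.6, rng=None):
    """Hybrid cropping: random crops during the first `warmup_steps`
    iterations; afterwards, the crop box is centred on the peak of a
    model-derived attention heatmap (the position-corrected crop).
    heatmap: (H, W) array of non-negative saliency scores (assumed)."""
    if step < warmup_steps or heatmap is None:
        return random_crop_box(H, W, scale, rng)
    ch, cw = int(H * scale), int(W * scale)
    py, px = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    y = int(np.clip(py - ch // 2, 0, H - ch))
    x = int(np.clip(px - cw // 2, 0, W - cw))
    return (x, y, x + cw, y + ch)
```

The warm-up phase gives the model time to learn a meaningful saliency signal before that signal is trusted to steer the crops.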
In the detection module, the loss function of the self-supervision model is as follows:
$$L = \sum_{i=1}^{N_m} W_i L_{mfc}^{g,i} + \sum_{j=1}^{N_m} W_j L_{mfc}^{lg,j} + \sum_{k=1}^{N_m} W_k L_{mfc}^{ll,k} + L_{dc}$$

wherein $L$ is the self-supervision model loss value; $N_M$ is the number of multi-scale feature contrastive losses and $N_L$ the total number of loss terms; $L_{mfc}$ denotes a multi-scale feature contrastive loss value; $L_{dc}$ is the dense contrastive loss value between the global images $I_q$, $I_k$, which are different views generated from the same simplified and unified contact network image by hybrid cropping; $W_i$, $W_j$ and $W_k$ are the weights of the $i$-th, $j$-th and $k$-th scale feature contrastive losses; $N_m$ is the number of multi-scale features; $T$ is the temperature hyper-parameter; and $K$ is the negative-sample queue length.

For the global pair, $L_{mfc}^{g}$ is the contrastive loss between $I_q$ and $I_k$, computed from $q_i$ (the $i$-th scale feature of $I_q$ through the encoder), its positive key (the $i$-th scale feature of the $I_k$ paired positively with $I_q$) and its negative keys (the $i$-th scale features of the $n$-th $I_k$ paired negatively with $I_q$).

For the local-global pair, $L_{mfc}^{lg}$ is the contrastive loss between $P_q$ and $I_k$, computed analogously from the $j$-th scale features; $P_q$ is a local image of $I_q$, obtained by splitting $I_q$ into a plurality of image blocks in jigsaw fashion and resizing each block, after random cropping, to the size of $I_q$.

For the local pair, $L_{mfc}^{ll}$ is the contrastive loss between $P_q$ and $P_k$, computed analogously from the $k$-th scale features; $P_k$ is a local image of $I_k$, obtained in the same way from $I_k$.

$L_{dc}$ is computed from the smallest-scale features of $I_q$ and $I_k$ through the encoder, with the positive and negative pairs defined as above.

In the detection module, the rule for converting the trained self-supervision model into the defect detection model is as follows:
the key set and the value set of the self-supervision model and the defect detection model after training are converted according to the fuzzy matching principle;
the fuzzy matching principle is as follows: key set C of defect detection model dk As a standard key set, a trained self-supervision model key set C uk As a set of matching keys, remove C uk The initial structure name of (C) uk The remaining key names and C dk Performing full character matching on key names in the key name matching pair, obtaining a key name matching pair, and matching k in the key name matching pair d And k u Corresponding v u Storing the defect detection model in a key value pair mode;
k d and k u Form key name matching pairs, k d Is C dk Key name, k of (a) u Is C uk Key name, v of (v) u Is a value in the trained self-supervising model value set.
The data processing flow of each module of the system is consistent with the corresponding steps of the method, and the description is not repeated here.
Based on the same technical solution, the present invention also discloses a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform a simplified and unified catenary defect detection method.
Based on the same technical scheme, the invention also discloses a computing device, comprising one or more processors, one or more memories, and one or more programs, wherein the one or more programs are stored in the one or more memories and configured to be executed by the one or more processors, and the one or more programs comprise instructions for executing the simplified and unified contact net defect detection method.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is illustrative of the present invention and is not to be construed as limiting thereof, but rather as providing for the use of additional embodiments and advantages of all such modifications, equivalents, improvements and similar to the present invention are intended to be included within the scope of the present invention as defined by the appended claims.
Claims (10)
1. A simplified and unified contact net defect detection method, characterized by comprising the following steps:
acquiring a simplified and unified contact net image;
inputting the simplified and unified contact net image into a pre-trained defect detection model to obtain a simplified and unified contact net defect detection result; the defect detection model is formed by converting a trained self-supervision model, and when the self-supervision model is trained, a mixed cutting mode is adopted to amplify a simplified and unified contact network image in a training set, multi-scale feature extraction is carried out on the simplified and unified contact network image, and multi-scale feature contrast loss and dense contrast loss are utilized for training.
2. The simplified and unified catenary defect detection method of claim 1, wherein the simplified and unified catenary image in the training set is augmented by adopting a hybrid clipping mode, comprising:
in the preset iteration times, cutting out simplified and unified contact network images in the training set by adopting a random cutting method, and amplifying the simplified and unified contact network images in the training set by using the cut-out images;
and updating parameters of the self-supervision model according to the weight of the self-supervision model outside the preset iteration times, adopting a comparison cutting method to correct the position judgment of the cutting frame, cutting the simplified and unified contact network image in the training set according to the cutting frame after the position correction, and using the cut image to augment the simplified and unified contact network image in the training set.
3. The simplified and unified catenary defect detection method of claim 1, wherein the loss function of the self-supervision model is:
$$L = \sum_{i=1}^{N_m} W_i L_{mfc}^{g,i} + \sum_{j=1}^{N_m} W_j L_{mfc}^{lg,j} + \sum_{k=1}^{N_m} W_k L_{mfc}^{ll,k} + L_{dc}$$

wherein $L$ is the self-supervision model loss value; $N_M$ is the number of multi-scale feature contrastive losses and $N_L$ the total number of loss terms; $L_{mfc}$ denotes a multi-scale feature contrastive loss value; $L_{dc}$ is the dense contrastive loss value between the global images $I_q$, $I_k$, which are different views generated from the same simplified and unified contact network image by hybrid cropping; $W_i$, $W_j$ and $W_k$ are the weights of the $i$-th, $j$-th and $k$-th scale feature contrastive losses; $N_m$ is the number of multi-scale features; $T$ is the temperature hyper-parameter; and $K$ is the negative-sample queue length;

$L_{mfc}^{g}$ is the contrastive loss between $I_q$ and $I_k$, computed from the $i$-th scale feature of $I_q$ through the encoder, the $i$-th scale feature of the $I_k$ paired positively with $I_q$, and the $i$-th scale features of the $n$-th $I_k$ paired negatively with $I_q$;

$L_{mfc}^{lg}$ is the contrastive loss between $P_q$ and $I_k$, computed analogously from the $j$-th scale features, wherein $P_q$ is a local image of $I_q$ obtained by splitting $I_q$ into a plurality of image blocks in jigsaw fashion and resizing each block, after random cropping, to the size of $I_q$;

$L_{mfc}^{ll}$ is the contrastive loss between $P_q$ and $P_k$, computed analogously from the $k$-th scale features, wherein $P_k$ is a local image of $I_k$ obtained in the same way from $I_k$;

$L_{dc}$ is computed from the smallest-scale features of $I_q$ and $I_k$ through the encoder, with the positive and negative pairs defined as above.
4. The simplified and unified catenary defect detection method of claim 1, wherein the rules for converting the trained self-supervision model into the defect detection model are as follows:
the key set and the value set of the self-supervision model and the defect detection model after training are converted according to the fuzzy matching principle;
the fuzzy matching principle is as follows: key set C of defect detection model dk As a standard key set, a trained self-supervision model key set C uk As a set of matching keys, remove C uk The initial structure name of (C) uk The remaining key names and C dk Performing full character matching on key names in the key name matching pair, obtaining a key name matching pair, and matching k in the key name matching pair d And k u Corresponding v u Storing the defect detection model in a key value pair mode;
k d and k u Form key name matching pairs, k d Is C dk Key name, k of (a) u Is C uk Key name, v of (v) u Is a value in the trained self-supervising model value set.
5. The simplified and unified catenary defect detection method of claim 1, wherein the defect detection model has a loss function of:
$$L_s = \mathrm{BCE}(obj_y, obj_s) + \mathrm{BCE}(cls_y, cls_s) + \mathrm{IoU}(box_y, box_s)$$

wherein $L_s$ is the defect detection model loss value; BCE denotes the BCEWithLogitsLoss binary cross-entropy loss function; $obj_y$ indicates whether the defect detection model predicts a defect in the input image, and $obj_s$ indicates whether the input image actually contains a defect; $cls_y$ is the defect class predicted by the defect detection model, and $cls_s$ is the actually present defect class; IoU denotes an IoU loss measuring the overlap between the predicted box $box_y$ and the ground-truth box $box_s$.
6. A simplified and unified catenary defect detection system, comprising:
the acquisition module is used for acquiring a simplified and unified contact net image;
the detection module inputs the simplified and unified contact net image into a pre-trained defect detection model to obtain a simplified and unified contact net defect detection result; the defect detection model is formed by converting a trained self-supervision model, and when the self-supervision model is trained, a mixed cutting mode is adopted to amplify a simplified and unified contact network image in a training set, multi-scale feature extraction is carried out on the simplified and unified contact network image, and multi-scale feature contrast loss and dense contrast loss are utilized for training.
7. The simplified and unified catenary defect detection system of claim 6, wherein the simplified and unified catenary image in the training set is augmented by hybrid clipping in the detection module, comprising:
in the preset iteration times, cutting out simplified and unified contact network images in the training set by adopting a random cutting method, and amplifying the simplified and unified contact network images in the training set by using the cut-out images;
and updating parameters of the self-supervision model according to the weight of the self-supervision model outside the preset iteration times, adopting a comparison cutting method to correct the position judgment of the cutting frame, cutting the simplified and unified contact network image in the training set according to the cutting frame after the position correction, and using the cut image to augment the simplified and unified contact network image in the training set.
8. The simplified and unified catenary defect detection system of claim 6, wherein in the detection module, the loss function of the self-supervision model is:
$$L = \sum_{i=1}^{N_m} W_i L_{mfc}^{g,i} + \sum_{j=1}^{N_m} W_j L_{mfc}^{lg,j} + \sum_{k=1}^{N_m} W_k L_{mfc}^{ll,k} + L_{dc}$$

wherein $L$ is the self-supervision model loss value; $N_M$ is the number of multi-scale feature contrastive losses and $N_L$ the total number of loss terms; $L_{mfc}$ denotes a multi-scale feature contrastive loss value; $L_{dc}$ is the dense contrastive loss value between the global images $I_q$, $I_k$, which are different views generated from the same simplified and unified contact network image by hybrid cropping; $W_i$, $W_j$ and $W_k$ are the weights of the $i$-th, $j$-th and $k$-th scale feature contrastive losses; $N_m$ is the number of multi-scale features; $T$ is the temperature hyper-parameter; and $K$ is the negative-sample queue length;

$L_{mfc}^{g}$ is the contrastive loss between $I_q$ and $I_k$, computed from the $i$-th scale feature of $I_q$ through the encoder, the $i$-th scale feature of the $I_k$ paired positively with $I_q$, and the $i$-th scale features of the $n$-th $I_k$ paired negatively with $I_q$;

$L_{mfc}^{lg}$ is the contrastive loss between $P_q$ and $I_k$, computed analogously from the $j$-th scale features, wherein $P_q$ is a local image of $I_q$ obtained by splitting $I_q$ into a plurality of image blocks in jigsaw fashion and resizing each block, after random cropping, to the size of $I_q$;

$L_{mfc}^{ll}$ is the contrastive loss between $P_q$ and $P_k$, computed analogously from the $k$-th scale features, wherein $P_k$ is a local image of $I_k$ obtained in the same way from $I_k$;

$L_{dc}$ is computed from the smallest-scale features of $I_q$ and $I_k$ through the encoder, with the positive and negative pairs defined as above.
9. The simplified and unified catenary defect detection system of claim 6, wherein in the detection module, the rule for converting the trained self-supervision model into the defect detection model is as follows:
the key set and the value set of the self-supervision model and the defect detection model after training are converted according to the fuzzy matching principle;
the fuzzy matching principle is as follows: key set C of defect detection model dk As a standard key set, a trained self-supervision model key set C uk As a set of matching keys, remove C uk The initial structure name of (C) uk The remaining key names and C dk Performing full character matching on key names in the key name matching pair, obtaining a key name matching pair, and matching k in the key name matching pair d And k u Corresponding v u Storing the defect detection model in a key value pair mode;
k d and k u Form key name matching pairs, k d Is C dk Key name, k of (a) u Is C uk Key name, v of (v) u Is a value in the trained self-supervising model value set.
10. A computer readable storage medium storing one or more programs, wherein the one or more programs comprise instructions, which when executed by a computing device, cause the computing device to perform any of the methods of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310430316.3A CN116452549A (en) | 2023-04-21 | 2023-04-21 | Simplified and unified contact net defect detection method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310430316.3A CN116452549A (en) | 2023-04-21 | 2023-04-21 | Simplified and unified contact net defect detection method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116452549A true CN116452549A (en) | 2023-07-18 |
Family
ID=87128435
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310430316.3A Pending CN116452549A (en) | 2023-04-21 | 2023-04-21 | Simplified and unified contact net defect detection method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116452549A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
He et al. | Defect detection of hot rolled steels with a new object detection framework called classification priority network | |
CN111861978A (en) | Bridge crack example segmentation method based on Faster R-CNN | |
CN114092832B (en) | High-resolution remote sensing image classification method based on parallel hybrid convolutional network | |
EP3767551A1 (en) | Inspection system, image recognition system, recognition system, discriminator generation system, and learning data generation device | |
CN110648310A (en) | Weak supervision casting defect identification method based on attention mechanism | |
JP2022027473A5 (en) | ||
CN113591948B (en) | Defect pattern recognition method and device, electronic equipment and storage medium | |
CN115439458A (en) | Industrial image defect target detection algorithm based on depth map attention | |
CN110599459A (en) | Underground pipe network risk assessment cloud system based on deep learning | |
CN111461121A (en) | Electric meter number identification method based on YO L OV3 network | |
CN112613428B (en) | Resnet-3D convolution cattle video target detection method based on balance loss | |
CN115019133B (en) | Method and system for detecting weak target in image based on self-training and tag anti-noise | |
CN113052103A (en) | Electrical equipment defect detection method and device based on neural network | |
CN115170816A (en) | Multi-scale feature extraction system and method and fan blade defect detection method | |
CN113313678A (en) | Automatic sperm morphology analysis method based on multi-scale feature fusion | |
CN115423796A (en) | Chip defect detection method and system based on TensorRT accelerated reasoning | |
CN116030050A (en) | On-line detection and segmentation method for surface defects of fan based on unmanned aerial vehicle and deep learning | |
CN116681961A (en) | Weak supervision target detection method based on semi-supervision method and noise processing | |
Diaz et al. | Fast detection of wind turbine blade damage using cascade mask r-dscnn-aided drone inspection analysis | |
CN116994161A (en) | Insulator defect detection method based on improved YOLOv5 | |
Artan et al. | Car damage analysis for insurance market using convolutional neural networks | |
CN116452549A (en) | Simplified and unified contact net defect detection method and system | |
CN116310596A (en) | Domain adaptation-based small sample target detection method for electric power instrument | |
CN116071544A (en) | Image description prediction method oriented to weak supervision directional visual understanding | |
CN113066049B (en) | MEMS sensor defect type identification method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |