CN111951212A - Method for identifying defects of contact network image of railway - Google Patents
- Publication number
- CN111951212A (application CN202010269412.0A)
- Authority
- CN
- China
- Prior art keywords
- image data
- image
- feature
- cnn model
- defect
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The invention provides a method for identifying defects in railway contact network (catenary) images. The method comprises the following steps: photographing the railway catenary to obtain a catenary image data set, and performing expansion processing on the image data set; constructing an improved Faster R-CNN model based on image pyramid feature fusion, and training the improved Faster R-CNN model with the expanded image data set to obtain a trained improved Faster R-CNN model; and performing defect target identification on the catenary image data to be identified with the trained Faster R-CNN model through a target detection algorithm to obtain the defect targets in the image data. Applying the improved Faster R-CNN algorithm to defect detection in railway catenary image data realizes automatic identification of defects in catenary components and improves detection efficiency. Photographing the railway catenary with an unmanned aerial vehicle yields catenary remote sensing images of good quality without affecting line operation, and avoids the incomplete data caused by catenary components occluding one another.
Description
Technical Field
The invention relates to the technical field of image data processing, in particular to a method for identifying defects of an image of a contact network of a railway.
Background
The contact network (catenary) is key equipment for ensuring the safe operation of trains. Owing to repeated swaying and vibration during train operation, catenary parts are easily damaged or even lost. At present, catenary detection mainly relies on manually reviewing large amounts of image data offline. However, with the large-scale construction of high-speed electrified railways, the number of photographs to be checked by manual visual inspection is huge and the detection efficiency is low. The cameras mounted on inspection vehicles usually shoot at night, so the images obtained are of poor quality and some defects are missed. Photographing the railway catenary with an unmanned aerial vehicle yields high-quality catenary images and avoids the missed detections caused by the shooting angle. Automatic detection of catenary defects with a deep learning target detection algorithm is of great significance for improving the accuracy and efficiency of catenary inspection.
With the development of image processing technology in recent years, some researchers have proposed detection methods based on traditional image processing for finding defects in railway catenary image data. Although these methods can detect appearance defects of the railway catenary to a certain extent, they are strongly influenced by the surrounding environment and do not reach the desired precision or speed.
Disclosure of Invention
The embodiment of the invention provides a method for identifying defects of an image of a contact network of a railway, which aims to overcome the problems in the prior art.
In order to achieve the purpose, the invention adopts the following technical scheme.
A method of defect identification of an image of a catenary of a railway, comprising:
photographing a railway contact network to obtain an image data set of the railway contact network, and performing expansion processing on the image data set;
constructing an improved Faster R-CNN model based on image pyramid feature fusion, and training the improved Faster R-CNN model by using an image data set after expansion processing to obtain a trained improved Faster R-CNN model;
and carrying out defect target identification on the image data of the contact network to be identified by using the trained Faster R-CNN model through a target detection algorithm, and acquiring a defect target in the image data of the contact network to be identified.
Preferably, the photographing of the railway contact network, obtaining an image data set of the railway contact network, and performing the expansion processing on the image data set includes:
shooting a railway contact network by using a mode that an unmanned aerial vehicle carries a camera, acquiring an image data set of the railway contact network, and performing image expansion processing on the image data set, wherein the image expansion processing comprises at least one of adjusting the contrast, brightness, mirror image, rotating the image angle and increasing noise of an image, and the image data set after the expansion processing is divided into a training set, a verification set and a test set;
the image data in the training set is uniformly adjusted to 1000X1500 pixels, defects in the image data in the training set are marked out by rectangular frames by using labelImage software, rectangular frame image data including image names, defect coordinates and defect types are generated, a plurality of anchor frame image data are generated by a kmean algorithm according to the rectangular frame image data, and the size and the length-width ratio of an anchor frame are set, wherein the anchor frame is used for predicting defect position information in the image data.
Preferably, the size of the anchor frame is 32 × 32, 64 × 64, 128 × 128, 256 × 256, 512 × 512, and the length-width ratio is 1:2, 1:1, 2:1, respectively.
Preferably, the constructing of the improved Faster R-CNN model based on the image pyramid feature fusion, and the training of the improved Faster R-CNN model by using the extended image data set to obtain the trained improved Faster R-CNN model, includes:
constructing an improved Faster R-CNN model based on image pyramid feature fusion, taking ResNet101 as the feature extraction network in the improved Faster R-CNN model, inputting the image data in the training set into the ResNet101 network, extracting two feature fusion maps, top-down and bottom-up, of the image data with the ResNet101 network, and obtaining the anchor frame data generated on the feature fusion maps according to the two feature fusion maps and the set anchor frame sizes and aspect ratios;
inputting the anchor frame data into an RPN (region proposal network), carrying out two-classification processing on the anchor frame data through a loss function by the RPN, screening out a foreground frame serving as a target object, wherein the target object is a defect in image data of a contact network of a railway, and synthesizing coordinate offset of the foreground frame and the anchor frame to obtain an area suggestion frame;
selecting a set number of area suggestion frames according to the confidence coefficient of each area suggestion frame from large to small, removing redundant detection of the area suggestion frames through an NMS non-maximum suppression algorithm, reserving the set number of area suggestion frames as detection frames, performing pooling operation on all the detection frames by using a Tensorflow frame to obtain a 7 x 7 feature map of the detection frames, inputting the feature map into a full-connection layer to synthesize all scale features, outputting a feature map with a dimensionality of 1024, obtaining the probability of each defect type through a first full-connection layer according to the feature map with the dimensionality of 1024, and realizing position adjustment of the detection frames of the defects through a second full-connection layer;
the improved Faster R-CNN model is obtained through the above operations; the improved Faster R-CNN model is trained with the image data in the training set, the training results are compared with the results on the image data in the verification set, and the parameters of the improved Faster R-CNN model are adjusted to obtain the trained improved Faster R-CNN model.
Preferably, the inputting of the image data in the training set into the ResNet101 network and the extraction of the top-down and bottom-up feature fusion maps of the image data by the ResNet101 network include:
inputting the image data in the training set into a ResNet101 network, wherein ResNet101 has a multilayer structure; feature maps (C2, C3, C4 and C5) are extracted in turn from the residual blocks of the ResNet101 network, whose numbers are 3, 4, 23 and 3; the feature maps of higher layers carry more semantic information and less image detail information, while the feature maps of lower layers carry more detail information and less semantic information; the feature map extracted from the topmost layer of the ResNet101 network is brought to the same size as the next lower layer by bilinear interpolation, the lower-layer feature map is brought to the same dimension as the upper-layer feature map by a 1X1 convolution, and new feature maps (P2, P3, P4 and P5) are formed in turn by superposing and fusing the upper-layer and lower-layer feature maps, forming a top-down feature fusion path based on the feature pyramid;
the feature map extracted from the bottommost layer of the ResNet101 network passes through a 3X3 convolution kernel with stride 2 to form a feature map of the same scale as the layer above, and new feature maps (N2, N3, N4 and N5) are formed in turn by superposing and fusing the lower-layer and upper-layer feature maps, forming a bottom-up feature fusion path based on the feature pyramid;
two feature fusion maps, top-down and bottom-up, are obtained through the above processing, and their dimensions are unified to the same value of 256.
Preferably, the identifying of defect targets in the catenary image data to be identified with the trained Faster R-CNN model through a target detection algorithm to obtain the defect targets in the catenary image data to be identified includes:
utilizing the target detection algorithm in the improved Faster R-CNN model to identify defect targets in the catenary image data to be subjected to defect identification, recording the overlapping rate of two adjacent detection frames as IOU_result and setting its threshold value to 0.5; when IOU_result > 0.5, judging that the detected catenary image data contain the defect target;
obtaining the type of each prediction frame as a true positive TP, false positive FP, true negative TN or false negative FN according to the defect detection results on the catenary image data and the actual defect condition of the railway line corresponding to the image data; calculating the values of the performance evaluation indexes of the improved Faster R-CNN model, namely the precision P, the recall R, the average precision AP and the mean average precision mAP, from the TP, FP, TN and FN data; and adjusting the parameters of the improved Faster R-CNN model according to the values of P, R, AP and mAP.
According to the technical solution provided by the embodiment of the invention, the unmanned aerial vehicle is used to photograph the railway contact network, so that catenary remote sensing images of good quality can be obtained without affecting line operation. Because the shooting angle can be changed, the incomplete data caused by catenary components occluding one another can be avoided. Applying the improved Faster R-CNN algorithm to defect detection in railway catenary image data realizes automatic identification of defects in railway catenary components and improves detection efficiency.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a processing flow chart of a method for identifying defects in an image of a catenary of a railway according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a railway catenary shot by an unmanned aerial vehicle according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a model improvement proposed in the embodiment of the present invention;
FIG. 4 is a schematic diagram of an improved anchor frame according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a target detection cross-over ratio according to an embodiment of the present invention;
fig. 6 is a schematic diagram illustrating convergence of a loss curve training process according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
For the convenience of understanding the embodiments of the present invention, the following description will be further explained by taking several specific embodiments as examples in conjunction with the drawings, and the embodiments are not to be construed as limiting the embodiments of the present invention.
With the advent of the big data era, convolutional neural networks show a great advantage in image target detection and recognition. The Faster R-CNN algorithm is a two-stage target detection algorithm based on a convolutional neural network: in the first stage, the algorithm extracts image features through the convolutional neural network and selects region proposal candidate boxes; in the second stage, it classifies the candidate boxes and adjusts their positions. The method integrates feature extraction, proposal selection, coordinate regression and classification into one network, so that the comprehensive performance is greatly improved. Applying the Faster R-CNN algorithm to catenary images can realize the coordinate positioning and classification of defective catenary components.
In order to solve the above problems, the embodiment of the invention provides a catenary component detection method based on an improved Faster R-CNN algorithm. Inspired by the feature pyramid network algorithm, a new feature extraction method based on the two-stage target detection algorithm is proposed. By adding top-down and bottom-up feature fusion paths based on the feature pyramid, the obtained features carry both high-level semantic information and bottom-level position information, and the effectiveness of the algorithm is verified through experiments. To handle the influence of the different sizes of the target data on coordinate regression, the kmeans algorithm is applied to preprocess the size and aspect ratio of the anchor frames. To avoid an overly complex model and increase the detection speed, the dimension of the feature maps after feature fusion is fixed to 256. One fully connected layer is removed from the head of the Faster R-CNN algorithm, and the output dimension of the remaining fully connected layer is set to 1024. The invention realizes defect detection of the catenary for small targets, with high detection accuracy and clear practical value.
The embodiment of the invention provides a method for inputting an image of an unmanned aerial vehicle contact network into a target detection network, extracting features through a convolution network, completing the task of positioning and classifying defect coordinates, and realizing automatic detection of contact network defects.
The processing flow of the method for identifying the defects of the image of the contact network of the railway provided by the embodiment of the invention is shown in fig. 1, and comprises the following processing steps:
and S1, acquiring an image of the contact network of the railway by the unmanned aerial vehicle-mounted camera.
Fig. 2 is a schematic diagram of a railway catenary shot by an unmanned aerial vehicle according to an embodiment of the invention. A Zenmuse Z30 aerial camera (produced by DJI) mounted on a DJI Matrice 600 unmanned aerial vehicle photographs the railway overhead contact system from the air at a distance of 30-70 m from the rail surface to obtain catenary image data. Six intelligent batteries extend the flight time of the unmanned aerial vehicle; an application programming interface (API) control function is built in, the central frame can be expanded, and the maximum takeoff weight is 15.1 kg. For visual detection tasks the camera provides 30x optical zoom and 6x digital zoom with high-definition 1080P video, so the DJI Matrice 600 equipped with the Zenmuse Z30 can supply different optical zoom factors for various visual inspection tasks. The railway catenary is photographed with the camera carried by the unmanned aerial vehicle to obtain the railway catenary image data set.
Step S2, carrying out expansion processing on the railway catenary image data set.
Image expansion processing is carried out on the catenary image data by adjusting the contrast and brightness of the images, mirroring them, rotating the image angle, adding noise, and so on. The expanded catenary image data are divided into a training set, a verification set and a test set in the ratio 4:3:3, where the test set is the catenary image data to be subjected to defect identification.
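As a concrete illustration of this expansion step, the following is a minimal sketch using OpenCV and NumPy; the function name, the adjustment factors and the noise level are assumptions for illustration, not values taken from the text.

```python
import cv2
import numpy as np

def expand_image(img: np.ndarray) -> list:
    """Generate augmented copies of a catenary image by contrast/brightness
    adjustment, mirroring, rotation and additive noise (illustrative values)."""
    h, w = img.shape[:2]
    out = []
    # contrast and brightness adjustment
    out.append(cv2.convertScaleAbs(img, alpha=1.3, beta=20))
    # horizontal mirror
    out.append(cv2.flip(img, 1))
    # rotate by a small angle around the image centre
    m = cv2.getRotationMatrix2D((w / 2, h / 2), 10, 1.0)
    out.append(cv2.warpAffine(img, m, (w, h)))
    # additive Gaussian noise
    noise = np.random.normal(0, 10, img.shape).astype(np.float32)
    out.append(np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8))
    return out
```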
The image data in the training set are uniformly adjusted to 1000X1500 pixels, the defects in the image data in the training set are marked out with rectangular frames using labelImage software, and rectangular frame image data containing image names, defect coordinates and defect types are generated. Then, a plurality of anchor frame image data are generated by the kmeans algorithm from the rectangular frame image data. Because of the different shooting angles, shooting parameters and defect types, the aspect ratios and sizes of the anchor frames differ greatly, so the size and aspect ratio of the anchor frames need to be set. The size and aspect ratio of the prior anchor frames are calculated with the kmeans algorithm. In order to make the anchor frames cover all the target defects as far as possible, the anchor frame sizes can be 32 × 32, 64 × 64, 128 × 128, 256 × 256 and 512 × 512 with aspect ratios of 0.5, 1 and 2, giving 15 anchor frames in total for predicting the defect positions, and the expanded catenary image training set is input into the detection network. The flow of the kmeans algorithm for determining the anchor frames is as follows:
Step S3, constructing an improved Faster R-CNN model based on image pyramid feature fusion, and training the improved Faster R-CNN model to obtain the trained improved Faster R-CNN model.
FIG. 3 is a block diagram of an improved Faster R-CNN model according to an embodiment of the present invention. The embodiment of the invention constructs an improved Faster R-CNN model, in the improved Faster R-CNN model, feature graphs of different layers of convolutional neural networks are fused by using a feature fusion technology, and a region extraction network RPN performs region extraction based on an improved anchor frame.
The improved Faster R-CNN model is constructed as follows. ResNet101 is used as the feature extraction network, and the image data in the training set are input into the ResNet101 network. ResNet101 has a multilayer structure; feature maps (C2, C3, C4 and C5) are extracted in turn from its residual blocks, whose numbers are 3, 4, 23 and 3. As the number of network layers increases, the feature maps of higher layers have more semantic information but less image detail information, while the feature maps of lower layers have more detail information but less semantic information. The detail information is important for small target detection. The feature map extracted from the top layer of the ResNet101 network is brought to the same size as the next lower layer by bilinear interpolation, the feature map of the lower layer is brought to the same dimension as the upper-layer feature map by a 1X1 convolution, and new feature maps (P2, P3, P4 and P5) are formed in turn by superposing and fusing the upper-layer and lower-layer feature maps, forming a top-down feature fusion path based on the feature pyramid. Through the top-down feature fusion path, the bottom-layer feature maps gain more semantic information. Then the feature map extracted from the lowest layer of the ResNet101 network passes through a 3X3 convolution kernel with stride 2 to form a feature map of the same scale as the layer above, and new feature maps (N2, N3, N4 and N5) are formed in turn by superposing and fusing the lower-layer and upper-layer feature maps, forming a bottom-up feature fusion path based on the feature pyramid. Through the bottom-up feature fusion path, the top-layer feature maps gain more detail information. The dimensions of the feature maps after feature fusion are unified again to the same size of 256.
The above processing yields two feature fusion maps, one top-down and one bottom-up.
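A minimal sketch of these two fusion paths written with tf.keras layers (the Tensorflow framework is mentioned later in the text); the hook points into ResNet101, the layer arguments and the function name are assumptions for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_fusion_maps(c2, c3, c4, c5, dim=256):
    """Build the top-down (P2-P5) and bottom-up (N2-N5) fusion maps from
    the ResNet101 stage outputs c2..c5 (finest to coarsest)."""
    # unify the channel dimension of the backbone maps with 1x1 convolutions
    l2, l3, l4, l5 = [layers.Conv2D(dim, 1, padding="same")(c) for c in (c2, c3, c4, c5)]

    # top-down path: bilinearly upsample the coarser map and fuse with the finer one
    p5 = l5
    p4 = layers.Add()([l4, layers.UpSampling2D(2, interpolation="bilinear")(p5)])
    p3 = layers.Add()([l3, layers.UpSampling2D(2, interpolation="bilinear")(p4)])
    p2 = layers.Add()([l2, layers.UpSampling2D(2, interpolation="bilinear")(p3)])

    # bottom-up path: stride-2 3x3 convolution of the finer map fused with the coarser one
    n2 = p2
    n3 = layers.Add()([p3, layers.Conv2D(dim, 3, strides=2, padding="same")(n2)])
    n4 = layers.Add()([p4, layers.Conv2D(dim, 3, strides=2, padding="same")(n3)])
    n5 = layers.Add()([p5, layers.Conv2D(dim, 3, strides=2, padding="same")(n4)])
    return (p2, p3, p4, p5), (n2, n3, n4, n5)
```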
Anchor frame data generated on the feature fusion maps are then obtained from the two top-down and bottom-up feature fusion maps extracted by the ResNet101 network and from the set anchor frame sizes and aspect ratios.
Fig. 4 is a schematic diagram of an improved anchor frame according to an embodiment of the present invention. The RPN (Region Proposal Network) is a two-class network within the target detection network, which separates target objects from non-target objects. An anchor frame whose IOU (intersection-over-union) with the defect-labeled rectangular frame is greater than 0.7 is a positive sample, and one whose IOU is less than 0.3 is a negative sample. Anchor frames that are neither positive nor negative samples do not participate in RPN training. Before training the RPN, each anchor frame is assigned a binary label indicating whether it is a target object or background: it is marked 1 if the anchor frame is a positive sample and 0 otherwise. The loss function of the RPN is defined as:
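In the standard Faster R-CNN formulation, which is consistent with the definitions given below, this loss takes the form

$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^{*}) + \lambda\,\frac{1}{N_{reg}}\sum_i p_i^{*}\,L_{reg}(t_i, t_i^{*}),$$

where $N_{cls}$ and $N_{reg}$ are the normalization terms over the mini-batch and the anchor locations.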
where i is the index of an anchor frame in a mini-batch, p_i is the predicted probability that the i-th anchor frame is a defect, and p_i* is 1 if the anchor frame is a positive sample and 0 if it is a negative sample; t_i denotes the 4 coordinate values of the predicted frame and t_i* the 4 coordinate values of the positive sample; L_cls is the binary classification loss function, L_reg is the regression loss function, and λ is a scaling factor between the classification loss and the regression loss, set to 1.
The RPN loss function includes a two-class classification loss function and a coordinate regression loss function. Together these two loss functions separate the target objects from the non-target objects in the image and provide foreground targets for the subsequent classification and coordinate fine-tuning of the target objects. The anchor frame data are input into the RPN (region proposal network), which performs two-class classification on the anchor frame data through the above loss function and screens out the foreground frames serving as target objects, the target objects being defects in the railway catenary image data. The coordinate offsets of the foreground frames and the anchor frame data are then combined to obtain the region proposal frames.
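For illustration, a minimal sketch of how predicted coordinate offsets are combined with anchor frames to obtain proposal boxes, using the box parameterization standard for Faster R-CNN; the function and variable names are assumptions.

```python
import numpy as np

def decode_proposals(anchors: np.ndarray, deltas: np.ndarray) -> np.ndarray:
    """Apply predicted offsets (dx, dy, dw, dh) to anchors given as
    (x1, y1, x2, y2) and return region proposal boxes in the same format."""
    aw = anchors[:, 2] - anchors[:, 0]
    ah = anchors[:, 3] - anchors[:, 1]
    ax = anchors[:, 0] + 0.5 * aw
    ay = anchors[:, 1] + 0.5 * ah

    cx = deltas[:, 0] * aw + ax          # shift the anchor centre
    cy = deltas[:, 1] * ah + ay
    w = np.exp(deltas[:, 2]) * aw        # rescale width and height
    h = np.exp(deltas[:, 3]) * ah

    return np.stack([cx - 0.5 * w, cy - 0.5 * h,
                     cx + 0.5 * w, cy + 0.5 * h], axis=1)
```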
Each region proposal frame has a confidence score used to judge whether it contains a target object. 12000 region proposal frames are selected in descending order of confidence, redundant detections among them are removed with the NMS (non-maximum suppression) algorithm, and at most 2000 region proposal frames are retained as detection frames. The NMS algorithm is as follows:
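A minimal sketch of standard greedy non-maximum suppression over scored boxes; the IoU threshold and function name are illustrative assumptions.

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.7) -> list:
    """Greedy NMS: keep the highest-scoring box, drop boxes whose IoU with it
    exceeds iou_thresh, and repeat on the remainder."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]          # indices sorted by descending confidence
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU between the kept box and all remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]  # keep only sufficiently distinct boxes
    return keep
```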
pooling was performed on all detection frames using a Tensorflow framework. In the Tensorflow framework, a feature map of 14 × 14 is formed by using Roi posing (pooling of test frames) for each test frame, a feature map of 7 × 7 is formed by maximum pooling, and the sizes of feature vectors are unified. In order to reduce the head calculation complexity of the Faster R-CNN algorithm, 7 × 7 feature maps are input into a full-connection layer to synthesize all scale features, and a feature map with the dimension of 1024 is output. And classifying the defect types and positioning the defect positions in the image data through two fully-connected layers according to the feature diagram with the dimension of 1024. The first full-connection layer outputs defect categories, and the output dimension is the number of categories; the second full connection outputs the offset of the defect coordinates, i.e. the four point coordinates of the defect rectangular box, so the output dimension is 4 x the number of classes.
The improved Faster R-CNN model is obtained through the above operations. The improved Faster R-CNN model is trained with the image data in the training set, the training results are compared with the results on the image data in the verification set, and the parameters of the improved Faster R-CNN model are adjusted to obtain the trained improved Faster R-CNN model. Fig. 6 is a schematic diagram of the convergence of the loss curve during training according to an embodiment of the present invention, i.e. a curve plotting the change of the loss function during training.
Step S4, realizing defect target detection of catenary components based on the improved Faster R-CNN algorithm.
Defect target identification is carried out on the catenary image data to be subjected to defect identification by using the target detection algorithm in the improved Faster R-CNN model. Whether a defect target has been detected is judged with the help of the overlap rate used in the target detection algorithm to remove redundant detection frames (NMS), and the evaluation of a detection frame is related to this overlap rate. Many redundant frames are generated for each class of objects in the image, and they need to be removed. In the target detection algorithm the distance between two frames is usually expressed by the IOU, the intersection-over-union of the two frames (the area of their intersection divided by the area of their union). If the IOU is larger than a threshold, for example greater than 0.7, the two frames are considered to overlap too much and one of them is redundant; which frame to delete is then chosen according to the confidence of each frame.
Fig. 5 is a schematic diagram of the target detection intersection-over-union according to an embodiment of the present invention. The overlapping rate of two adjacent detection frames is recorded as IOU_result, and its threshold is set to 0.5; when IOU_result > 0.5, it is judged that the detected catenary image data contain a defect target. According to the defect detection results on the catenary image data and the actual defect condition of the railway line corresponding to the image data, the type of each prediction frame is obtained as a true positive (TP), false positive (FP), true negative (TN) or false negative (FN).
True positive (TP): the actual value is true and the predicted value is true
False positive (FP): the actual value is false and the predicted value is true
True negative (TN): the actual value is false and the predicted value is false
False negative (FN): the actual value is true and the predicted value is false
The performance evaluation indexes of the improved Faster R-CNN model, namely the precision P, the recall R, the average precision AP and the mean average precision mAP, are calculated from the true positive (TP), false positive (FP), true negative (TN) and false negative (FN) data, and the parameters of the improved Faster R-CNN model are adjusted according to the values of P, R, AP and mAP.
The precision, recall and AP values are calculated as follows:
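Using the standard definitions consistent with the text:

$$P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}, \qquad AP = \int_0^1 P(R)\,dR, \qquad mAP = \frac{1}{C}\sum_{c=1}^{C} AP_c,$$

where C is the number of defect categories and AP_c is the average precision of category c.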
This embodiment was validated on an unmanned aerial vehicle catenary image data set collected along a corridor section of the Jinghu (Beijing-Shanghai) railway and on the public Pascal VOC2007 data set. The number of catenary images shot by the unmanned aerial vehicle on this railway corridor section is 500. The data set is expanded to 2000 images by traditional image processing means, and the 2000 images are divided into a training set, a verification set and a test set in the ratio 4:3:3. The original images are resized to 1500X1000 to reduce the memory occupied by the model. According to the detected positions, the detection categories are divided into flat-cantilever bird cover loosening, inclined-cantilever bird cover loosening, normal bird cover, split pin, U-shaped pipe and joint. The features of the input picture are extracted through the ResNet101 network, and the semantic information of the top layers and the position information of the bottom layers are fused on the feature maps based on the feature pyramid. The anchor frames generated from the feature maps are input into the RPN; an anchor frame whose IOU with the defect-labeled rectangular frame is greater than 0.7 is a positive sample, and one whose IOU is less than 0.3 is a negative sample. Anchor frames that are neither positive nor negative samples do not participate in RPN training. Before training the RPN, each anchor frame is assigned a binary label indicating whether it is a target object or background: the label is 1 if the anchor frame is a positive sample, otherwise 0. The RPN layer parameters are randomly initialized with mean 0 and standard deviation 0.01. The RPN randomly selects 256 detection frames as a batch, with positive and negative samples in the ratio 1:1. The feature maps are input into the RoI pooling layer and fixed to a size of 7 x 7, multi-dimensional feature map information is fused through a fully connected layer with output dimension 1024, and two fully connected layers then perform classification and position regression of the detection frames respectively. End-to-end training is achieved through back-propagation and the stochastic gradient descent algorithm.
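A minimal sketch of this training setup, using the hyper-parameters reported in the experiments below (stochastic gradient descent, learning rate 0.001, momentum 0.9, 80000 steps); the model object and the combined loss function are placeholders, not part of the text.

```python
import tensorflow as tf

# SGD with momentum, matching the reported hyper-parameters
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)

TRAIN_STEPS = 80_000   # total number of training steps

@tf.function
def train_step(model, images, targets, loss_fn):
    """One gradient step; `model`, `targets` and `loss_fn` stand in for the
    improved Faster R-CNN network and its combined RPN/head losses."""
    with tf.GradientTape() as tape:
        predictions = model(images, training=True)
        loss = loss_fn(targets, predictions)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```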
In the experiments, different algorithms are compared on the Pascal VOC2007 data set, which contains 5011 training images and 4952 test images in 20 categories. The experimental results are as follows.
TABLE 1 Pascal VOC2007 data set algorithm comparison results
On the catenary data set, the experiments compare traditional target detectors such as HOG + SVM with deep learning target detection networks such as the two-stage Faster R-CNN, the FPN-based Faster R-CNN and the one-stage YoloV3, using different feature extraction networks such as VGG16, ResNet50 and ResNet101. The number of network training steps is 80000, the stochastic gradient descent algorithm is used with the learning rate set to 0.001 and momentum 0.9, the defect detection threshold is set to 0.6, and the evaluation index is mAP.
Table 2 comparison results of data set algorithm for unmanned aerial vehicle railway contact network
According to the test results, the improved Faster R-CNN algorithm reaches an mAP of 81.2 on the railway data set, an improvement of 5.7%, and an mAP of 80.1 on the VOC2007 data set, an improvement of 5.5%. The method therefore achieves a good result on catenary defect detection and has clear practical application value.
In conclusion, the embodiment of the invention uses an unmanned aerial vehicle to photograph the railway catenary, so that catenary remote sensing images of good quality can be obtained without affecting line operation. Because the shooting angle can be changed, the incomplete data caused by catenary components occluding one another can be avoided.
The improved Faster R-CNN algorithm is applied to the defect detection of the image data of the railway contact network, so that the automatic identification of the defects of the railway contact network components is realized, and the detection efficiency is improved. Compared with the prior art, the invention is superior to Faster R-CNN in the aspects of training speed and detection accuracy, and has certain application value.
The invention provides a method for detecting multiple small-target defects of the catenary, overcoming the shortcomings of low classification precision for small targets and inaccurate bounding-box regression. The conventional Faster R-CNN maps the candidate regions directly onto the output feature map of the last convolution layer; because of the pooling layers in the convolution process, the feature map produced by the later convolution layers has a low resolution, so a small target is easily lost. In the invention, top-down and bottom-up feature map fusion paths based on the feature pyramid are added before the input to the fully connected layer, so that the feature map obtains deeper semantic information as well as detail information, which gives better accuracy. The improved Faster R-CNN algorithm reduces the feature loss caused by pooling and convolution and can improve the recognition accuracy of small targets.
In the invention, the dimensions of the anchor frames are modified according to the defect coordinate statistics of the experimental data set; the improved anchor frames fit the catenary defect detection data set better and reduce the target loss caused by unsuitable anchor frame proportions. A feature map of reduced dimension and a single fully connected layer are used together, which lowers the computational complexity of the model head and improves the detection efficiency of the model.
Those of ordinary skill in the art will understand that: the figures are merely schematic representations of one embodiment, and the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
From the above description of the embodiments, it is clear to those skilled in the art that the present invention can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The embodiments in this specification are described in a progressive manner; the same and similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the apparatus and system embodiments are substantially similar to the method embodiments, so they are described relatively simply; for relevant points, reference may be made to the partial descriptions of the method embodiments. The above-described apparatus and system embodiments are merely illustrative; the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (6)
1. A method for identifying defects of an image of a contact network of a railway is characterized by comprising the following steps:
photographing a railway contact network to obtain an image data set of the railway contact network, and performing expansion processing on the image data set;
constructing an improved Faster R-CNN model based on image pyramid feature fusion, and training the improved Faster R-CNN model by using an image data set after expansion processing to obtain a trained improved Faster R-CNN model;
and carrying out defect target identification on the image data of the contact network to be identified by using the trained Faster R-CNN model through a target detection algorithm, and acquiring a defect target in the image data of the contact network to be identified.
2. The method of claim 1, wherein the photographing of the railway catenary is performed to obtain an image data set of the railway catenary, and the expanding of the image data set comprises:
shooting a railway contact network by using a mode that an unmanned aerial vehicle carries a camera, acquiring an image data set of the railway contact network, and performing image expansion processing on the image data set, wherein the image expansion processing comprises at least one of adjusting the contrast, brightness, mirror image, rotating the image angle and increasing noise of an image, and the image data set after the expansion processing is divided into a training set, a verification set and a test set;
the image data in the training set is uniformly adjusted to 1000X1500 pixels, defects in the image data in the training set are marked out with rectangular frames using labelImage software, rectangular frame image data including image names, defect coordinates and defect types are generated, a plurality of anchor frame image data are generated by the kmeans algorithm according to the rectangular frame image data, and the size and the length-width ratio of an anchor frame are set, wherein the anchor frame is used for predicting defect position information in the image data.
3. The method of claim 2, wherein the anchor frame is sized 32 x 32, 64 x 64, 128 x 128, 256 x 256, 512 x 512 with a ratio of length to width of 1:2, 1:1, 2:1, respectively.
4. The method according to claim 2 or 3, wherein the improved Faster R-CNN model is constructed based on image pyramid feature fusion, and the trained improved Faster R-CNN model is obtained by training the improved Faster R-CNN model with the extended image data set, and the method comprises the following steps:
constructing an improved Faster R-CNN model based on image pyramid feature fusion, taking ResNet101 as the feature extraction network in the improved Faster R-CNN model, inputting the image data in the training set into the ResNet101 network, extracting two feature fusion maps, top-down and bottom-up, of the image data with the ResNet101 network, and obtaining the anchor frame data generated on the feature fusion maps according to the two feature fusion maps and the set anchor frame sizes and aspect ratios;
inputting the anchor frame data into an RPN (region proposal network), carrying out two-classification processing on the anchor frame data through a loss function by the RPN, screening out a foreground frame serving as a target object, wherein the target object is a defect in image data of a contact network of a railway, and synthesizing coordinate offset of the foreground frame and the anchor frame to obtain an area suggestion frame;
selecting a set number of area suggestion frames according to the confidence coefficient of each area suggestion frame from large to small, removing redundant detection of the area suggestion frames through an NMS non-maximum suppression algorithm, reserving the set number of area suggestion frames as detection frames, performing pooling operation on all the detection frames by using a Tensorflow frame to obtain a 7 x 7 feature map of the detection frames, inputting the feature map into a full-connection layer to synthesize all scale features, outputting a feature map with a dimensionality of 1024, obtaining the probability of each defect type through a first full-connection layer according to the feature map with the dimensionality of 1024, and realizing position adjustment of the detection frames of the defects through a second full-connection layer;
the improved Faster R-CNN model is obtained through the above operations; the improved Faster R-CNN model is trained with the image data in the training set, the training results are compared with the results on the image data in the verification set, and the parameters of the improved Faster R-CNN model are adjusted to obtain the trained improved Faster R-CNN model.
5. The method according to claim 4, wherein the inputting of the image data in the training set into the ResNet101 network and the extraction of the top-down and bottom-up feature fusion maps of the image data by the ResNet101 network comprise:
inputting the image data in the training set into a ResNet101 network, wherein ResNet101 has a multilayer structure; feature maps (C2, C3, C4 and C5) are extracted in turn from the residual blocks of the ResNet101 network, whose numbers are 3, 4, 23 and 3; the feature maps of higher layers carry more semantic information and less image detail information, while the feature maps of lower layers carry more detail information and less semantic information; the feature map extracted from the topmost layer of the ResNet101 network is brought to the same size as the next lower layer by bilinear interpolation, the lower-layer feature map is brought to the same dimension as the upper-layer feature map by a 1X1 convolution, and new feature maps (P2, P3, P4 and P5) are formed in turn by superposing and fusing the upper-layer and lower-layer feature maps, forming a top-down feature fusion path based on the feature pyramid;
the feature map extracted from the bottommost layer of the ResNet101 network passes through a 3X3 convolution kernel with stride 2 to form a feature map of the same scale as the layer above, and new feature maps (N2, N3, N4 and N5) are formed in turn by superposing and fusing the lower-layer and upper-layer feature maps, forming a bottom-up feature fusion path based on the feature pyramid;
two feature fusion maps, top-down and bottom-up, are obtained through the above processing, and their dimensions are unified to the same value of 256.
6. The method according to claim 5, wherein the step of performing defect target recognition on the image data of the catenary to be recognized through a target detection algorithm by using the trained Faster R-CNN model to obtain a defect target in the image data of the catenary to be recognized comprises the steps of:
utilizing the target detection algorithm in the improved Faster R-CNN model to identify defect targets in the catenary image data to be subjected to defect identification, recording the overlapping rate of two adjacent detection frames as IOU_result and setting its threshold value to 0.5; when IOU_result > 0.5, judging that the detected catenary image data contain the defect target;
obtaining the type of each prediction frame as a true positive TP, false positive FP, true negative TN or false negative FN according to the defect detection results on the catenary image data and the actual defect condition of the railway line corresponding to the image data; calculating the values of the performance evaluation indexes of the improved Faster R-CNN model, namely the precision P, the recall R, the average precision AP and the mean average precision mAP, from the TP, FP, TN and FN data; and adjusting the parameters of the improved Faster R-CNN model according to the values of P, R, AP and mAP.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010269412.0A CN111951212A (en) | 2020-04-08 | 2020-04-08 | Method for identifying defects of contact network image of railway |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010269412.0A CN111951212A (en) | 2020-04-08 | 2020-04-08 | Method for identifying defects of contact network image of railway |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111951212A true CN111951212A (en) | 2020-11-17 |
Family
ID=73337075
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010269412.0A Pending CN111951212A (en) | 2020-04-08 | 2020-04-08 | Method for identifying defects of contact network image of railway |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111951212A (en) |
- 2020-04-08: application CN202010269412.0A filed in CN; published as CN111951212A (en); legal status: active, Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109767427A (en) * | 2018-12-25 | 2019-05-17 | 北京交通大学 | The detection method of train rail fastener defect |
CN109829893A (en) * | 2019-01-03 | 2019-05-31 | 武汉精测电子集团股份有限公司 | A kind of defect object detection method based on attention mechanism |
CN110555842A (en) * | 2019-09-10 | 2019-12-10 | 太原科技大学 | Silicon wafer image defect detection method based on anchor point set optimization |
CN110827251A (en) * | 2019-10-30 | 2020-02-21 | 江苏方天电力技术有限公司 | Power transmission line locking pin defect detection method based on aerial image |
CN110853015A (en) * | 2019-11-12 | 2020-02-28 | 中国计量大学 | Aluminum profile defect detection method based on improved Faster-RCNN |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112819748B (en) * | 2020-12-16 | 2023-09-19 | 机科发展科技股份有限公司 | Training method and device for strip steel surface defect recognition model |
CN112819748A (en) * | 2020-12-16 | 2021-05-18 | 机科发展科技股份有限公司 | Training method and device for strip steel surface defect recognition model |
CN112489040A (en) * | 2020-12-17 | 2021-03-12 | 哈尔滨市科佳通用机电股份有限公司 | Truck auxiliary reservoir falling fault identification method |
CN112686190A (en) * | 2021-01-05 | 2021-04-20 | 北京林业大学 | Forest fire smoke automatic identification method based on self-adaptive target detection |
CN112733747A (en) * | 2021-01-14 | 2021-04-30 | 哈尔滨市科佳通用机电股份有限公司 | Identification method, system and device for relieving falling fault of valve pull rod |
CN112767351A (en) * | 2021-01-19 | 2021-05-07 | 孙杨 | Transformer equipment defect detection method based on sensitive position dependence analysis |
CN112767351B (en) * | 2021-01-19 | 2024-04-16 | 孙杨 | Substation equipment defect detection method based on sensitive position dependence analysis |
CN112700442A (en) * | 2021-02-01 | 2021-04-23 | 浙江驿公里智能科技有限公司 | Die-cutting machine workpiece defect detection method and system based on Faster R-CNN |
CN113034446A (en) * | 2021-03-08 | 2021-06-25 | 国网山东省电力公司平邑县供电公司 | Automatic transformer substation equipment defect identification method and system |
CN113077431A (en) * | 2021-03-30 | 2021-07-06 | 太原理工大学 | Laser chip defect detection method, system, equipment and storage medium based on deep learning |
CN113052103A (en) * | 2021-03-31 | 2021-06-29 | 株洲时代电子技术有限公司 | Electrical equipment defect detection method and device based on neural network |
JP2023527615A (en) * | 2021-04-28 | 2023-06-30 | ベイジン バイドゥ ネットコム サイエンス テクノロジー カンパニー リミテッド | Target object detection model training method, target object detection method, device, electronic device, storage medium and computer program |
CN113095288A (en) * | 2021-04-30 | 2021-07-09 | 浙江吉利控股集团有限公司 | Obstacle missing detection repairing method, device, equipment and storage medium |
CN113269739A (en) * | 2021-05-19 | 2021-08-17 | 绍兴文理学院 | Quantitative detection method for wood knot defects |
CN113269739B (en) * | 2021-05-19 | 2024-02-27 | 绍兴文理学院 | Quantitative detection method for wood node defects |
CN113255589B (en) * | 2021-06-25 | 2021-10-15 | 北京电信易通信息技术股份有限公司 | Target detection method and system based on multi-convolution fusion network |
CN113255589A (en) * | 2021-06-25 | 2021-08-13 | 北京电信易通信息技术股份有限公司 | Target detection method and system based on multi-convolution fusion network |
CN113537045A (en) * | 2021-07-14 | 2021-10-22 | 宁夏大学 | Rock picture detection method based on improved FasterR-CNN |
CN113592839B (en) * | 2021-08-06 | 2023-01-13 | 广东电网有限责任公司 | Distribution network line typical defect diagnosis method and system based on improved fast RCNN |
CN113592839A (en) * | 2021-08-06 | 2021-11-02 | 广东电网有限责任公司 | Distribution network line typical defect diagnosis method and system based on improved fast RCNN |
CN113689392A (en) * | 2021-08-18 | 2021-11-23 | 北京理工大学 | Railway fastener defect detection method and device |
CN113989487A (en) * | 2021-10-20 | 2022-01-28 | 国网山东省电力公司信息通信公司 | Fault defect detection method and system for live-action scheduling |
CN115294451B (en) * | 2022-01-14 | 2023-04-28 | 中国铁路兰州局集团有限公司 | Method and device for detecting foreign matters on high-voltage line |
CN115294451A (en) * | 2022-01-14 | 2022-11-04 | 中国铁路兰州局集团有限公司 | Method and device for detecting foreign matters on high-voltage line |
CN114743119B (en) * | 2022-04-28 | 2024-04-09 | 石家庄铁道大学 | High-speed rail contact net hanger nut defect detection method based on unmanned aerial vehicle |
CN114743119A (en) * | 2022-04-28 | 2022-07-12 | 石家庄铁道大学 | High-speed rail contact net dropper nut defect detection method based on unmanned aerial vehicle |
CN115359307B (en) * | 2022-10-24 | 2023-01-03 | 成都诺比侃科技有限公司 | Big data-based overhead contact system wear-checking defect data management method and system |
CN115359307A (en) * | 2022-10-24 | 2022-11-18 | 成都诺比侃科技有限公司 | Contact network loss inspection defect data management method and system based on big data |
CN117474873A (en) * | 2023-11-03 | 2024-01-30 | 湖南派驰机械有限公司 | Surface treatment system before brazing of high-chromium wear-resistant castings |
CN117474873B (en) * | 2023-11-03 | 2024-04-09 | 湖南派驰机械有限公司 | Surface treatment system before brazing of high-chromium wear-resistant castings |
CN117197700A (en) * | 2023-11-07 | 2023-12-08 | 成都中轨轨道设备有限公司 | Intelligent unmanned inspection contact net defect identification system |
CN117197700B (en) * | 2023-11-07 | 2024-01-26 | 成都中轨轨道设备有限公司 | Intelligent unmanned inspection contact net defect identification system |
CN117690096A (en) * | 2024-02-04 | 2024-03-12 | 成都中轨轨道设备有限公司 | Contact net safety inspection system adapting to different scenes |
CN117690096B (en) * | 2024-02-04 | 2024-04-12 | 成都中轨轨道设备有限公司 | Contact net safety inspection system adapting to different scenes |
Similar Documents
Publication | Title |
---|---|
CN111951212A (en) | Method for identifying defects of contact network image of railway |
CN111767882B (en) | Multi-mode pedestrian detection method based on improved YOLO model |
CN110032962B (en) | Object detection method, device, network equipment and storage medium |
CN109447169B (en) | Image processing method, training method and device of model thereof and electronic system |
CN108334848B (en) | Tiny face recognition method based on generation countermeasure network |
CN111899227A (en) | Automatic railway fastener defect acquisition and identification method based on unmanned aerial vehicle operation |
CN113920107A (en) | Insulator damage detection method based on improved yolov5 algorithm |
CN112633149B (en) | Domain-adaptive foggy-day image target detection method and device |
CN109145747A (en) | A kind of water surface panoramic picture semantic segmentation method |
CN109376580B (en) | Electric power tower component identification method based on deep learning |
CN114399734A (en) | Forest fire early warning method based on visual information |
CN111126278A (en) | Target detection model optimization and acceleration method for few-category scene |
CN110991447B (en) | Train number accurate positioning and identifying method based on deep learning |
CN114049572A (en) | Detection method for identifying small target |
CN115546763A (en) | Traffic signal lamp identification network training method and test method based on visual ranging |
CN114495010A (en) | Cross-modal pedestrian re-identification method and system based on multi-feature learning |
CN113139896A (en) | Target detection system and method based on super-resolution reconstruction |
CN114332921A (en) | Pedestrian detection method based on improved clustering algorithm for Faster R-CNN network |
CN109919223A (en) | Object detection method and device based on deep neural network |
CN114677618A (en) | Accident detection method and device, electronic equipment and storage medium |
CN112347967B (en) | Pedestrian detection method fusing motion information in complex scene |
CN114550023A (en) | Traffic target static information extraction device |
CN117789077A (en) | Method for predicting people and vehicles for video structuring in general scene |
CN112613442A (en) | Video sequence emotion recognition method based on principle angle detection and optical flow conversion |
CN116824641A (en) | Gesture classification method, device, equipment and computer storage medium |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |