CN109034184B - Grading ring detection and identification method based on deep learning - Google Patents


Info

Publication number
CN109034184B
CN109034184B (application CN201810582294.1A)
Authority
CN
China
Prior art keywords
image
grading ring
detection
deep learning
identification method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810582294.1A
Other languages
Chinese (zh)
Other versions
CN109034184A
Inventor
卢胜标
夏良标
莫止范
冯鹏程
颜毓宏
陈健文
石英
韩西坪
庞统
刘晓伟
李德洋
Current Assignee
Wuhan Arover Technology Co ltd
Wuhan Jiaming Kaier Electrical Technology Development Co ltd
Yulin Power Supply Bureau of Guangxi Power Grid Co Ltd
Original Assignee
Wuhan Arover Technology Co ltd
Wuhan Jiaming Kaier Electrical Technology Development Co ltd
Yulin Power Supply Bureau of Guangxi Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Arover Technology Co ltd, Wuhan Jiaming Kaier Electrical Technology Development Co ltd, Yulin Power Supply Bureau of Guangxi Power Grid Co Ltd filed Critical Wuhan Arover Technology Co ltd
Priority to CN201810582294.1A
Publication of CN109034184A
Application granted
Publication of CN109034184B
Legal status: Active

Classifications

    • G06V 10/757: Matching configurations of points or features (Physics; Computing; Image or video recognition or understanding using pattern recognition or machine learning)
    • G06N 3/045: Combinations of networks (Computing arrangements based on specific computational models; Neural networks; Architecture)
    • G06N 3/08: Learning methods (Neural networks)
    • G06N 3/084: Backpropagation, e.g. using gradient descent

Abstract

The invention discloses a grading ring detection and identification method based on deep learning, relating to the field of power equipment maintenance and comprising the steps of grading ring image sample preprocessing, feature extraction, detection model training, and grading ring component detection. The method can meet grading ring detection and identification requirements against complex backgrounds and at non-specific shooting angles, and achieves good results in practice.

Description

Grading ring detection and identification method based on deep learning
Technical Field
The invention relates to the technical field of power equipment maintenance, in particular to a grading ring detection and identification method based on deep learning.
Background
The grading ring, also called the equipotential connecting ring, is an important component of the power transmission line: it eliminates the potential difference between annular parts and thereby equalizes the voltage. Because transmission lines endure wind, rain and sunshine for long periods, and undergo mechanical fatigue, a grading ring can develop defects such as tilting and corrosion that prevent it from performing normally, which in turn endangers the transmission line. The main function of the grading ring is to distribute the voltage uniformly over the whole insulator or insulator string; once a fault occurs, the phase voltage is no longer uniformly distributed, which accelerates aging of the insulator. Periodic maintenance and repair of grading rings is therefore required. Traditional manual detection and identification is time-consuming and costly in both manpower and money. With the development of automation, intelligence and high-speed technology, how to rapidly and accurately detect and identify the grading rings of a power transmission line has become a popular topic in the field of digital image processing.
At present, methods for identifying grading rings fall into two main categories: template-based detection and identification algorithms, and machine-learning-based methods. The former mainly performs template matching on the original image with common algorithms; the latter identifies the grading ring by learning its features. Some research on grading ring identification has been carried out at home and abroad. Wang Shen used a BP neural network and the Adaboost machine learning algorithm, respectively, to identify grading rings, and demonstrated the feasibility of identifying and locating multiple grading rings in images acquired under high-speed conditions. Zhang Nan et al. located grading rings by means of template matching and light-reflection-point characteristics. However, these methods for identifying grading ring faults on power transmission lines have certain shortcomings. First, their accuracy is low and their range of application narrow. Second, grading ring images in power systems are mostly acquired manually or by aerial photography, shot under specific illumination at a required angle and focal length, while real backgrounds vary widely and are complex; the methods therefore do not generalize and cannot be applied well in a practical system.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a grading ring detection and identification method based on deep learning that can meet grading ring detection and identification requirements against complex backgrounds and at non-specific angles.
In order to achieve the above purposes, the technical scheme adopted by the invention is as follows:
a grading ring detection and identification method based on deep learning comprises the following steps:
s1, preprocessing the acquired original grading ring image serving as an image source;
s2, extracting multi-level characteristics of the preprocessed grading ring image samples by using a convolutional neural network to form a grading ring training image;
s3, combining the obtained grading ring training images to form a sample set, inputting the sample set into the detection model to be trained, adjusting the weights and bias parameters of the convolutional neural network by alternately applying the forward propagation and back propagation algorithms, and finally determining the optimized model parameters;
and S4, initializing the detection network according to the optimized model parameters, collecting picture data of the power transmission line in batches, and automatically identifying and positioning the grading rings.
On the basis of the above technical solution, in step S1, preprocessing is performed according to the problems present in the captured image; these problems include shaking and blurring, and the preprocessing accordingly includes anti-shake processing and denoising. The original grading ring image is denoised using bilateral filtering or median filtering.
On the basis of the above technical solution, before step S2 the method further includes: on the basis of the preprocessed image, generating multiple similar grading ring images by applying multiple rotations, scale perturbation and/or color space transformation to the preprocessed grading ring image.
On the basis of the technical scheme, the specific method for color space transformation comprises the following steps:
carrying out Principal Component Analysis (PCA) transformation on the RGB images of the sample set to obtain the principal component variables and their corresponding eigenvalues;
assigning different coefficients to the eigenvalues to transform the illumination intensity and saturation of the image; the principal component variables and corresponding eigenvalues in the above steps are combined by the following formula:
[I_R', I_G', I_B']^T = [I_R, I_G, I_B]^T + [p_1, p_2, p_3] · [α_1λ_1, α_2λ_2, α_3λ_3]^T
where p_i (i = 1, 2, 3) is the eigenvector corresponding to the RGB channels of the image, λ_i (i = 1, 2, 3) is the eigenvalue corresponding to that eigenvector, and α_i (i = 1, 2, 3) is the perturbation coefficient for each eigenvalue, drawn from a Gaussian distribution with mean 1 and standard deviation 0.1.
On the basis of the above technical solution, the step S2 specifically includes:
generating a large number of candidate regions in the preprocessed grading ring image sample by a visual method; performing feature extraction on each candidate region with the convolutional neural network to form a high-dimensional feature vector; feeding the obtained high-dimensional feature vector into a linear classifier, calculating the probability that it belongs to each class, and determining from these probabilities which object it contains; and calculating the position and size of the target bounding box by fine regression.
Based on the above technical solution, in step S3, vector pairs composed of an input vector and an ideal output vector are input into the detection model to be trained.
On the basis of the above technical solution, the step S3 includes:
adding a new convolutional layer after the feature map of the last convolutional layer of the pretrained image network; performing a convolution operation on this layer to obtain a multi-dimensional feature vector for each position, and predicting from that feature vector the probability that each position belongs to a target; and concatenating all features across the channels of the multi-dimensional feature vector into a high-dimensional feature vector used as the input vector.
On the basis of the above technical solution, the step S4 specifically includes:
performing a convolution operation on the input image to obtain a feature map; generating a number of candidate region boxes on the feature map with the region proposal network; scoring and screening the contents of the candidate region boxes with a non-maximum suppression algorithm, retaining a preset number of the highest-scoring boxes; and taking the features inside each retained candidate box on the feature map to form a high-dimensional feature vector, from which the detection network computes a category score and predicts a refined target bounding box position.
On the basis of the above technical solution, in step S4, whether a candidate region box is a target region is determined by the classification function, and the target box is obtained by the box regression function.
On the basis of the above technical solution, before step S3, all weights in the weight matrix are initialized with different small random numbers.
Compared with the prior art, the invention has the advantages that:
(1) The grading ring detection and identification method based on deep learning trains and learns autonomously with a deep convolutional neural network, expands the data set through image transformations, and adjusts the convolution kernel size and reference matrix values of the CNN model. It thus achieves automatic grading ring identification and detection on ordinary inspection photos with complex backgrounds, can greatly reduce inspection personnel costs in practical applications, improves working efficiency, and enables effective evaluation of the safety state of a power system.
(2) The grading ring detection and identification method based on deep learning extracts feature blocks from the image to be identified with the convolutional neural network and performs autonomous feature extraction and learning, greatly reducing the time cost of the learning process and solving the prior-art problems that large numbers of professionals must manually label feature pictures and that manual labeling is too expensive.
Drawings
FIG. 1 is a flowchart of a grading ring identification method based on deep learning according to an embodiment of the present invention;
FIG. 2 is a sample diagram collected by the grading ring identification method based on deep learning in the embodiment of the invention;
FIG. 3 is a diagram of the deep convolutional network structure of the grading ring identification method based on deep learning in an embodiment of the present invention;
FIG. 4 is a sigmoid function of the grading ring identification method based on deep learning in the embodiment of the present invention;
fig. 5 is a diagram of grading ring detection and identification effects of the grading ring identification method based on deep learning in an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
The convolutional neural network used in the present invention is a multi-layer perceptron specifically designed to recognize two-dimensional shapes; the network structure is highly invariant to translation, scaling, tilting and other forms of deformation.
Each layer is composed of multiple two-dimensional planes, also called feature maps, and each feature map is composed of multiple individual neurons. Fig. 3 provides an example of the convolutional network used in the present invention, which operates as follows:
The input layer receives the original image, which is then alternately convolved and downsampled; the deeper the layer, the more global the features it expresses. In detail:
The first hidden layer performs convolution and consists of 8 feature maps, each composed of 28 × 28 neurons, with each neuron assigned a 5 × 5 receptive field;
The second hidden layer implements local subsampling and local averaging. It again consists of 8 feature maps, but each map is composed of 14 × 14 neurons. By the principle of local correlation in image space, subsampling removes irrelevant information while retaining the important feature information. Each neuron has a 2 × 2 receptive field, a trainable coefficient, a trainable bias, and the sigmoid activation function shown in Fig. 4.
The third hidden layer performs a second convolution and consists of 20 feature maps, each composed of 10 × 10 neurons. Each neuron in this layer may have synaptic connections to several feature maps of the previous layer; it operates in a similar manner to the first convolutional layer.
The fourth hidden layer performs a second subsampling and local averaging. It consists of 20 feature maps, but each map is composed of 5 × 5 neurons; it operates in a similar manner to the first subsampling layer.
The fifth hidden layer implements the final stage of convolution and consists of 120 neurons, each assigned a 5 × 5 receptive field.
Finally, a fully connected layer produces the output vector.
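As a rough check of the layer sizes listed above, the "valid" convolution and subsampling arithmetic can be sketched in a few lines (an illustration only; it assumes a 32 × 32 input image, which the 28 × 28 first hidden layer implies but the text does not state):

```python
def conv_out(size, kernel, stride=1):
    """Output size of a 'valid' (unpadded) convolution along one axis."""
    return (size - kernel) // stride + 1

def pool_out(size, kernel):
    """Output size of non-overlapping subsampling along one axis."""
    return size // kernel

# Layer-by-layer sizes for the network described above, 32x32 input assumed.
s = 32
s = conv_out(s, 5)   # 1st hidden layer: 8 maps of 28x28 (5x5 receptive fields)
assert s == 28
s = pool_out(s, 2)   # 2nd hidden layer: 8 maps of 14x14 (2x2 subsampling)
assert s == 14
s = conv_out(s, 5)   # 3rd hidden layer: 20 maps of 10x10
assert s == 10
s = pool_out(s, 2)   # 4th hidden layer: 20 maps of 5x5
assert s == 5
s = conv_out(s, 5)   # 5th hidden layer: 120 neurons, 5x5 receptive fields
assert s == 1
```

Each assertion matches the feature-map size quoted in the corresponding paragraph above, which is a quick way to confirm the architecture is internally consistent.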
Referring to fig. 1, an embodiment of the present invention provides a grading ring identification method based on deep learning, including the following steps:
s1, grading ring image sample preprocessing: taking the acquired original grading ring image as the image source, detecting whether the captured image suffers from problems such as shaking and blurring, and applying denoising, anti-shake and similar processing;
s2, feature extraction: extracting multi-level characteristics of the preprocessed grading ring image samples by using a convolutional neural network to form a grading ring training image;
s3, training a detection model: combining the obtained grading ring training images to form a sample set, inputting the sample set into the detection model to be trained, adjusting the weights and bias parameters of the convolutional neural network by alternately applying the forward propagation and back propagation algorithms, and finally determining the optimized model parameters;
s4, equalizing ring component detection: and initializing a detection network according to the model parameters obtained by the identification training, collecting the power transmission line picture data in batches, and automatically identifying and positioning the grading rings.
The following is a detailed description of each step:
S1, grading ring image sample preprocessing. Because the grading ring image is affected during acquisition by factors such as shooting conditions, surface oil stains, CCD noise and human factors, the acquired image suffers noise interference. This embodiment therefore first denoises the original image, which raises the signal-to-noise ratio, effectively enhances the grading ring features, suppresses part of the background noise, and increases the contrast between the grading ring and the background. Conventional denoising algorithms may blur the edges of the grading ring. A bilateral filter consists of two functions: one determines filter coefficients from geometric spatial distance, and the other determines them from pixel differences, as in the prior art. The bilateral filter therefore denoises well while preserving edges. Of course, besides bilateral filtering, other filtering methods such as median filtering may also be used. The preprocessed grading ring image is then taken as the original image for the subsequent steps.
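As an illustration of the denoising step, a minimal median filter can be sketched in plain NumPy (the function name is ours, and this naive loop is illustrative only; a real pipeline would use an optimized library routine or the edge-preserving bilateral filter preferred above):

```python
import numpy as np

def median_filter(img, k=3):
    """Naive k x k median filter for a 2-D grayscale image, edge-replicated."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            # Replace each pixel with the median of its k x k neighbourhood.
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A single bright impulse ("salt" noise, as from CCD defects) is removed.
noisy = np.zeros((5, 5))
noisy[2, 2] = 255.0
clean = median_filter(noisy, k=3)
print(clean[2, 2])  # 0.0 - the impulse is suppressed
```

Impulse noise is where the median filter shines; for the mixed noise described in the text, the bilateral filter's combination of spatial and intensity weighting is the better fit, at higher computational cost.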
Preferably, the obtained image data is augmented after being preprocessed.
Overfitting frequently occurs during training: the trained model fits the training samples well but predicts poorly on data outside the samples, i.e. it generalizes badly. The most common way to reduce overfitting in convolutional neural networks is data expansion: the more numerous and varied the samples, the higher the recognition accuracy and the less the overfitting. Therefore, to enhance the robustness of the detection and identification method, before sample collection the preprocessed images are expanded, for example by changing image contrast and artificially adding simulated noise, so that sample diversity increases and the grading ring can still be accurately identified under different shooting angles, shooting weather, shooting scales and other background conditions. The methods adopted are geometric transformation, color space perturbation, and scale perturbation.
(1) Geometric transformation: the image is rotated within a predetermined range, and its size and displacement are changed, to generate new images under different geometric transformation conditions. The original image is rotated in 8 different directions, and each rotated image is horizontally flipped, so that the resulting data set is 16 times the original size. Since the drone photographs grading rings from many angles, the geometrically expanded data both trains a high-accuracy grading ring detection model and helps avoid overfitting.
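A sketch of this rotation-and-flip expansion, under the simplifying assumption of 90-degree rotation steps (plain NumPy cannot rotate by arbitrary angles; the 8 directions in the text would come from an image library):

```python
import numpy as np

def augment_geometric(img, n_rot=4):
    """Generate rotated and horizontally flipped copies of an image.

    The method described above uses 8 rotation directions and flips each
    rotation, for a 16x data set; this sketch uses the 4 rotations that
    np.rot90 supports, yielding 8 variants per image.
    """
    variants = []
    for k in range(n_rot):
        rotated = np.rot90(img, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))  # horizontal flip of each rotation
    return variants

img = np.arange(16).reshape(4, 4)
aug = augment_geometric(img)
print(len(aug))  # 8 variants: 4 rotations x {identity, horizontal flip}
```

The first variant (k = 0, no flip) is the original image itself, so the expanded set always contains the unmodified sample.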
(2) Color space perturbation: the RGB color intensity is changed by computing on the RGB color channels of the image. The most common method is to apply Principal Component Analysis (PCA) to the RGB images of the sample set, obtain the principal component variables and their corresponding eigenvalues, and assign different coefficients to the eigenvalues to perturb the illumination intensity and saturation of the images. The specific method is shown in the following formula:
[I_R', I_G', I_B']^T = [I_R, I_G, I_B]^T + [p_1, p_2, p_3] · [α_1λ_1, α_2λ_2, α_3λ_3]^T
where p_i (i = 1, 2, 3) is the eigenvector corresponding to the RGB channels of the image, λ_i (i = 1, 2, 3) is the eigenvalue corresponding to that eigenvector, and α_i (i = 1, 2, 3) is the perturbation coefficient for each eigenvalue, drawn from a Gaussian distribution with mean 1 and standard deviation 0.1.
In addition, image contrast transformation can be realized by linearly transforming the pixels of the three color channels within plus or minus one or two standard deviations around the average pixel intensity.
Perturbing the color space in these several ways varies the illumination intensity, color saturation and contrast of the images, greatly increases the richness of the sample data set, reduces the overfitting caused by differing shooting illumination and grading ring colors, and effectively improves the identification accuracy of the model.
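The PCA color perturbation described above can be sketched as follows (a hedged illustration: the function name is ours, and it follows the text's Gaussian with mean 1 and standard deviation 0.1, although the more common formulation of this augmentation uses mean 0):

```python
import numpy as np

def pca_color_perturb(img, sigma=0.1, rng=None):
    """PCA-based color augmentation on an H x W x 3 image in [0, 255].

    Eigenvectors p_i and eigenvalues lambda_i come from the 3x3 covariance
    of the RGB pixels; every pixel is shifted by sum_i alpha_i*lambda_i*p_i,
    matching the formula in the text.
    """
    rng = np.random.default_rng(rng)
    pixels = img.reshape(-1, 3)
    cov = np.cov(pixels, rowvar=False)        # 3x3 covariance of RGB channels
    eigvals, eigvecs = np.linalg.eigh(cov)    # lambda_i, p_i (as columns)
    alpha = rng.normal(loc=1.0, scale=sigma, size=3)  # mean 1, std 0.1 per text
    shift = eigvecs @ (alpha * eigvals)       # one offset per color channel
    return np.clip(pixels + shift, 0, 255).reshape(img.shape)

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(8, 8, 3))
out = pca_color_perturb(img, rng=1)
print(out.shape)  # (8, 8, 3)
```

Because the shift is the same for every pixel, the perturbation acts like a global lighting and saturation change, which is exactly the variation the text aims to simulate.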
(3) Scale perturbation
Scale perturbation increases the diversity of target object shapes and sizes in the training sample set by perturbing the size and shape of the target object. Define the input image size as H × W, and let S be the shortest side after scaling. The crop size is fixed at 224 × 224, and in principle the scaling parameter S can take any value not less than 224. If S is 224, the crop spans the short side of the image; if S is much larger than 224, the cropped patch corresponds to a part of the object or contains an entire small object. The scaling parameter S is set to achieve the scale perturbation and expand the sample set. Two strategies are mainly adopted. The first is single-scale data expansion: the scaling parameter is fixed at S = 256, i.e. the short side of the image is scaled to 256, and a 224 × 224 patch is then randomly cropped from it and used as the input to the convolutional neural network. The second is multi-scale data expansion: a range is set for the scaling parameter S, and values of S of different sizes are randomly selected within that range to scale the sample image. Combining single-scale and multi-scale perturbation to expand the original sample set satisfies the need to capture grading ring features of different sizes and shapes from the drone, enriches the shapes and sizes in the grading ring sample set, and improves the detection and identification accuracy of the model.
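The scale-then-crop procedure can be sketched as follows (nearest-neighbour resize in plain NumPy for self-containment; an actual pipeline would use proper interpolation from an image library, and the [256, 480] range in the usage line is an illustrative choice, not stated in the text):

```python
import numpy as np

def scale_jitter_crop(img, s, crop=224, rng=None):
    """Resize the shorter side of img to s, then take a random crop x crop patch.

    Implements the single-/multi-scale strategy described above: S >= 224,
    crop fixed at 224 x 224. Nearest-neighbour resize only.
    """
    rng = np.random.default_rng(rng)
    h, w = img.shape[:2]
    scale = s / min(h, w)                 # shorter side becomes exactly s
    nh, nw = round(h * scale), round(w * scale)
    rows = np.minimum((np.arange(nh) / scale).astype(int), h - 1)
    cols = np.minimum((np.arange(nw) / scale).astype(int), w - 1)
    resized = img[rows][:, cols]          # nearest-neighbour resampling
    top = rng.integers(0, nh - crop + 1)  # random crop position
    left = rng.integers(0, nw - crop + 1)
    return resized[top:top + crop, left:left + crop]

# Multi-scale expansion: draw S at random from a range, e.g. [256, 480].
rng = np.random.default_rng(0)
img = np.zeros((300, 400), dtype=np.uint8)
s = int(rng.integers(256, 481))
patch = scale_jitter_crop(img, s, rng=0)
print(patch.shape)  # (224, 224)
```

Single-scale expansion is just the special case s = 256 with this same function.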
S2, feature extraction: extracting multi-level image features from the preprocessed grading ring image samples with a convolutional neural network to form the grading ring training images. Specifically, a visual method can be used to generate a large number of candidate regions in the preprocessed grading ring image sample; feature extraction is performed on each candidate region with the convolutional neural network to form a high-dimensional feature vector; the high-dimensional feature vector is fed into a linear classifier, which computes the probability of each class and determines from these probabilities which object the region contains; and the position and size of the target bounding box are calculated by fine regression.
Considering that grading rings come in multiple types, such as double-string grading rings, grading shield rings, single-string grading rings and composite insulator grading rings, the type of sample to be collected must be determined before identification, and grading ring samples are collected for each type. In this way every grading ring category has enough samples covering sufficiently rich scenes. The richer the samples, the better the grading ring detection and identification; the acquisition results are shown in figs. 2-3.
S3, training a detection model: combining the obtained grading ring training images to form a sample set, inputting the sample set into the detection model to be trained, adjusting the weights and bias parameters of the convolutional neural network by alternately applying the forward propagation and back propagation algorithms, and finally determining the optimized model parameters.
Specifically, a sample set can be formed by combining the grading ring training images obtained in step S2, and the pre-training sample set initialized; a new convolutional layer is added after the feature map of the last convolutional layer of the pretrained image network; a convolution operation on this layer yields a multi-dimensional feature vector for each position, from which the probability that each position belongs to a target is predicted; all features across the channels of the multi-dimensional feature vector are concatenated into a high-dimensional feature vector used as the input vector; and the input vector is paired with a pre-labeled ideal output vector to form a vector pair.
The weights and bias parameters of the convolutional neural network are then continuously adjusted with these vector pairs by alternately applying the forward propagation and back propagation algorithms.
The convolutional network is trained with supervision, so its sample set is made up of vector pairs of the form (input vector, ideal output vector). All these vector pairs should be actual operating results of the system that the network is to simulate. The training process divides into two stages, as follows:
A. forward propagation
1) Taking a sample (X, Y) from the sample set, and inputting X into the network;
2) The corresponding actual output O is computed through repeated convolution, subsampling, activation and full-connection operations; at this stage information is transformed step by step from the input layer to the output layer. This is also the process the network performs in normal operation after training is completed.
B. Backward propagation
1) Firstly, calculating the difference between an actual output result O and a corresponding ideal output Y;
2) Based on this difference, the weight matrix is adjusted by back-propagation in a manner that minimizes the error.
By learning from the collected samples, the weight parameters of the convolutional neural network are continuously adjusted, and the final neural network model is obtained.
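The two alternating phases can be illustrated on a deliberately tiny stand-in for the CNN: a single sigmoid neuron trained on one sample (X, Y). This is our own minimal sketch, not the invention's network, but it runs exactly the loop described above: forward-propagate, measure O minus Y, back-propagate to adjust the weights.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([0.5, -0.2, 0.1])   # sample input vector
Y = 1.0                          # ideal output
w = rng.normal(0, 0.1, size=3)   # small random initial weights, per the text
b = 0.0
lr = 1.0

errors = []
for step in range(200):
    O = sigmoid(w @ X + b)       # A. forward propagation: input -> output
    err = O - Y                  # B.1 difference between actual and ideal output
    grad = err * O * (1.0 - O)   # B.2 gradient through the sigmoid
    w -= lr * grad * X           # adjust weights in the error-minimizing direction
    b -= lr * grad
    errors.append(abs(err))

print(errors[0] > errors[-1])  # True - the error shrinks as training proceeds
```

The full model repeats the same two phases layer by layer; only the gradient bookkeeping (through convolutions and subsampling) is more involved.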
S4, grading ring component detection: the detection network is initialized with the model parameters obtained from training, power transmission line picture data are collected in batches, and the grading rings are automatically identified and located. Finally, using the determined neural network model with the image under test as the input variable, the position of the grading ring in the image is determined through the successive mappings of the model, yielding the detection and identification result.
Specifically, step S4 includes the following: a convolution operation is performed on the input image to obtain a feature map; a number of candidate region boxes are generated on the feature map with the region proposal network; the contents of the candidate boxes are scored and screened by a non-maximum suppression algorithm, the classification function judges whether each candidate box is a target region, and a preset number of the highest-scoring boxes are retained; and the features inside each retained box on the feature map form a high-dimensional feature vector, from which the detection network computes a category score, obtains the target box through the box regression function, and predicts a refined target bounding box position.
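The score-and-screen step can be illustrated with a minimal greedy non-maximum suppression routine (a generic sketch in our own code; the IoU threshold and the "preset number" keep_top are placeholder values, not parameters stated in the text):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5, keep_top=2):
    """Greedy non-maximum suppression over candidate region boxes.

    boxes: N x 4 array of [x1, y1, x2, y2]; scores: N confidences from the
    region proposal network. Keeps the highest-scoring boxes, discarding any
    box whose IoU with an already-kept box exceeds iou_thresh, and returns
    at most keep_top survivors.
    """
    order = np.argsort(scores)[::-1]          # highest score first
    keep = []
    while order.size and len(keep) < keep_top:
        i = order[0]
        keep.append(int(i))
        # Intersection of box i with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]  # drop heavily overlapping boxes
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2] - box 1 overlaps box 0 and is suppressed
```

The surviving indices are the candidate boxes whose features would then be passed to the detection network for class scoring and box regression.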
The present invention is not limited to the above-described embodiments; it will be apparent to those skilled in the art that various modifications and improvements can be made without departing from the principle of the present invention, and such modifications and improvements are also considered within the scope of the present invention. Matters not described in detail in this specification are within the skill of the art.

Claims (7)

1. A grading ring detection and identification method based on deep learning is characterized by comprising the following steps:
s1, preprocessing the acquired original grading ring image serving as an image source;
s2, extracting multi-level characteristics of the preprocessed grading ring image samples by using a convolutional neural network to form a grading ring training image;
s3, combining the obtained grading ring training images to form a sample set, inputting the sample set into the detection model to be trained, adjusting the weights and bias parameters of the convolutional neural network by alternately applying the forward propagation and back propagation algorithms, and finally determining the optimized model parameters;
s4, initializing a detection network according to the optimized model parameters, collecting picture data of the power transmission line in batches, and automatically identifying and positioning the grading rings;
before the step S2, the method further includes: on the basis of preprocessing the image, generating a plurality of similar equalizing ring images by carrying out geometric change, scale disturbance and/or color space transformation on the preprocessed equalizing ring image;
the specific method of the geometric change is as follows: rotating the image within a specified range, changing the size and displacement of the image to generate new images under different geometric transformation conditions;
the scale disturbance is to increase the diversity of the shape and the size of the target object in the training sample set by carrying out transformation interference on the size and the shape of the target object; the specific method is that single-scale disturbance and multi-scale disturbance are combined;
in step S3, a vector pair consisting of an input vector and an ideal output vector is input into the detection model to be trained;
the step S3 includes:
adding a new convolutional layer after training the feature diagram of the image network to the next convolutional layer; performing convolution operation on the convolution layer to obtain a multi-dimensional feature vector corresponding to each position, and predicting the probability of each position belonging to a target through the feature vector; all the features in the multi-dimensional feature vector channel are connected in series to form a high-dimensional feature vector as an input vector.
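As an illustration of the augmentation in claim 1 (geometric transformation plus single- and multi-scale perturbation), the sketch below samples one set of perturbation parameters per training image; all numeric ranges are assumptions, since the patent does not specify values:

```python
import random

def sample_augmentation(rotation_range=(-15.0, 15.0),
                        scale_choices=(0.75, 1.0, 1.25),
                        shift_range=(-20, 20)):
    """Sample one geometric/scale perturbation for a training image.
    Ranges are illustrative only; the claims give no concrete numbers."""
    return {
        "angle_deg": random.uniform(*rotation_range),   # rotate within a specified range
        "scale": random.choice(scale_choices),          # multi-scale perturbation
        "jitter": random.uniform(0.9, 1.1),             # small single-scale disturbance
        "shift_xy": (random.randint(*shift_range),      # displacement of the image
                     random.randint(*shift_range)),
    }
```

Applying the sampled parameters with any affine-warp routine then yields the "new images under different geometric transformation conditions" of the claim.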
2. The deep learning-based grading ring detection and identification method according to claim 1, wherein: in step S1, preprocessing is performed according to the problems present in the captured images; these problems include camera shake and blur, and the preprocessing accordingly includes de-shaking and denoising; the original grading ring image is denoised with bilateral filtering or median filtering.
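As an illustration of the denoising in claim 2, the following is a plain-NumPy median filter (a stand-in for library routines such as OpenCV's `medianBlur`; the 3x3 window and edge padding are assumptions):

```python
import numpy as np

def median_filter(img, k=3):
    """Median-filter a 2-D grayscale image with a k x k window (edge-padded).
    Removes impulse (salt-and-pepper) noise while preserving edges."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out
```

A single saturated noise pixel is replaced by the median of its neighborhood, which is why median filtering suits the impulse noise typical of inspection imagery.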
3. The deep learning-based grading ring detection and identification method according to claim 1, wherein the specific method of color space transformation comprises:
carrying out principal component analysis (PCA) transformation on the RGB images of the sample set to obtain the principal component variables and their corresponding eigenvalues;
assigning different coefficients to the eigenvalues to transform the illumination intensity and saturation of the image, the perturbation added to each RGB pixel being computed as
[p1, p2, p3] [α1λ1, α2λ2, α3λ3]^T
wherein pi (i = 1, 2, 3) is the eigenvector corresponding to the RGB channels of the image, λi (i = 1, 2, 3) is the eigenvalue corresponding to that eigenvector, and αi (i = 1, 2, 3) is the perturbation coefficient for each eigenvalue, drawn from a Gaussian distribution with mean 1 and standard deviation 0.1.
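As an illustration of the color space transformation in claim 3, the sketch below computes the RGB eigen-decomposition and applies the perturbation term above; following the claim, the coefficients are drawn with mean 1 and standard deviation 0.1 (the related technique in the literature commonly uses mean 0):

```python
import numpy as np

def pca_color_jitter(img, rng=None):
    """Add [p1 p2 p3] . [a1*l1, a2*l2, a3*l3]^T to every pixel of an RGB image,
    with a_i ~ N(1, 0.1) as stated in the claim.
    img: float array of shape (H, W, 3)."""
    rng = rng if rng is not None else np.random.default_rng()
    flat = img.reshape(-1, 3)
    cov = np.cov(flat, rowvar=False)             # 3x3 covariance of the RGB channels
    eigvals, eigvecs = np.linalg.eigh(cov)       # lambda_i and p_i of the claim
    alpha = rng.normal(1.0, 0.1, size=3)         # perturbation coefficients a_i
    delta = eigvecs @ (alpha * eigvals)          # per-channel shift for every pixel
    return img + delta
```

Because the shift follows the principal directions of the color distribution, it perturbs illumination and saturation without destroying object shape.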
4. The deep learning-based grading ring detection and identification method according to claim 1, wherein the step S2 specifically comprises:
generating a large number of candidate regions in the preprocessed grading ring image sample with a visual method; extracting features from each candidate region with a convolutional neural network to form a high-dimensional feature vector; feeding the obtained high-dimensional feature vector into a linear classifier, computing the probability of each class, and judging which object the region contains; and calculating the position and size of the target bounding box by fine regression.
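As an illustration of the linear-classifier stage in claim 4, the sketch below scores a batch of region feature vectors and converts the scores to per-class probabilities with a softmax; the shapes of `W` and `b` are assumptions:

```python
import numpy as np

def classify_regions(features, W, b):
    """Score each candidate region's feature vector with a linear classifier.
    features: (N, D) high-dimensional feature vectors; W: (D, C); b: (C,).
    Returns (N, C) per-class probabilities."""
    logits = features @ W + b
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)
```

The class with the highest probability is taken as the object contained in the region; a separate regressor then refines the box, as the claim's final step describes.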
5. The deep learning-based grading ring detection and identification method according to claim 1, wherein the step S4 specifically comprises:
performing a convolution operation on the input image to obtain a feature map;
generating a plurality of candidate region boxes on the feature map with the region proposal network;
scoring and screening the contents of the candidate region boxes with a non-maximum suppression algorithm, and retaining a preset number of the highest-scoring candidate region boxes;
and extracting the features within each retained candidate region box from the feature map to form a high-dimensional feature vector, computing class scores with the detection network, and predicting a more accurate position for the target bounding box.
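As an illustration of the final box-prediction step in claim 5, the sketch below applies predicted regression offsets to candidate boxes using the common `(dx, dy, dw, dh)` parameterization; the patent does not name this exact parameterization, so it is an assumption:

```python
import numpy as np

def decode_boxes(anchors, deltas):
    """Refine candidate boxes with predicted regression offsets.
    anchors: (N, 4) boxes as [x1, y1, x2, y2]; deltas: (N, 4) as (dx, dy, dw, dh)."""
    w = anchors[:, 2] - anchors[:, 0]
    h = anchors[:, 3] - anchors[:, 1]
    cx = anchors[:, 0] + 0.5 * w
    cy = anchors[:, 1] + 0.5 * h
    ncx = cx + deltas[:, 0] * w                  # shift centre by dx * width
    ncy = cy + deltas[:, 1] * h                  # shift centre by dy * height
    nw = w * np.exp(deltas[:, 2])                # rescale width by exp(dw)
    nh = h * np.exp(deltas[:, 3])                # rescale height by exp(dh)
    return np.stack([ncx - 0.5 * nw, ncy - 0.5 * nh,
                     ncx + 0.5 * nw, ncy + 0.5 * nh], axis=1)
```

Zero offsets leave a box unchanged; nonzero offsets move and rescale it toward the "more accurate position" of the claim.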
6. The deep learning-based grading ring detection and identification method according to claim 5, wherein: in step S4, whether each candidate region box is a target region is judged by the classification function, and the target box is obtained through the bounding-box regression function.
7. The deep learning-based grading ring detection and identification method according to claim 1, wherein: before step S3, all weight values in the weight matrices are initialized with distinct small random numbers.
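As an illustration of the initialization in claim 7, the sketch below draws a weight matrix of distinct small random numbers; the Gaussian distribution and 0.01 scale are assumptions, since the claim specifies neither:

```python
import numpy as np

def init_weights(shape, scale=0.01, rng=None):
    """Initialize a weight matrix with distinct small random numbers,
    as required before training begins in step S3."""
    rng = rng if rng is not None else np.random.default_rng()
    return rng.normal(0.0, scale, size=shape)
```

Small distinct values break the symmetry between neurons while keeping early activations in the responsive range of the nonlinearity.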
CN201810582294.1A 2018-06-07 2018-06-07 Grading ring detection and identification method based on deep learning Active CN109034184B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810582294.1A CN109034184B (en) 2018-06-07 2018-06-07 Grading ring detection and identification method based on deep learning


Publications (2)

Publication Number Publication Date
CN109034184A CN109034184A (en) 2018-12-18
CN109034184B true CN109034184B (en) 2022-03-11

Family

ID=64612208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810582294.1A Active CN109034184B (en) 2018-06-07 2018-06-07 Grading ring detection and identification method based on deep learning

Country Status (1)

Country Link
CN (1) CN109034184B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109614947A (en) * 2018-12-19 2019-04-12 深圳供电局有限公司 Power components identification model training method, device and computer equipment
CN109829907A (en) * 2019-01-31 2019-05-31 浙江工业大学 A kind of metal shaft surface defect recognition method based on deep learning
CN110321864A (en) * 2019-07-09 2019-10-11 西北工业大学 Remote sensing images explanatory note generation method based on multiple dimensioned cutting mechanism
CN110738179A (en) * 2019-10-18 2020-01-31 国家电网有限公司 electric power equipment identification method and related device
CN110991458B (en) * 2019-11-25 2023-05-23 创新奇智(北京)科技有限公司 Image feature-based artificial intelligent recognition result sampling system and sampling method
CN111598889B (en) * 2020-05-26 2023-08-08 南方电网数字电网科技(广东)有限公司 Identification method and device for inclination fault of equalizing ring and computer equipment
CN112651337A (en) * 2020-12-25 2021-04-13 国网黑龙江省电力有限公司电力科学研究院 Sample set construction method applied to training line foreign object target detection model
CN112884715A (en) * 2021-01-28 2021-06-01 华南理工大学 Composite insulator grading ring inclination fault detection method based on deep learning

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104517122A (en) * 2014-12-12 2015-04-15 浙江大学 Image target recognition method based on optimized convolution architecture

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180088671A1 (en) * 2016-09-27 2018-03-29 National Kaohsiung University Of Applied Sciences 3D Hand Gesture Image Recognition Method and System Thereof


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"ImageNet Classification with Deep Convolutional Neural Networks";Alex Krizhevsky .etc;《Advances in Neural Information Processing System 25,(NIPS 2012)》;20121231;全文 *
"深度学习在输电线路中部件识别与缺陷检测的研究";汤踊 等;《电子测量技术》;20180331;第41卷(第6期);第1-3章 *


Similar Documents

Publication Publication Date Title
CN109034184B (en) Grading ring detection and identification method based on deep learning
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
CN106875373B (en) Mobile phone screen MURA defect detection method based on convolutional neural network pruning algorithm
CN113065558A (en) Lightweight small target detection method combined with attention mechanism
CN109903331B (en) Convolutional neural network target detection method based on RGB-D camera
CN112150493B (en) Semantic guidance-based screen area detection method in natural scene
CN106897673B (en) Retinex algorithm and convolutional neural network-based pedestrian re-identification method
CN111784633B (en) Insulator defect automatic detection algorithm for electric power inspection video
CN107145846A (en) A kind of insulator recognition methods based on deep learning
CN107133943A (en) A kind of visible detection method of stockbridge damper defects detection
CN112668648B (en) Infrared and visible light fusion recognition method based on symmetrical fusion network
CN110796009A (en) Method and system for detecting marine vessel based on multi-scale convolution neural network model
CN111046880A (en) Infrared target image segmentation method and system, electronic device and storage medium
CN112733950A (en) Power equipment fault diagnosis method based on combination of image fusion and target detection
CN114972213A (en) Two-stage mainboard image defect detection and positioning method based on machine vision
CN113870263B (en) Real-time monitoring method and system for pavement defect damage
CN114782298B (en) Infrared and visible light image fusion method with regional attention
CN113538457B (en) Video semantic segmentation method utilizing multi-frequency dynamic hole convolution
CN111582074A (en) Monitoring video leaf occlusion detection method based on scene depth information perception
CN113569981A (en) Power inspection bird nest detection method based on single-stage target detection network
CN114429457A (en) Intelligent fan blade defect detection method based on bimodal fusion
CN112419163B (en) Single image weak supervision defogging method based on priori knowledge and deep learning
CN116704273A (en) Self-adaptive infrared and visible light dual-mode fusion detection method
CN112149526A (en) Lane line detection method and system based on long-distance information fusion
CN110688976A (en) Store comparison method based on image identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant