CN113343918A - Power equipment identification method, system, medium and electronic equipment

Power equipment identification method, system, medium and electronic equipment

Info

Publication number
CN113343918A
CN113343918A (application CN202110743981.9A)
Authority
CN
China
Prior art keywords
model
yolo
convolution
identification method
equipment identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110743981.9A
Other languages
Chinese (zh)
Inventor
孙运涛
李明
赵斌超
井雨刚
李钦柱
许志元
李源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202110743981.9A priority Critical patent/CN113343918A/en
Publication of CN113343918A publication Critical patent/CN113343918A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a power equipment identification method, system, medium, and electronic device. An image to be identified is acquired, and a positioning and identification result for the electric power external insulation equipment is obtained from the acquired image and a preset convolutional neural network model. The preset convolutional neural network model adopts YOLO-V3, with the standard convolution structures in the Darknet-53 base network replaced by depthwise separable convolution structures and the fully connected layer and Softmax layer of Darknet-53 removed. The method and device enhance the model's detection of small targets while maintaining high real-time performance, and can better achieve real-time, efficient target detection on embedded terminal devices.

Description

Power equipment identification method, system, medium and electronic equipment
Technical Field
The present disclosure relates to the field of power device identification technologies, and in particular, to a power device identification method, system, medium, and electronic device.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
In power scenes, power equipment is easily affected by illumination changes, viewing-angle changes, partial occlusion, deformation, blurring and background interference. The equipment itself is structurally complex, comes in many types, and devices of the same type are highly similar, while the complex indoor and outdoor environments in which the equipment is located add further difficulty, so target identification becomes very challenging.
The inventors find that most existing methods rely on deep learning, such as power equipment identification based on convolutional neural network models or on long short-term memory networks. However, most existing deep learning methods have complex structures, large numbers of parameters and high resource consumption, and have difficulty meeting the real-time requirements of power equipment identification.
Disclosure of Invention
In order to overcome the defects of the prior art, the present disclosure provides a method, a system, a medium and an electronic device for identifying an electrical device, so as to realize more accurate identification of the electrical device and improve the real-time performance of detection.
In order to achieve the purpose, the following technical scheme is adopted in the disclosure:
the first aspect of the present disclosure provides a power device identification method.
A power device identification method, comprising the process of:
acquiring an image to be identified;
obtaining a positioning identification result of the electric power external insulation equipment according to the obtained image and a preset convolutional neural network model;
the preset convolutional neural network model adopts a YOLO-V3 model, a standard convolutional structure in a YOLO-V3 model basic network Darknet-53 is replaced by a depth separable convolutional structure, and a full connection layer and a Softmax layer of the Darknet-53 are removed.
Further, in the YOLO-V3 model, cross entropy loss is used as a loss function, and logistic regression is used to perform target confidence calculation and category prediction.
Further, the depth separable convolution structure divides the convolution operation into a depth convolution and a point convolution: the depth convolution applies a different convolution kernel to each input channel, and the point convolution integrates the depth convolution output feature maps.
Further, the YOLO-V3 model comprises a feature fusion submodule and a channel attention submodule, wherein the feature fusion submodule acquires global spatial information of features by combining channel information; the attention module integrates the global information of each channel to generate a nonlinear relationship between the channels.
Further, the upsampling layer of the YOLO-V3 model organically combines the large resolution feature map with the small resolution feature map using two upsamplings.
Furthermore, in the YOLO-V3 model, a K-means clustering method is adopted to train the bounding box.
Further, in the YOLO-V3 model, binary cross-entropy loss is used for class prediction.
A second aspect of the present disclosure provides a power device identification system.
A power device identification system, comprising:
a data acquisition module configured to: acquiring an image to be identified;
a positioning module configured to: obtaining a positioning result of the electric power external insulation equipment according to the obtained image and a preset convolutional neural network model;
the preset convolutional neural network model adopts a YOLO-V3 model, a standard convolutional structure in a YOLO-V3 model basic network Darknet-53 is replaced by a depth separable convolutional structure, and a full connection layer and a Softmax layer of the Darknet-53 are removed.
A third aspect of the present disclosure provides a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the steps in the power device identification method according to the first aspect of the present disclosure.
A fourth aspect of the present disclosure provides an electronic device, including a memory, a processor, and a program stored on the memory and executable on the processor, wherein the processor executes the program to implement the steps in the power device identification method according to the first aspect of the present disclosure.
Compared with the prior art, the beneficial effect of this disclosure is:
1. according to the method, the system, the medium and the electronic equipment, the standard convolution structure in the YOLO-V3 model basic network Darknet-53 is replaced by the depth separable convolution structure, the full connection layer and the Softmax layer of the Darknet-53 are removed, the detection capability of the model on small targets is enhanced while the real-time performance is high, and the real-time and efficient target detection can be better realized on the embedded terminal equipment.
2. According to the method, the system, the medium and the electronic equipment, the depth separable convolution structure divides the convolution operation into a depth convolution and a point convolution: the depth convolution applies a different convolution kernel to each input channel, and the point convolution integrates the depth convolution output feature maps, overcoming the drawback of an ordinary convolution layer, in which every convolution kernel must operate on all channels.
3. According to the method, the system, the medium and the electronic equipment, the YOLO-V3 model comprises a feature fusion submodule and a channel attention submodule, and the feature fusion submodule acquires global space information of features by combining channel information; the attention module integrates the global information of each channel to generate a nonlinear relation between the channels, so that the influence caused by local features is reduced, the features between the channels can be calibrated, and the local feature expression capability of the spatial domain information can be improved.
Advantages of additional aspects of the disclosure will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and are not to limit the disclosure.
Fig. 1 is a schematic flowchart of an electrical equipment identification method provided in embodiment 1 of the present disclosure.
Fig. 2 is a schematic diagram of a bounding box prediction provided in embodiment 1 of the present disclosure.
Fig. 3 is a graph of accuracy variation for different training rates provided in embodiment 1 of the present disclosure.
Fig. 4 is a loss plot for different training rates provided in embodiment 1 of the present disclosure.
Fig. 5 is a graph of measured effects at different training rates provided in embodiment 1 of the present disclosure.
Fig. 6 is a sample graph of the classification, positioning and identification effects of the external insulation equipment at the training rate of 0.005 according to embodiment 1 of the present disclosure.
Detailed Description
The present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
Example 1:
as shown in fig. 1, an embodiment 1 of the present disclosure provides an electrical device identification method, including the following processes:
acquiring an image to be identified;
obtaining a positioning identification result of the electric power external insulation equipment according to the obtained image and a preset convolutional neural network model;
the preset convolutional neural network model adopts a YOLO-V3 model, a standard convolutional structure in a YOLO-V3 model basic network Darknet-53 is replaced by a depth separable convolutional structure, and a full connection layer and a Softmax layer of the Darknet-53 are removed.
In this embodiment, in the YOLO-V3 model, cross entropy loss is used as a loss function, and logistic regression is used to perform target confidence calculation and category prediction.
In this embodiment, the depth separable convolution structure divides the convolution operation into a depth convolution and a point convolution: the depth convolution applies a different convolution kernel to each input channel, and the point convolution integrates the depth convolution output feature maps.
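As a concrete illustration of this replacement, the following is a minimal sketch of a depthwise separable convolution block, written in PyTorch purely for illustration (the patent does not publish code; the layer names, channel counts and activation choice here are assumptions):

```python
# Illustrative sketch (not the patent's actual code): a depthwise separable
# convolution block. The depthwise step convolves each input channel with its
# own kernel; the pointwise (1x1) step then mixes channels, integrating the
# depthwise output feature maps as described above.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # Depthwise: groups=in_channels gives one 3x3 kernel per input channel.
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   stride=stride, padding=1,
                                   groups=in_channels, bias=False)
        # Pointwise: 1x1 convolution integrates information across channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Example: replacing a standard 3x3 convolution with the separable block.
x = torch.randn(1, 64, 52, 52)
print(DepthwiseSeparableConv(64, 128)(x).shape)  # torch.Size([1, 128, 52, 52])
```

Replacing a standard 3 × 3 convolution with such a block reduces the number of multiply-accumulate operations roughly by the factor of the kernel area, which is what makes the modified Darknet-53 lighter for embedded deployment.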
In this embodiment, the YOLO-V3 model includes a feature fusion submodule and a channel attention submodule, where the feature fusion submodule obtains global spatial information of features by merging channel information; the attention module integrates the global information of each channel to generate a nonlinear relationship between the channels.
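The patent does not give layer definitions for these submodules; the following is a minimal sketch of one common way to realize channel attention (a squeeze-and-excitation style design, assumed here for illustration), in which global pooling collects the global spatial information of each channel and two fully connected layers model the nonlinear relationship between channels:

```python
# Illustrative sketch of an SE-style channel attention submodule (assumed
# design, not taken from the patent). Global average pooling summarizes each
# channel, two fully connected layers model the nonlinear inter-channel
# relationship, and the resulting weights recalibrate the feature map.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # global spatial info per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                      # per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                           # recalibrated features

x = torch.randn(2, 256, 26, 26)
print(ChannelAttention(256)(x).shape)  # torch.Size([2, 256, 26, 26])
```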
In this embodiment, the upsampling layer of the YOLO-V3 model organically combines the large-resolution feature map and the small-resolution feature map by using two upsampling processes.
In this embodiment, in the YOLO-V3 model, a K-means clustering method is used to train the bounding box.
In this embodiment, binary cross-entropy loss is used for class prediction in the YOLO-V3 model.
Specifically, the method comprises the following steps:
s1: basic structure of YOLO-V3 model
The YOLO-V3 model is a fully convolutional architecture of 106 layers, comprising conv layers, BN layers, shortcut layers, route layers, upsample layers and yolo layers.
The shortcut layer borrows the residual structure of ResNet; the route layer is a routing layer that indexes an earlier feature map; the upsample layer is a bilinear upsampling layer; and the yolo layer is the detection layer that parses the feature map into predictions.
Among the convolutional layers, 1 × 1 and 3 × 3 filters are mainly used: the 3 × 3 convolution reduces width and height while increasing the number of channels, and the 1 × 1 convolution compresses the feature representation. A more complex network architecture usually brings the twin problems of higher training difficulty and slower convergence. YOLO-V3 therefore uses shortcut layers on top of its deep structure to greatly reduce training difficulty and improve training accuracy. Cross-layer connections are realized through the route layer, promoting the fusion and joint learning of several different features. The upsampling layer uses two upsamplings to organically combine large-resolution and small-resolution feature maps, enhancing the identification of small objects. Finally, the coordinates and category of the predicted object are output through the yolo layer.
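For illustration only, the upsample-and-route pattern described above can be sketched as follows (tensor shapes are assumed example values, not taken from the patent):

```python
# Illustrative sketch of the upsample + route pattern: a small-resolution
# feature map is upsampled and concatenated with a larger-resolution map from
# an earlier layer, so small objects benefit from finer spatial detail.
import torch
import torch.nn.functional as F

coarse = torch.randn(1, 256, 13, 13)   # deep, small-resolution features
fine = torch.randn(1, 128, 26, 26)     # earlier, large-resolution features

up = F.interpolate(coarse, scale_factor=2, mode="bilinear",
                   align_corners=False)          # 13x13 -> 26x26
fused = torch.cat([up, fine], dim=1)              # route layer: channel concat
print(fused.shape)                                # torch.Size([1, 384, 26, 26])
```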
The YOLO-V3 algorithm has clear advantages in both the speed and the accuracy of target identification, which stem mainly from its network characteristics and how they are used. These identification features and methods are described further below:
(1) end-to-end training
End-to-end training is one of the important features that distinguishes YOLO-V3 from other methods: only the input and output ends are of concern. An image is input, the network is trained under the guidance of the loss function, and the prediction is output directly, so detection is completed end to end, which greatly increases speed.
(2) Dimension clustering
Conventional algorithms typically use manually selected prior boxes, but manual selection reduces accuracy. To obtain better priors, YOLO-V3 inherits the YOLO-V2 approach of computing anchor boxes and uses the K-means clustering method to train the bounding boxes. The IOU score is used as the final evaluation criterion, and 9 anchors are selected based on the average IOU to predict bounding boxes, improving precision. The distance function used for clustering is:
d(box, centroid) = 1 - IOU(box, centroid)
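A minimal sketch of this clustering step is shown below (illustrative only; the box sizes, iteration count and initialization are assumptions, and the patent does not publish code):

```python
# Illustrative sketch of anchor clustering with the d = 1 - IOU distance above.
# Boxes are (width, height) pairs; IOU between a box and a centroid is computed
# as if both were anchored at the same corner.
import numpy as np

def iou_wh(boxes, centroids):
    # boxes: (N, 2), centroids: (K, 2); returns an (N, K) IOU matrix
    inter = np.minimum(boxes[:, None, 0], centroids[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], centroids[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + \
            centroids[None, :, 0] * centroids[None, :, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(1.0 - iou_wh(boxes, centroids), axis=1)  # d = 1 - IOU
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else centroids[i] for i in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids

# Example: cluster 9 anchors from synthetic ground-truth box sizes.
wh = np.abs(np.random.default_rng(1).normal(100, 40, size=(500, 2)))
print(kmeans_anchors(wh, k=9).round(1))
```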
(3) Calculation of bounding boxes
The detection performance of the traditional bounding box prediction method needs improvement, so the K-means dimension clustering method is used to predict the bounding boxes; Fig. 2 is a schematic diagram of bounding box prediction.
When an image is input, the network first determines the target center point, then divides the input image into equally sized grid cells and computes the coordinates of the cell containing the center point. The predicted bounding box is then calculated from the cell coordinates and the predicted offsets of the center point. The coordinate formulas are shown in formula (1):

b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
b_w = p_w · e^(t_w)
b_h = p_h · e^(t_h)        (1)

where (t_x, t_y, t_w, t_h) are the coordinates predicted by the network for the bounding box center and size, (p_w, p_h) are the width and height of the prior (anchor) box, (c_x, c_y) are the offsets of the grid cell from the top-left corner of the image, and σ is the sigmoid function.
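For illustration, decoding a raw prediction with these formulas can be sketched as follows (the cell offsets and prior sizes are made-up example values):

```python
# Illustrative sketch of decoding raw network outputs (t_x, t_y, t_w, t_h)
# into a bounding box using formula (1).
import math

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    bx = sigmoid(tx) + cx          # center x, in grid-cell units
    by = sigmoid(ty) + cy          # center y, in grid-cell units
    bw = pw * math.exp(tw)         # width, scaled from the prior box
    bh = ph * math.exp(th)         # height, scaled from the prior box
    return bx, by, bw, bh

# Example: a prediction in the cell at (c_x, c_y) = (6, 4) with a 3x5 prior.
print(decode_box(0.2, -0.1, 0.3, 0.5, 6, 4, 3.0, 5.0))
```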
(4) Confidence calculation
Two factors determine the confidence. The first is whether the region contains a prediction target: if the target to be detected is present, it is set to 1, otherwise 0. The second is the IOU between the predicted box and the ground truth. The confidence prediction is shown in equation (2):

Confidence = Pr(object) × IOU(pred, truth)        (2)
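A small illustrative sketch of this computation (the box format and values are assumptions) follows:

```python
# Illustrative sketch of the confidence computation in equation (2):
# Pr(object) is 1 if the cell is responsible for a ground-truth object,
# 0 otherwise, and is multiplied by the IOU between the predicted and the
# ground-truth box (boxes given as x1, y1, x2, y2).
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def confidence(has_object, pred_box, truth_box):
    return (1.0 if has_object else 0.0) * iou(pred_box, truth_box)

print(confidence(True, (10, 10, 50, 60), (12, 8, 48, 58)))  # ~0.83
```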
(5) category prediction
Since a prediction box may contain multiple categories, the softmax function would force each prediction box into a single category. To handle possibly overlapping labels, binary cross-entropy loss is therefore used for class prediction instead of a softmax output. The cross-entropy formula is shown in formula (3):

C = -(1/n) Σ_x [ y·ln(ŷ) + (1 - y)·ln(1 - ŷ) ]        (3)

where C is the value of the cross-entropy loss, n is the number of training samples, x ranges over the inputs, y is the actual output value, and ŷ is the network prediction.
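For illustration, the per-class binary cross-entropy can be sketched as follows (the class names and probabilities are made-up examples):

```python
# Illustrative sketch of binary cross-entropy over the class outputs, as in
# formula (3); each class is treated as an independent yes/no prediction,
# which allows overlapping labels.
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1.0 - eps)   # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) +
                    (1.0 - y_true) * np.log(1.0 - y_pred))

# Example: three classes (e.g. suspension insulator, post insulator, bushing).
y_true = np.array([1.0, 0.0, 0.0])
y_pred = np.array([0.9, 0.2, 0.05])
print(binary_cross_entropy(y_true, y_pred))  # small loss for a good prediction
```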
S2: YOLO-V3 algorithm implementation and result analysis
This embodiment focuses on the algorithm implementation, training process, test results and detailed analysis of external insulation equipment positioning and identification based on YOLO-V3. The constructed sample library is split into training and test sets at a ratio of 9:1 for training and testing.
Training and learning are carried out with the YOLO-V3 algorithm, and an external insulation equipment positioning and recognition model is finally established.
When an image is input, it is first resized and features are extracted in the feature extraction layers. The output stage then follows: the first output, at layer 82, has size 13 × 13 × 24. Convolution, upsampling and feature fusion are performed a second time, and the output at layer 94 has size 26 × 26 × 24. Similarly, the output at layer 106 has size 52 × 52 × 24. In this algorithm, tensor denotes the size of the network output tensor, as shown in equation (4):

tensor = N × N × [(bounding box) × (offset + object + class)]        (4)

where N is the grid size of the feature map, bounding box is 3 (anchors per scale), offset is 4, object is 1, and class is 3, the number of output classes, so each grid cell outputs 3 × (4 + 1 + 3) = 24 values.
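The channel count can be checked with a short illustrative snippet (the function and parameter names are ours, not the patent's):

```python
# Illustrative check of equation (4): each detection scale outputs
# N x N x [anchors * (offset + objectness + classes)] values.
def output_tensor_shape(grid_size, anchors=3, offset=4, objectness=1, classes=3):
    return (grid_size, grid_size, anchors * (offset + objectness + classes))

for n in (13, 26, 52):
    print(output_tensor_shape(n))  # (13, 13, 24), (26, 26, 24), (52, 52, 24)
```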
To find the optimal model for positioning and identifying the external insulation equipment, the base learning rate (training rate) of model training is varied to optimize training. During training it is set to five different values: 0.0001, 0.0005, 0.001, 0.005 and 0.01.
As can be seen from Fig. 3, the accuracy of YOLO-V3 is directly affected by the training rate. With a training rate of 0.005, the model converges quickly and reaches an accuracy of 99.73%. As the training rate decreases, convergence slows, the model becomes less stable and the accuracy curve begins to oscillate; at 0.0001 the model fails to converge and the accuracy stays at about 60%. Comparing the five learning rates, the training effect is best when the training rate is about 0.005. This does not mean, however, that simply increasing the training rate improves training: experiments show that when the training rate is raised to 0.01 or above, the value is too large for the model to converge and training fails. In summary, a higher learning rate means larger steps in each weight update, so the model may need less time to converge to the optimal weights; but if the learning rate is too high, the training steps overshoot and the optimum cannot be reached.
In this embodiment, YOLO-V3 uses cross-entropy loss as the loss function of the external insulation equipment classification and identification model, and logistic regression is used for target confidence and class prediction. As shown in Fig. 4, with a suitable training rate of 0.005 the loss curve converges rapidly and stabilizes, whereas a lower training rate makes the loss curve converge inefficiently, giving a slower convergence rate and lower model accuracy.
S3: application testing and effect comparison
The field detection performance of the model at different training rates is further illustrated using suspension-type external insulation equipment as an example. As shown in Fig. 5, at training rates of 0.0001 and 0.01 the target external insulation devices cannot be detected because model training fails. At a training rate of 0.0005 the model identifies only three external insulation devices in the target image, with a recognition confidence of 76.73%, so the recognition effect is poor. At a training rate of 0.001, one external insulation device that is too close to the railing, where background interference is large, remains undetected, so the recognition effect still needs improvement. At a training rate of 0.005, all the suspension external insulation devices in the image are successfully detected with an average recognition confidence above 88.25%, giving the best recognition effect.
Fig. 6 shows sample classification, positioning and recognition results for the external insulation equipment under the optimal training model. The figure shows that the YOLO-V3 model performs well in identifying and detecting external insulation equipment: the established classification and identification model detects all three types (suspension insulators, post insulators and insulating bushings) with recognition confidences above 88%, demonstrating practical industrial application value.
Example 2:
an embodiment 2 of the present disclosure provides an electrical device identification system, including:
a data acquisition module configured to: acquiring an image to be identified;
a positioning module configured to: obtaining a positioning result of the electric power external insulation equipment according to the obtained image and a preset convolutional neural network model;
the preset convolutional neural network model adopts a YOLO-V3 model, a standard convolutional structure in a YOLO-V3 model basic network Darknet-53 is replaced by a depth separable convolutional structure, and a full connection layer and a Softmax layer of the Darknet-53 are removed.
Example 3:
the embodiment 3 of the present disclosure provides a computer-readable storage medium on which a program is stored, which when executed by a processor, implements the steps in the power device identification method according to the embodiment 1 of the present disclosure.
Example 4:
the embodiment 4 of the present disclosure provides an electronic device, which includes a memory, a processor, and a program stored in the memory and executable on the processor, and when the processor executes the program, the steps in the power device identification method according to embodiment 1 of the present disclosure are implemented.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (10)

1. An electric power equipment identification method is characterized in that: the method comprises the following steps:
acquiring an image to be identified;
obtaining a positioning identification result of the electric power external insulation equipment according to the obtained image and a preset convolutional neural network model;
the preset convolutional neural network model adopts a YOLO-V3 model, a standard convolutional structure in a YOLO-V3 model basic network Darknet-53 is replaced by a depth separable convolutional structure, and a full connection layer and a Softmax layer of the Darknet-53 are removed.
2. The electrical equipment identification method according to claim 1, characterized in that:
in the YOLO-V3 model, cross entropy loss is used as a loss function, and logistic regression is used for target confidence calculation and category prediction.
3. The electrical equipment identification method according to claim 1, characterized in that:
the depth separable convolution structure divides convolution operation into depth convolution and point convolution, the depth convolution adopts different convolution kernels to carry out convolution on different input channels, and integration of a depth convolution output characteristic diagram is completed through the point convolution.
4. The electrical equipment identification method according to claim 1, characterized in that:
the YOLO-V3 model comprises a feature fusion submodule and a channel attention submodule, wherein the feature fusion submodule acquires global spatial information of features by combining channel information; the attention module integrates the global information of each channel to generate a nonlinear relationship between the channels.
5. The electrical equipment identification method according to claim 1, characterized in that:
the upsampling layer of the YOLO-V3 model organically combines the large resolution feature map with the small resolution feature map using two upsamplings.
6. The electrical equipment identification method according to claim 1, characterized in that:
in the YOLO-V3 model, a K-means clustering method is adopted to train a bounding box.
7. The electrical equipment identification method according to claim 1, characterized in that:
in the YOLO-V3 model, binary cross-entropy loss is adopted for class prediction.
8. An electrical equipment identification system, characterized by: the method comprises the following steps:
a data acquisition module configured to: acquiring an image to be identified;
a positioning module configured to: obtaining a positioning result of the electric power external insulation equipment according to the obtained image and a preset convolutional neural network model;
the preset convolutional neural network model adopts a YOLO-V3 model, a standard convolutional structure in a YOLO-V3 model basic network Darknet-53 is replaced by a depth separable convolutional structure, and a full connection layer and a Softmax layer of the Darknet-53 are removed.
9. A computer-readable storage medium, on which a program is stored, which, when being executed by a processor, carries out the steps of the electrical device identification method according to any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor implements the steps of the power device identification method according to any one of claims 1-7 when executing the program.
CN202110743981.9A 2021-06-30 2021-06-30 Power equipment identification method, system, medium and electronic equipment Pending CN113343918A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110743981.9A CN113343918A (en) 2021-06-30 2021-06-30 Power equipment identification method, system, medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110743981.9A CN113343918A (en) 2021-06-30 2021-06-30 Power equipment identification method, system, medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN113343918A true CN113343918A (en) 2021-09-03

Family

ID=77482222

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110743981.9A Pending CN113343918A (en) 2021-06-30 2021-06-30 Power equipment identification method, system, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113343918A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113884827A (en) * 2021-09-28 2022-01-04 华北电力大学(保定) Insulator ultraviolet fault diagnosis method and device based on YOLO

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107480730A (en) * 2017-09-05 2017-12-15 广州供电局有限公司 Power equipment identification model construction method and system, the recognition methods of power equipment
CN110188720A (en) * 2019-06-05 2019-08-30 上海云绅智能科技有限公司 A kind of object detection method and system based on convolutional neural networks
CN110992307A (en) * 2019-11-04 2020-04-10 华北电力大学(保定) Insulator positioning and identifying method and device based on YOLO

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107480730A (en) * 2017-09-05 2017-12-15 广州供电局有限公司 Power equipment identification model construction method and system, the recognition methods of power equipment
CN110188720A (en) * 2019-06-05 2019-08-30 上海云绅智能科技有限公司 A kind of object detection method and system based on convolutional neural networks
CN110992307A (en) * 2019-11-04 2020-04-10 华北电力大学(保定) Insulator positioning and identifying method and device based on YOLO

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张陶宁: "Research on a fast object detection algorithm based on an improved YOLOv3 model", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113884827A (en) * 2021-09-28 2022-01-04 华北电力大学(保定) Insulator ultraviolet fault diagnosis method and device based on YOLO

Similar Documents

Publication Publication Date Title
CN110070141B (en) Network intrusion detection method
CN106570453B (en) Method, device and system for pedestrian detection
CN108830285B (en) Target detection method for reinforcement learning based on fast-RCNN
CN111428733B (en) Zero sample target detection method and system based on semantic feature space conversion
CN108710913A (en) A kind of switchgear presentation switch state automatic identification method based on deep learning
CN109977997B (en) Image target detection and segmentation method based on convolutional neural network rapid robustness
CN109858547A (en) A kind of object detection method and device based on BSSD
CN111091101B (en) High-precision pedestrian detection method, system and device based on one-step method
CN115731164A (en) Insulator defect detection method based on improved YOLOv7
WO2019026104A1 (en) Information processing device, information processing program, and information processing method
CN113221787A (en) Pedestrian multi-target tracking method based on multivariate difference fusion
CN111368634B (en) Human head detection method, system and storage medium based on neural network
CN115272652A (en) Dense object image detection method based on multiple regression and adaptive focus loss
CN110969200A (en) Image target detection model training method and device based on consistency negative sample
CN114565842A (en) Unmanned aerial vehicle real-time target detection method and system based on Nvidia Jetson embedded hardware
CN111444865A (en) Multi-scale target detection method based on gradual refinement
CN115147745A (en) Small target detection method based on urban unmanned aerial vehicle image
CN110008899A (en) A kind of visible remote sensing image candidate target extracts and classification method
WO2018142816A1 (en) Assistance device and assistance method
CN116912796A (en) Novel dynamic cascade YOLOv 8-based automatic driving target identification method and device
CN114299036B (en) Electronic element detection method and device, storage medium and electronic equipment
US20230005237A1 (en) Parameter determination apparatus, parameter determination method, and non-transitory computer readable medium
Fu et al. A case study of utilizing YOLOT based quantitative detection algorithm for marine benthos
CN113343918A (en) Power equipment identification method, system, medium and electronic equipment
JP2014010633A (en) Image recognition device, image recognition method, and image recognition program

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210903)