CN115841608A - Multi-chamber lightning arrester identification method based on improved YOLOX - Google Patents


Publication number
CN115841608A
CN115841608A
Authority
CN
China
Prior art keywords
loss
yolox
features
improved
cls
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211364891.XA
Other languages
Chinese (zh)
Inventor
刚永明
田大鹏
庞伟生
马明忠
王东
刘权琦
赵中奇
李小晖
马立群
张海玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hongquan Technology Qinghai Co ltd
Haibei Power Supply Company State Grid Qinghai Electric Power Co ltd
State Grid Corp of China SGCC
State Grid Qinghai Electric Power Co Ltd
Original Assignee
Hongquan Technology Qinghai Co ltd
Haibei Power Supply Company State Grid Qinghai Electric Power Co ltd
State Grid Corp of China SGCC
State Grid Qinghai Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hongquan Technology Qinghai Co ltd, Haibei Power Supply Company State Grid Qinghai Electric Power Co ltd, State Grid Corp of China SGCC, State Grid Qinghai Electric Power Co Ltd filed Critical Hongquan Technology Qinghai Co ltd
Priority to CN202211364891.XA
Publication of CN115841608A
Legal status: Pending


Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of multi-chamber lightning arrester identification, and in particular to a multi-chamber lightning arrester identification method based on improved YOLOX, comprising the following steps: S1, annotating the multi-chamber lightning arresters in images with the Labelme tool; S2, improving the YOLOX algorithm: replacing DarkNet53 with a backbone network having a larger receptive field, using depthwise separable convolution to keep the model fast, applying a channel shuffle operation in the spatial pyramid pooling module to increase information interaction between features and strengthen feature fusion, and introducing rotated-box detection to reduce background interference in the recognition result; S3, detecting rotated targets with the YOLOX object detection algorithm. The invention improves the detection accuracy of multi-chamber lightning arrester identification in power systems, strengthens the ability to learn multi-chamber arrester features, and reduces background interference in the recognition result by adding rotated-box detection.

Description

Multi-chamber lightning arrester identification method based on improved YOLOX
Technical Field
The invention relates to the technical field of multi-chamber lightning arrester identification, in particular to a multi-chamber lightning arrester identification method based on improved YOLOX.
Background
In a power system, the safety of transmission lines is the primary prerequisite for normal grid operation, so the grid must be patrolled regularly. For safety and stability, transmission lines are generally erected at height on towers. The multi-chamber arrester is connected between the line and ground; it protects electrical equipment from high transient overvoltages and limits their duration. In the daily operation of a power system, arresters must be inspected and maintained regularly to ensure they work properly and to avoid power line faults. The conventional manual inspection mode requires a large amount of manpower and material resources and is inefficient. At present, unmanned aerial vehicles are used in place of manual operation, which simplifies the workflow and reduces the risk of work at height; an operation-and-maintenance mode of "machine patrol as primary, human patrol as auxiliary" ("machine patrol + human patrol") has basically taken shape, and the operation and maintenance level of transmission lines continues to improve.
At present, the mainstream arrester identification approach is vision-based online detection by unmanned aerial vehicle; the methods adopted mainly include the spark-gap method, thermal infrared imaging, the small-ball discharge method, leakage-current detection, laser Doppler vibrometry, and the like. With computer-based data analysis and processing, the efficiency of arrester identification can be greatly improved, and in many countries such methods are gradually replacing traditional ground-based manual inspection.
In practical applications, identification is considerably difficult owing to the complex line backgrounds, varied installation positions, and the many types and large numbers of multi-chamber arresters. In addition, identification based on hand-crafted features (e.g., SIFT, HOG, LBP) requires extensive manual feature design; features differ between research objects and are highly diverse. Manually selecting features is time-consuming and labor-intensive, requires heuristic expertise, and depends to a large extent on experience and luck. Existing models such as SVM, Boosting, and LR belong to shallow learning: with limited samples and computing units, their ability to represent complex functions is limited, their generalization on complex classification problems is limited, and the training of some models, such as the back-propagation (BP) artificial neural network, easily falls into local minima.
Therefore, the invention provides a multi-chamber lightning arrester identification method based on improved YOLOX, which can provide important data support for lightning arrester detection.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a multi-chamber lightning arrester identification method based on improved YOLOX.
A multi-chamber arrester identification method based on improved YOLOX, comprising the steps of:
S1, marking the multi-chamber lightning arrester in the image with the Labelme tool;
S2, improving the YOLOX algorithm: replacing DarkNet53 with a backbone network having a larger receptive field, using depthwise separable convolution to keep the model fast, applying a channel shuffle operation in the spatial pyramid pooling module to increase information interaction between features and strengthen feature fusion, and introducing rotated-box detection to reduce background interference in the recognition result;
S3, detecting the rotated target with the YOLOX object detection algorithm, wherein the specific improvements are as follows:
S3-1: detecting a horizontal rectangular box requires the x, y, w and h of the box, which suffice to represent a horizontal rectangle; a rotated rectangle additionally requires its rotation angle θ, so a branch must be added to the output of the network's detection head to obtain θ. The box angle is classified using the circular smooth label (CSL) encoding, whose expression is as follows:
CSL(x) = g(x), when θ − r < x < θ + r; CSL(x) = 0, otherwise;
in the formula, θ denotes the rotation angle of the current ground-truth box, r denotes the window radius (default value 6), and g(x) denotes the window function, expressed as follows:
g(x) = exp(−(x − μ)² / (2δ²));
and the window function satisfies periodicity and symmetry, and the corresponding expressions are respectively and sequentially as follows:
g(x)=g(x+KT),K∈N,T=180/ω,
0≤g(θ+ε)=g(θ-ε)≤1,|ε|<r;
where the mean μ = 0, the variance δ = 4, and N is the set of natural numbers;
S3-2: there are two places in YOLOX where a loss function must be computed. The first loss builds the cost matrix for label assignment and is used only to screen positive and negative samples; the loss function is as follows:
Cost = L_cls + λL_reg + L_1,
L_1 = 0, when the grid center lies within the ground-truth box or within 2.5 grids of its center; L_1 = 10^5, otherwise;
in the formula, the classification loss L_cls uses a binary cross-entropy loss function and the regression loss L_reg uses an IoU loss function; λ controls the ratio of the two loss weights and defaults to 3. The term L_1 is equivalent to adding prior information, increasing the matching degree of high-quality priors;
the second calculated loss is used to optimize the model, and its loss function is as follows:
Loss = L_cls + λL_reg + L_conf;
in the formula, the classification loss L_cls, the confidence loss L_conf and the regression loss L_reg are included; the classification loss and the confidence loss adopt a binary cross-entropy loss function, and λ controls the regression loss weight, defaulting to 5; at the same time, the ground-truth label of the confidence branch is exchanged with the classification ground-truth label. Because the improved algorithm's output has an additional rotation-angle classification branch, the loss functions in YOLOX must be modified:
the improved cost function during label distribution is as follows:
Cost=L cls +λL reg +L 1 +L θ-cls
the theta-cls is used for calculating angle classification loss, and the loss of the angle is calculated in a sigmoid combined binary cross entropy mode;
the loss function improvement during model training is as follows:
Loss = L_cls + λL_reg + L_conf + L_θ-cls,
that is, the optimized loss adds the angle classification loss to the original loss, with the angle loss calculated by sigmoid combined with binary cross-entropy.
As a preferred scheme of the present invention, the labeling of the experimental data in S1 is divided into two modes: rectangular-box labeling and rotated-rectangular-box labeling.
As a preferred embodiment of the present invention, the step S2 includes the following substeps:
S2-1: in order to improve the model's feature-learning ability, the original backbone network in YOLOX is improved using the basic unit and the down-sampling unit of the ConvNeXt module, which has a larger receptive field, as follows:
first, features are extracted from the input through a 7 × 7 depthwise separable convolution (the larger the convolution kernel, the wider the learned feature region); then two ordinary 1 × 1 convolutions raise and lower the feature dimensions to obtain more abstract semantic information; finally, this semantic information is added to and fused with the features of the residual mapping branch;
the down-sampling operation reduces the resolution of the input features and the computational complexity of the model: one 3 × 3 depthwise separable convolution is added to the residual mapping, and the stride of the 7 × 7 depthwise separable convolution is set to 2, halving the feature resolution and doubling the number of channels;
S2-2: the ConvNeXt modules are used to improve the backbone structure of the feature extraction network; the network input size is 640 × 640 × 3, and at the first feature level 1 basic unit and 1 down-sampling unit learn shallow features, halving the feature resolution and doubling the channel dimension;
S2-3: first, 4 groups of max-pooling operations of different sizes are applied to the input features, yielding 4 groups of compressed feature vectors; to reduce the module's parameter count, and to keep each group's feature dimension consistent with the input when concatenating, a 1 × 1 convolution compresses each group's channel dimension by a factor of 4; up-sampling then restores the feature resolution, the 4 groups are concatenated along the channel dimension, and the result is added to and fused with the original input features; finally, to increase information flow between features, a channel shuffle operation reorders the final result along the channel dimension.
In a preferred embodiment of the present invention, the rotation angle θ in S3 satisfies 0° ≤ θ < 180°.
Compared with the prior art, the invention has the beneficial effects that:
the method combines a YOLOX target detection algorithm with a rotating frame detection algorithm, improves the detection accuracy of multi-chamber lightning arrester identification in the power system, replaces a backbone network of the YOLOX with ConvNext with a larger receptive field, and improves the capability of learning the characteristics of the multi-chamber lightning arrester; and enhancing feature fusion by using channel out-of-order operation in a spatial pyramid pooling module, and adding a rotating frame detection idea to reduce background interference in the identification result.
Drawings
Fig. 1 is a schematic flow chart of a method for identifying a multi-chamber lightning arrester based on improved YOLOX according to the present invention;
fig. 2 is a schematic diagram of two labeling ways of a multi-chamber lightning arrester identification method based on improved YOLOX provided by the invention;
fig. 3 is a schematic diagram of the basic unit and the down-sampling unit of the ConvNeXt module used to improve the original backbone network in YOLOX, for the multi-chamber arrester identification method based on improved YOLOX provided by the present invention;
fig. 4 is a schematic structural diagram of the improved SPP of the multi-chamber arrester identification method based on improved YOLOX;
fig. 5 is a CSL coded tag score diagram of a multi-chamber lightning arrester identification method based on improved YOLOX according to the present invention.
Detailed Description
The present invention will be further illustrated with reference to the following specific examples.
Referring to fig. 1-5, a multi-chamber arrester identification method based on improved YOLOX includes the following steps:
S1, marking the multi-chamber lightning arrester in the image with the Labelme tool;
the aerial photography image comprises 2000 training sets and 500 test sets, and for an inclined target, if a horizontal rectangular frame is used for detecting the inclined target, the horizontal rectangular frame contains a large amount of background information, so that the experimental data marking is divided into two modes of rectangular frame marking and rotating rectangular frame marking, and the effect is shown in fig. 2, wherein fig. 2-a is a rectangular frame marking result and contains more backgrounds, and fig. 2-b is a rotating frame marking result;
in view of the fact that image data is difficult to obtain, labeling work is complex and time-consuming, and meanwhile, the neural network model needs more data to be fitted in the process of learning target features, data enhancement processing is conducted on a training set; the multi-chamber lightning arrester has direction characteristic invariance in spatial position, scale changeability in shape and sheltered phenomenon in shooting angle; the training data is expanded to 3000 pieces by adopting a data enhancement method of mirror image enhancement, multi-scale scaling and random erasure;
S2, improving the YOLOX algorithm: replacing DarkNet53 with a backbone network having a larger receptive field, using depthwise separable convolution to keep the model fast, applying a channel shuffle operation in the spatial pyramid pooling module to increase information interaction between features and strengthen feature fusion, and introducing rotated-box detection to reduce background interference in the recognition result, wherein this step comprises the following substeps:
S2-1: to improve the model's feature-learning capability, the original backbone network in YOLOX is improved using the Basic Unit (BU) and the Down-Sampling Unit (DSU) of the ConvNeXt module, which has a larger receptive field; the structure is shown in fig. 3, specifically as follows:
fig. 3-a shows the BU module: the input features are first extracted by a 7 × 7 depthwise separable convolution (the larger the convolution kernel, the wider the learned feature region); then two ordinary 1 × 1 convolutions raise and lower the feature dimensions to obtain more abstract semantic information; finally, the extracted semantic information is added to and fused with the features of the residual mapping branch;
fig. 3-b shows the DSU module, which reduces the input feature resolution while reducing model computation through a down-sampling operation: one 3 × 3 depthwise separable convolution is added to the residual mapping, and the stride of the 7 × 7 depthwise separable convolution is set to 2, halving the feature resolution and doubling the number of channels;
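The speed benefit of the depthwise separable convolutions used in the BU and DSU modules can be checked with a parameter-count comparison (bias terms ignored; the channel width 128 is an arbitrary example, not a value from the patent):

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k (one filter per input channel) followed by a
    pointwise 1 x 1 convolution."""
    return c_in * k * k + c_in * c_out

standard = conv_params(128, 128, 7)                  # 802816
separable = depthwise_separable_params(128, 128, 7)  # 22656
ratio = standard / separable                         # roughly 35x fewer parameters
```

This is why the large 7 × 7 kernel can be afforded: the depthwise factorization keeps the parameter and FLOP cost close to that of a small ordinary convolution.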
S2-2: the structure of the backbone network after improvement with the ConvNeXt modules is shown in the following table. The network input size is 640 × 640 × 3. At the first feature level, 1 BU module and 1 DSU module learn shallow features, halving the feature resolution and doubling the channel dimension; similarly, the second level uses 2 BU modules and 1 DSU module, the third level uses 5 BU modules and 1 DSU module, the fourth level uses 2 BU modules and 1 DSU module, and the fifth level uses 1 BU module and 1 DSU module. Finally, the improved SPP module acquires multi-scale features;
(Table: structure of the improved backbone network; rendered as images in the original publication and not recoverable here.)
S2-3: the SPP module compresses features to a fixed size with multiple groups of max-pooling operations, removing the model's fixed input-size constraint and introducing multi-scale features. The invention adapts the SPP as shown in fig. 4: first, 4 groups of max-pooling operations of different sizes are applied to the input features, yielding 4 groups of compressed feature vectors; to reduce the module's parameter count, and to keep each group's feature dimension consistent with the input when concatenating, a 1 × 1 convolution compresses each group's channel dimension by a factor of 4; up-sampling then restores the feature resolution, the 4 groups are concatenated along the channel dimension, and the result is added to and fused with the original input features; finally, to increase information flow between features, a channel shuffle operation reorders the final result along the channel dimension;
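The channel shuffle at the end of the improved SPP can be sketched as the ShuffleNet-style reshape–transpose–reshape; the group count is an assumption, since the patent does not state it:

```python
import numpy as np

def channel_shuffle(x, groups):
    """Shuffle channels of an (N, C, H, W) tensor: split C into groups,
    swap the group and per-group axes, and flatten back, so that
    information from different channel groups is interleaved."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must divide evenly into groups"
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))
```

For 8 channels in 4 groups, channels [0..7] come out as [0, 2, 4, 6, 1, 3, 5, 7], so every output neighbourhood mixes features that originated in different groups.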
S3, detecting the rotated target with the YOLOX object detection algorithm, wherein the specific improvements are as follows:
S3-1: detecting a horizontal rectangular box requires the x, y, w and h of the box, which suffice to represent a horizontal rectangle; a rotated rectangle additionally requires its rotation angle θ, so a branch must be added to the output of the network's detection head to obtain θ. The rotated rectangle is defined by the long-edge representation, i.e. the format (x, y, w, h, θ) with 0° ≤ θ < 180°. The angle θ is obtained by a classification task rather than a regression task, because the periodicity of the angle interferes with regression results; for the same reason, the box angle cannot be classified with simple one-hot encoding, and the encoding of the circular smooth label (CSL) is adopted instead, whose expression is as follows:
CSL(x) = g(x), when θ − r < x < θ + r; CSL(x) = 0, otherwise;
in the formula, θ denotes the rotation angle of the current ground-truth box, r denotes the window radius (default value 6), and g(x) denotes the window function, expressed as follows:
g(x) = exp(−(x − μ)² / (2δ²));
and the window function satisfies periodicity and symmetry, and the corresponding expressions are respectively shown as follows in sequence:
g(x)=g(x+KT),K∈N,T=180/ω,
0≤g(θ+ε)=g(θ-ε)≤1,|ε|<r;
in the formula, the mean μ = 0, the variance δ = 4, and N is the set of natural numbers; ε also denotes an angle, and the expression 0 ≤ g(θ + ε) = g(θ − ε) ≤ 1 for |ε| < r indicates that the window function satisfies symmetry;
when the true θ is 0° or 90°, the corresponding CSL-encoded label scores are shown in the left and right panels of fig. 5, where the horizontal axis is the angle value and the vertical axis is the encoded label score;
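The CSL encoding above can be sketched as follows; the Gaussian window form and the circular wrap follow the CSL formulation, with the stated defaults r = 6 and δ = 4 and an assumed one bin per degree over [0°, 180°):

```python
import numpy as np

def csl_label(theta, r=6, delta=4, num_bins=180):
    """Circular Smooth Label for a ground-truth angle theta (degrees).
    Bins within the window radius r get a Gaussian score; the distance
    is circular, so labels near 0 and 180 degrees wrap around."""
    x = np.arange(num_bins)
    d = np.minimum(np.abs(x - theta), num_bins - np.abs(x - theta))
    label = np.exp(-d.astype(float) ** 2 / (2.0 * delta ** 2))
    label[d >= r] = 0.0
    return label
```

Unlike one-hot encoding, `csl_label(0)` gives a near-one score to the 179° bin, reflecting that 0° and 179° are almost the same orientation; this matches the label-score curves of fig. 5.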
S3-2: there are two places in YOLOX where a loss function must be computed. The first loss builds the cost matrix for label assignment and is used only to screen positive and negative samples; the loss function is as follows:
Cost = L_cls + λL_reg + L_1,
L_1 = 0, when the grid center lies within the ground-truth box or within 2.5 grids of its center; L_1 = 10^5, otherwise;
in the formula, the classification loss L_cls uses a binary cross-entropy loss function and the regression loss L_reg uses an IoU loss function; λ controls the ratio of the two loss weights and defaults to 3. The term L_1 is equivalent to adding prior information, increasing the matching degree of high-quality priors (the grid center lies within the ground-truth box or within 2.5 grids of the ground-truth box's center);
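The first-stage cost used to screen positive samples can be sketched as follows. The −log(IoU) regression loss and the 10^5 penalty for predictions outside the centre prior follow the public YOLOX/SimOTA implementation and are assumptions here, since the patent's L_1 formula is given only as an image:

```python
import numpy as np

def assignment_cost(cls_loss, iou, in_center_prior, lam=3.0, big=1e5):
    """Cost = L_cls + lam * L_reg + L_1, with lam defaulting to 3 as in
    the text. Predictions whose grid centre fails the centre prior get
    a huge additive penalty, so they are never chosen as positives."""
    reg_loss = -np.log(iou + 1e-8)            # IoU loss in -log form
    prior_penalty = big * (~in_center_prior)  # 0 inside prior, big outside
    return cls_loss + lam * reg_loss + prior_penalty
```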
the second calculated loss is used to optimize the model, and its loss function is as follows:
Loss = L_cls + λL_reg + L_conf;
in the formula, the classification loss L_cls, the confidence loss L_conf and the regression loss L_reg are included; the classification loss and the confidence loss adopt a binary cross-entropy loss function, and λ controls the regression loss weight, defaulting to 5; at the same time, the ground-truth label of the confidence branch is exchanged with the classification ground-truth label. Because the improved algorithm's output has an additional rotation-angle classification branch, the loss functions in YOLOX must be modified:
the improved cost function in label assignment is as follows:
Cost=L cls +λL reg +L 1 +L θ-cls
the theta-cls is used for calculating angle classification loss, and the loss of the angle is calculated in a sigmoid and binary cross entropy combined mode;
the loss function improvement during model training is as follows:
Loss = L_cls + λL_reg + L_conf + L_θ-cls,
that is, the optimized loss adds the angle classification loss to the original loss, with the angle loss calculated by sigmoid combined with binary cross-entropy.
The above is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto; any equivalent substitution or modification of the technical solutions and inventive concepts described herein that a person skilled in the art could conceive within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention.

Claims (4)

1. A multi-chamber arrester identification method based on improved YOLOX, characterized by comprising the following steps:
S1, marking the multi-chamber lightning arrester in the image with the Labelme tool;
S2, improving the YOLOX algorithm: replacing DarkNet53 with a backbone network having a larger receptive field, using depthwise separable convolution to keep the model fast, applying a channel shuffle operation in the spatial pyramid pooling module to increase information interaction between features and strengthen feature fusion, and introducing rotated-box detection to reduce background interference in the recognition result;
S3, detecting the rotated target with the YOLOX object detection algorithm, wherein the specific improvements are as follows:
S3-1: detecting a horizontal rectangular box requires the x, y, w and h of the box, which suffice to represent a horizontal rectangle; a rotated rectangle additionally requires its rotation angle θ, so a branch must be added to the output of the network's detection head to obtain θ; the box angle is classified using the circular smooth label CSL encoding, whose expression is as follows:
CSL(x) = g(x), when θ − r < x < θ + r; CSL(x) = 0, otherwise;
in the formula, θ denotes the rotation angle of the current ground-truth box, r denotes the window radius (default value 6), and g(x) denotes the window function, expressed as follows:
g(x) = exp(−(x − μ)² / (2δ²));
and the window function satisfies periodicity and symmetry, and the corresponding expressions are respectively and sequentially as follows:
g(x)=g(x+KT),K∈N,T=180/ω,
0≤g(θ+ε)=g(θ-ε)≤1,|ε|<r;
where the mean μ = 0, the variance δ = 4, and N is the set of natural numbers;
S3-2: there are two places in YOLOX where a loss function must be computed; the first loss builds the cost matrix for label assignment and is used only to screen positive and negative samples, the loss function being as follows:
Cost = L_cls + λL_reg + L_1,
L_1 = 0, when the grid center lies within the ground-truth box or within 2.5 grids of its center; L_1 = 10^5, otherwise;
in the formula, the classification loss L_cls uses a binary cross-entropy loss function and the regression loss L_reg uses an IoU loss function; λ controls the ratio of the two loss weights and defaults to 3; the term L_1 is equivalent to adding prior information, increasing the matching degree of high-quality priors;
the second calculated loss is used to optimize the model, and its loss function is as follows:
Loss = L_cls + λL_reg + L_conf;
in the formula, the classification loss L_cls, the confidence loss L_conf and the regression loss L_reg are included; the classification loss and the confidence loss adopt a binary cross-entropy loss function, and λ controls the regression loss weight, defaulting to 5; at the same time, the ground-truth label of the confidence branch is exchanged with the classification ground-truth label; because the improved algorithm's output has an additional rotation-angle classification branch, the loss functions in YOLOX need to be improved:
the improved cost function in label assignment is as follows:
Cost = L_cls + λL_reg + L_1 + L_θ-cls,
where L_θ-cls computes the angle classification loss, the angle loss being calculated by sigmoid combined with binary cross-entropy;
the loss function improvement during model training is as follows:
Loss = L_cls + λL_reg + L_conf + L_θ-cls,
that is, the optimized loss adds the angle classification loss to the original loss, with the angle loss calculated by sigmoid combined with binary cross-entropy.
2. The multi-chamber lightning arrester identification method based on improved YOLOX according to claim 1, characterized in that the experimental data in S1 are annotated in two modes: rectangular-box annotation and rotated-rectangular-box annotation.
3. The improved YOLOX-based multi-chamber arrester recognition method according to claim 2, wherein the S2 step comprises the following sub-steps:
S2-1: in order to improve the model's feature-learning ability, the original backbone network in YOLOX is improved using the basic unit and the down-sampling unit of the ConvNeXt module, which has a larger receptive field, as follows:
first, features are extracted from the input through a 7 × 7 depthwise separable convolution (the larger the convolution kernel, the wider the learned feature region); then two ordinary 1 × 1 convolutions raise and lower the feature dimensions to obtain more abstract semantic information; finally, this semantic information is added to and fused with the features of the residual mapping branch;
the down-sampling operation reduces the input feature resolution and the model's computational complexity: one 3 × 3 depthwise separable convolution is added to the residual mapping, and the stride of the 7 × 7 depthwise separable convolution is set to 2, halving the feature resolution and doubling the number of channels;
S2-2: the ConvNeXt modules are used to improve the backbone structure of the feature extraction network; the network input size is 640 × 640 × 3, and at the first feature level 1 basic unit and 1 down-sampling unit learn shallow features, halving the feature resolution and doubling the channel dimension;
S2-3: first, 4 groups of max-pooling operations of different sizes are applied to the input features, yielding 4 groups of compressed feature vectors; to reduce the module's parameter count, and to keep each group's feature dimension consistent with the input when concatenating, a 1 × 1 convolution compresses each group's channel dimension by a factor of 4; up-sampling then restores the feature resolution, the 4 groups are concatenated along the channel dimension, and the result is added to and fused with the original input features; finally, to increase information flow between features, a channel shuffle operation reorders the final result along the channel dimension.
4. The multi-chamber lightning arrester identification method based on improved YOLOX according to claim 3, characterized in that the rotation angle θ in S3 satisfies 0° ≤ θ < 180°.
CN202211364891.XA 2022-11-02 2022-11-02 Multi-chamber lightning arrester identification method based on improved YOLOX Pending CN115841608A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211364891.XA CN115841608A (en) 2022-11-02 2022-11-02 Multi-chamber lightning arrester identification method based on improved YOLOX


Publications (1)

Publication Number Publication Date
CN115841608A true CN115841608A (en) 2023-03-24

Family

ID=85576820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211364891.XA Pending CN115841608A (en) 2022-11-02 2022-11-02 Multi-chamber lightning arrester identification method based on improved YOLOX

Country Status (1)

Country Link
CN (1) CN115841608A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116681983A (en) * 2023-06-02 2023-09-01 中国矿业大学 Long and narrow target detection method based on deep learning



Legal Events

Date Code Title Description
PB01 Publication