CN112861670B - Transmission line hardware detection method and system - Google Patents

Publication number
CN112861670B
Authority
CN
China
Prior art keywords
occurrence
matrix
hardware
transmission line
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110107618.8A
Other languages
Chinese (zh)
Other versions
CN112861670A (en)
Inventor
翟永杰
聂礼强
王乾铭
张效铭
熊剑平
赵砚青
罗旺
杨旭
赵振兵
Current Assignee
Shandong University
North China Electric Power University
NARI Group Corp
Zhejiang Dahua Technology Co Ltd
Zhiyang Innovation Technology Co Ltd
Original Assignee
Shandong University
North China Electric Power University
NARI Group Corp
Zhejiang Dahua Technology Co Ltd
Zhiyang Innovation Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shandong University, North China Electric Power University, NARI Group Corp, Zhejiang Dahua Technology Co Ltd and Zhiyang Innovation Technology Co Ltd
Priority to CN202110107618.8A
Publication of CN112861670A
Application granted
Publication of CN112861670B
Legal status: Active

Classifications

    • G06V 20/13 — Scenes; scene-specific elements; terrestrial scenes; satellite images
    • G06F 18/2415 — Pattern recognition; classification techniques based on parametric or probabilistic models
    • G06N 3/04 — Neural networks; architecture, e.g. interconnection topology
    • G06N 3/08 — Neural networks; learning methods
    • G06V 20/41 — Higher-level, semantic clustering, classification or understanding of video scenes
    • G06V 20/46 — Extracting features or characteristics from video content

Abstract

The invention discloses a transmission line hardware fitting detection method and system. The method comprises the following steps: acquiring a hardware fitting data set, and obtaining visual features from the transmission line aerial images with the Faster R-CNN algorithm; learning with a multi-layer perceptron algorithm from the aerial images and the visual features to obtain a learned co-occurrence graph adjacency matrix; propagating information through the visual features according to the learned co-occurrence graph adjacency matrix to obtain enhanced features; and concatenating the visual features and the enhanced features into fused features, which are passed through fully connected layers to obtain the hardware class and position. The method and system reduce the number of samples per hardware class that a conventional deep learning model requires in the data set, alleviate the sample imbalance and long-tail distribution of transmission line aerial data, and improve transmission line hardware detection.

Description

Transmission line hardware detection method and system
Technical Field
The invention relates to the technical field of transmission line hardware detection, in particular to a transmission line hardware detection method and system.
Background
In recent years, with the rapid development and comprehensive coverage of power grids, the transmission line has become a core system of power transmission, and its stable operation is crucial to grid safety. Hardware fittings are important accessories of the transmission line: they fix, protect and connect components and maintain the stable operation of the whole line. Because fittings typically work in complicated and severe field environments, defects such as corrosion, deformation and damage occur easily, so regular inspection of the transmission line can greatly reduce line faults.
With the development of digital image processing and unmanned aerial vehicle monitoring technology, transmission line hardware inspection based on aerial image processing has been applied successfully. Existing hardware detection methods fall into 3 categories: algorithms based on feature description, on classical machine learning, and on deep learning.
With the popularity and development of deep learning on public data sets, hardware localization and detection research based on deep learning object detection algorithms has attracted wide attention from researchers. However, deep learning based approaches merely apply and adapt object detection models to the characteristics of transmission line hardware, and fail to fuse the models effectively with business knowledge of the power field. Meanwhile, owing to the particularity of the hardware working environment, serious sample imbalance often exists among the multiple hardware classes, and for hardware classes with few samples a single deep detection model cannot accurately detect key components.
Disclosure of Invention
The invention aims to provide a transmission line hardware fitting detection method and system that reduce the number of samples per hardware class a conventional deep learning model requires in the data set, alleviate the sample imbalance and long-tail distribution of transmission line aerial data, and improve transmission line hardware detection.
In order to achieve the purpose, the invention provides the following scheme:
a transmission line hardware detection method comprises the following steps:
acquiring a hardware fitting data set; the hardware fitting data set comprises a plurality of aerial images of the power transmission line;
obtaining visual characteristics by adopting a Faster R-CNN algorithm according to the aerial image of the power transmission line;
learning by adopting a multilayer perceptron algorithm according to the aerial images of the power transmission line and the visual characteristics to obtain a learned co-occurrence image adjacency matrix; carrying out information transmission on the visual features according to the learned co-occurrence graph adjacency matrix to obtain enhanced features;
and cascading the visual features and the enhanced features to obtain fusion features, and carrying out full-connection processing on the fusion features to obtain the hardware type and the hardware position.
Optionally, obtaining visual characteristics by using a Faster R-CNN algorithm according to the power transmission line aerial image specifically includes:
extracting multi-channel characteristics of the aerial image of the power transmission line to obtain an image characteristic diagram;
sliding an image feature map according to a plurality of anchor frames with preset sizes and proportions to generate a plurality of candidate frames;
screening the candidate frames by adopting a non-maximum suppression algorithm to obtain a plurality of target candidate areas;
and dividing the target candidate region into n x n image blocks, and performing maximum pooling processing on the n x n image blocks to obtain the visual features.
Optionally, the learning is performed by using a multi-layer perceptron algorithm according to the aerial image of the power transmission line and the visual features, so as to obtain a learned co-occurrence map adjacency matrix, and the method specifically includes:
calculating the occurrence frequency of two hardware labels in pairs and the occurrence frequency of the same hardware label in each power transmission line aerial image in the hardware data set;
determining the ratio of the occurrence frequency of the two hardware labels in pairs to the occurrence frequency of the same hardware label as a co-occurrence probability, and generating a co-occurrence probability matrix according to the co-occurrence probability;
mapping the co-occurrence probability matrix to the co-occurrence probability corresponding to the actual hardware fitting category to obtain a co-occurrence probability mapping matrix;
learning by adopting a multilayer perceptron algorithm according to the visual characteristics to obtain a co-occurrence map adjacency matrix;
and learning with a multi-layer perceptron algorithm, taking the co-occurrence probability mapping matrix as the true value and the visual features and the co-occurrence map adjacency matrix as training values, to obtain a learned co-occurrence map adjacency matrix.
Optionally, the performing information propagation on the visual feature according to the learned co-occurrence map adjacency matrix to obtain an enhanced feature specifically includes:
normalizing the learned co-occurrence map adjacency matrix to obtain a normalized co-occurrence map adjacency matrix;
according to the normalized co-occurrence map adjacency matrix, obtaining an enhancement feature f' by adopting the following formula:
f′ = εfW_e
where ε is the normalized co-occurrence map adjacency matrix, f is the visual feature, and W_e is the transformation weight matrix.
The invention also provides a transmission line hardware fitting detection system, which comprises:
the input sub-network module is used for acquiring a hardware fitting data set; the hardware fitting data set comprises a plurality of power transmission line aerial images;
the Faster R-CNN sub-network module is used for obtaining visual features by adopting the Faster R-CNN algorithm according to the transmission line aerial images;
the graph reasoning sub-network module is used for learning by adopting a multilayer perceptron algorithm according to the aerial image of the power transmission line and the visual characteristics to obtain a learned co-occurrence graph adjacency matrix; carrying out information transmission on the visual features according to the learned co-occurrence graph adjacency matrix to obtain enhanced features;
and the result output sub-network module is used for cascading the visual features and the enhancement features to obtain fusion features, and carrying out full-connection processing on the fusion features to obtain the hardware type and the hardware position.
Optionally, the Faster R-CNN sub-network module specifically includes:
the image feature map generating unit is used for extracting multi-channel features of the aerial image of the power transmission line to obtain an image feature map;
the candidate frame generating unit is used for performing image feature map sliding according to a plurality of anchor frames with preset sizes and proportions to generate a plurality of candidate frames;
the target candidate area generating unit is used for screening the candidate frames by adopting a non-maximum suppression algorithm to obtain a plurality of target candidate areas;
and the visual feature generation unit is used for dividing the target candidate region into n multiplied by n image blocks and performing maximum pooling processing on the n multiplied by n image blocks to obtain the visual feature.
Optionally, the graph inference sub-network module specifically includes:
the times calculation unit is used for calculating the paired occurrence times of the two hardware labels in each electric transmission line aerial image in the hardware data set and the occurrence times of the same hardware label;
the co-occurrence probability matrix generating unit is used for determining the ratio of the paired occurrence times of the two hardware labels to the occurrence times of the same hardware label as a co-occurrence probability and generating a co-occurrence probability matrix according to the co-occurrence probability;
the co-occurrence probability mapping matrix generating unit is used for mapping the co-occurrence probability matrix to the co-occurrence probability corresponding to the actual hardware fitting category to obtain a co-occurrence probability mapping matrix;
the co-occurrence map adjacency matrix generating unit is used for learning by adopting a multi-layer perceptron algorithm according to the visual characteristics to obtain a co-occurrence map adjacency matrix;
and the learning unit is used for learning by adopting a multilayer perceptron algorithm by taking the co-occurrence probability mapping matrix as a true value and the visual features and the co-occurrence map adjacency matrix as training values to obtain a learned co-occurrence map adjacency matrix.
Optionally, the graph inference sub-network module further includes:
the normalization processing unit is used for performing normalization processing on the learned co-occurrence map adjacency matrix to obtain a normalized co-occurrence map adjacency matrix;
an enhanced feature generating unit, configured to obtain an enhanced feature f' by using the following formula according to the normalized co-occurrence map adjacency matrix:
f′ = εfW_e
where ε is the normalized co-occurrence map adjacency matrix, f is the visual feature, and W_e is the transformation weight matrix.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a method and a system for detecting hardware fittings of a power transmission line, which are characterized by acquiring a hardware fitting data set, and obtaining visual characteristics by adopting a Faster R-CNN algorithm according to an aerial image of the power transmission line; learning by adopting a multi-layer perceptron algorithm according to the aerial images and the visual characteristics of the power transmission line to obtain a learned co-occurrence image adjacency matrix; carrying out information transmission on the visual features according to the learned adjacent matrixes of the co-occurrence graphs to obtain enhanced features; and cascading the visual features and the enhanced features to obtain fusion features, and carrying out full-connection processing on the fusion features to obtain the hardware type and the hardware position. The method can reduce the requirement of the traditional deep learning model on the number of samples of each hardware fitting in a data set, relieve the problems of unbalanced samples and long tail distribution of aerial data of the power transmission line, and improve the hardware fitting detection effect of the power transmission line.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flow chart of a transmission line hardware detection method in the embodiment of the invention;
FIG. 2 is a schematic diagram of a detection network model according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating data processing of an inference subnetwork in accordance with an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
The invention aims to provide a transmission line hardware fitting detection method and system that reduce the number of samples per hardware class a conventional deep learning model requires in the data set, alleviate the sample imbalance and long-tail distribution of transmission line aerial data, and improve transmission line hardware detection.
In order to make the aforementioned objects, features and advantages of the present invention more comprehensible, the present invention is described in detail with reference to the accompanying drawings and the detailed description thereof.
Examples
Fig. 1 is a flowchart of a method for detecting transmission line hardware in an embodiment of the present invention, and as shown in fig. 1, a method for detecting transmission line hardware includes:
step 101: acquiring a hardware fitting data set; the hardware data set comprises a plurality of aerial images of the power transmission line.
Step 102: and obtaining visual characteristics by adopting a Faster R-CNN algorithm according to the aerial image of the power transmission line.
Step 102, specifically comprising:
extracting multi-channel characteristics of the aerial image of the power transmission line to obtain an image characteristic diagram;
sliding an image feature map according to a plurality of anchor frames with preset sizes and proportions to generate a plurality of candidate frames;
screening the candidate frames by adopting a non-maximum suppression algorithm to obtain a plurality of target candidate areas;
and dividing the target candidate region into n x n image blocks, and performing maximum pooling processing on the n x n image blocks to obtain the visual features.
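As a rough illustration of the last two sub-steps (candidate-box screening with non-maximum suppression, and dividing a region into n×n blocks for max pooling), here is a minimal NumPy sketch; the box format, IoU threshold and pooling grid size are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.7):
    """Greedy non-maximum suppression; boxes are (x1, y1, x2, y2)."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top-scoring box with the remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]   # drop heavily overlapping boxes
    return keep

def roi_max_pool(region, n=7):
    """Divide an H x W x D region into n x n blocks and max-pool each block."""
    h, w, d = region.shape
    out = np.zeros((n, n, d))
    ys = np.linspace(0, h, n + 1).astype(int)
    xs = np.linspace(0, w, n + 1).astype(int)
    for i in range(n):
        for j in range(n):
            block = region[ys[i]:max(ys[i + 1], ys[i] + 1),
                           xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[i, j] = block.max(axis=(0, 1))
    return out
```

In practice both operations are provided by detection libraries; the sketch only shows the logic the step describes.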
Step 103: learning with a multi-layer perceptron algorithm from the transmission line aerial images and the visual features to obtain a learned co-occurrence map adjacency matrix, and propagating information through the visual features according to the learned co-occurrence map adjacency matrix to obtain enhanced features.
Step 103, specifically comprising:
calculating the number of times that two hardware labels in each power transmission line aerial image in the hardware data set appear in pairs and the number of times that the same hardware label appears;
determining the ratio of the occurrence frequency of the two hardware fittings labels in pairs to the occurrence frequency of the same hardware fittings label as a co-occurrence probability, and generating a co-occurrence probability matrix according to the co-occurrence probability;
mapping the co-occurrence probability matrix to the co-occurrence probability corresponding to the actual hardware fitting category to obtain a co-occurrence probability mapping matrix;
learning by adopting a multi-layer perceptron algorithm according to the visual characteristics to obtain a co-occurrence map adjacency matrix;
and taking the co-occurrence probability mapping matrix as a true value, taking the visual characteristics and the co-occurrence map adjacency matrix as training values, and learning by adopting a multi-layer perceptron algorithm to obtain a learned co-occurrence map adjacency matrix.
normalizing the learned co-occurrence map adjacency matrix to obtain a normalized co-occurrence map adjacency matrix;
according to the normalized co-occurrence map adjacency matrix, obtaining an enhancement feature f' by adopting the following formula:
f′ = εfW_e
where ε is the normalized co-occurrence map adjacency matrix, f is the visual feature, and W_e is the transformation weight matrix.
Step 104: and cascading the visual features and the enhancement features to obtain fusion features, and carrying out full-connection processing on the fusion features to obtain the hardware type and the hardware position.
The invention also provides a transmission line hardware fitting detection system, which comprises:
the input sub-network module is used for acquiring a hardware fitting data set; the hardware data set comprises a plurality of aerial images of the power transmission line.
And the Faster R-CNN sub-network module is used for obtaining visual characteristics by adopting a Faster R-CNN algorithm according to the aerial image of the power transmission line.
The Faster R-CNN sub-network module specifically comprises:
the image characteristic diagram generating unit is used for extracting multi-channel characteristics of the aerial image of the power transmission line to obtain an image characteristic diagram;
the candidate frame generating unit is used for performing image characteristic map sliding according to a plurality of anchor frames with preset sizes and proportions to generate a plurality of candidate frames;
the target candidate area generating unit is used for screening the candidate frames by adopting a non-maximum suppression algorithm to obtain a plurality of target candidate areas;
and the visual feature generation unit is used for dividing the target candidate region into n multiplied by n image blocks and performing maximum pooling processing on the n multiplied by n image blocks to obtain the visual feature.
The graph reasoning sub-network module is used for learning with a multi-layer perceptron algorithm from the transmission line aerial images and the visual features to obtain a learned co-occurrence graph adjacency matrix, and for propagating information through the visual features according to the learned co-occurrence graph adjacency matrix to obtain enhanced features.
The graph inference sub-network module specifically comprises:
the times calculation unit is used for calculating the paired occurrence times of two hardware labels in each power transmission line aerial image in the hardware data set and the occurrence times of the same hardware label;
the co-occurrence probability matrix generating unit is used for determining the ratio of the paired occurrence times of the two hardware labels to the occurrence times of the same hardware label as the co-occurrence probability and generating a co-occurrence probability matrix according to the co-occurrence probability;
the co-occurrence probability mapping matrix generating unit is used for mapping the co-occurrence probability matrix to the co-occurrence probability corresponding to the actual hardware fitting category to obtain a co-occurrence probability mapping matrix;
the co-occurrence map adjacency matrix generating unit is used for learning by adopting a multi-layer perceptron algorithm according to the visual characteristics to obtain a co-occurrence map adjacency matrix;
and the learning unit is used for learning by adopting a multilayer perceptron algorithm by taking the co-occurrence probability mapping matrix as a true value and the visual feature and the co-occurrence map adjacency matrix as training values to obtain a learned co-occurrence map adjacency matrix.
The normalization processing unit is used for performing normalization processing on the learned co-occurrence map adjacency matrix to obtain a normalized co-occurrence map adjacency matrix;
an enhanced feature generating unit, configured to obtain an enhanced feature f' by using the following formula according to the normalized co-occurrence map adjacency matrix:
f′ = εfW_e
where ε is the normalized co-occurrence map adjacency matrix, f is the visual feature, and W_e is the transformation weight matrix.
And the result output sub-network module is used for cascading the visual features and the enhancement features to obtain fusion features, and carrying out full-connection processing on the fusion features to obtain the hardware type and the hardware position.
To further explain the transmission line hardware detection method provided by the invention, as shown in figs. 2-3, the detection network model mainly comprises 4 parts: an input sub-network, a Faster R-CNN sub-network, a graph reasoning sub-network and a result output sub-network.
The input sub-network mainly comprises two parts: image input and construction of the transmission line aerial image data set. First, a hardware fitting data set is constructed from existing transmission line aerial images. In the training stage, the pictures in the hardware data set are used to train and adjust the model parameters; in the test stage, the model is run on transmission line hardware pictures collected in the field.
The input sub-network feeds the aerial images into the Faster R-CNN sub-network for model training and testing, and feeds the hardware data set into the graph reasoning sub-network for prior knowledge extraction.
The Faster R-CNN sub-network essentially comprises the following 3 steps:
1. Convolutional neural network: use the residual network ResNet101 to extract multi-channel features of the input image from shallow to deep and form an image feature map.
2. RPN network: slide multiple anchor boxes of preset sizes and aspect ratios over the feature map to generate multiple candidate boxes, then analyze and screen them with Non-Maximum Suppression (NMS) to obtain N_r target candidate regions.
3. RoI pooling: uniformly divide each target candidate region into n×n image blocks and max-pool them to obtain fixed-scale candidate region feature vectors f ∈ ℝ^(N_r×D), where N_r is the number of candidate targets extracted by the Faster R-CNN algorithm and D is the feature dimension of each candidate region.
The Faster R-CNN sub-network outputs the visual features f to the graph reasoning sub-network and the result output sub-network.
The graph inference subnetwork mainly comprises the following 4 steps, as shown in fig. 3:
1. Co-occurrence probability matrix: the invention uses a conditional probability model to express the co-occurrence probability. First, count the number of times hardware labels appear in pairs in each training-set image to obtain the co-occurrence count matrix H ∈ ℝ^(C×C), where C is the number of hardware classes, H_xy is the number of times labels L_x and L_y appear in the same image, and the diagonal element H_xx is the number of times that hardware class appears in the training-set images. Then row-normalize H, dividing each element by the diagonal element of its row, to obtain the co-occurrence probability matrix P ∈ ℝ^(C×C), as in formula (1):
P_xy = H_xy / H_xx    (1)
where P_xy = P(L_y | L_x) is the probability that label L_y appears when label L_x appears.
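The counting and row normalization of formula (1) can be sketched as follows; this is a minimal NumPy illustration, and the per-image label lists are invented toy data:

```python
import numpy as np

def cooccurrence_matrix(image_labels, num_classes):
    """Count per-image label co-occurrences (H) and row-normalize by the
    diagonal to get the co-occurrence probabilities P_xy = H_xy / H_xx."""
    H = np.zeros((num_classes, num_classes))
    for labels in image_labels:
        present = set(labels)            # count each class once per image
        for x in present:
            for y in present:
                H[x, y] += 1
    # Guard against empty rows; P[x, y] = P(L_y | L_x).
    P = H / np.maximum(H.diagonal()[:, None], 1)
    return H, P
```

For example, if class 0 appears in 3 images and classes 0 and 1 co-occur in 2 of them, then P[0, 1] = 2/3.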
2. Co-occurrence probability mapping matrix: define A^P ∈ ℝ^(N_r×N_r), where A^P_ij is the co-occurrence probability association between the i-th node and the j-th node in the co-occurrence graph. For the N_r candidate region vectors output by the Faster R-CNN sub-network, look up the co-occurrence probabilities of their real classes in the co-occurrence probability matrix P to obtain the co-occurrence probability mapping matrix A^P.
3. Co-occurrence graph adjacency matrix: define A^G ∈ ℝ^(N_r×N_r), where A^G_ij is the association between the i-th input vector and the j-th input vector in the co-occurrence graph; A^G is the co-occurrence adjacency matrix learned by the model. The adjacency matrix is learned by a Multi-Layer Perceptron (MLP), as in formula (2):
A^G_ij = MLP(α(f_i, f_j))    (2)
where MLP(·) denotes the perceptron with learned parameters, and α(·) is the result of the L1-norm computation on the input visual features (f_i, f_j), i.e. their element-wise absolute difference. The MLP transforms a vector through four stages: input layer, hidden layer, nonlinear activation layer and output layer, i.e. Y = ReLU(XW + B), where X is the input vector, Y the output vector, W and B the weight and bias of the hidden layer, and ReLU the nonlinear activation function.
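A minimal NumPy reading of this adjacency-learning step: for each pair of candidate features, take the element-wise absolute (L1-style) difference α(f_i, f_j) and pass it through a one-hidden-layer ReLU MLP with a scalar output. The layer sizes, random weights and scalar output head are assumptions for illustration, not the patent's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_adjacency(f, W1, b1, w2, b2):
    """A_G[i, j] = MLP(alpha(f_i, f_j)), with alpha the element-wise
    absolute difference and a one-hidden-layer ReLU MLP."""
    n = f.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            x = np.abs(f[i] - f[j])            # alpha(f_i, f_j)
            h = np.maximum(x @ W1 + b1, 0.0)   # hidden layer + ReLU
            A[i, j] = h @ w2 + b2              # scalar association score
    return A

# Toy sizes: N_r = 4 candidate regions, D = 8 visual feature dims.
f = rng.normal(size=(4, 8))
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=16), 0.0
A_g = mlp_adjacency(f, W1, b1, w2, b2)
```

During training, A_g is regressed toward the co-occurrence probability mapping matrix A^P to update the MLP parameters (W1, b1, w2, b2 here).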
In the training phase, the parameters of the MLP must be trained. The invention takes the co-occurrence probability mapping matrix P′ as the ground truth and the co-occurrence graph adjacency matrix Â learned by the MLP from the visual features f as the predicted value, and updates the MLP parameters accordingly. The loss function of the training phase is given by formula (3), which measures the discrepancy between Â and P′.
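Since formula (3) is not legible in the source, the sketch below assumes a mean-squared-error loss between the learned adjacency Â and the mapping matrix P′; the exact loss used in the patent may differ:

```python
import numpy as np

def adjacency_loss(A_hat, P_prime):
    """Mean-squared discrepancy between the learned adjacency A_hat and
    the co-occurrence probability mapping matrix P' (ground truth).
    MSE is an assumed stand-in for the patent's formula (3)."""
    return float(np.mean((A_hat - P_prime) ** 2))

loss = adjacency_loss(np.eye(2), np.ones((2, 2)))
```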
The learned co-occurrence graph adjacency matrix is then normalized to obtain ε, as shown in formula (4).
4. Graph reasoning: information is propagated over the visual features of the candidate regions in a weighted manner to obtain the enhanced feature f′, as shown in formula (5):

f′ = ε f W_e (5)

where ε is the normalized co-occurrence graph adjacency matrix, f is the input visual feature, W_e is the transformation weight matrix, f′ ∈ R^(N_r×E) is the enhanced feature obtained by graph reasoning, and E is the enhanced feature dimension.
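Formulas (4) and (5) can be sketched together; row normalization is assumed for formula (4), since that equation is not legible in the source:

```python
import numpy as np

def graph_reasoning(A_hat, f, W_e):
    """Normalize the learned adjacency (assumed row normalization,
    formula (4)) and propagate features: f' = eps @ f @ W_e (formula (5))."""
    eps = A_hat / np.maximum(A_hat.sum(axis=1, keepdims=True), 1e-12)
    return eps @ f @ W_e

f = np.ones((3, 4))            # N_r = 3 regions, 4-dim visual features
W_e = np.full((4, 2), 0.5)     # transform to enhanced dimension E = 2
A_hat = np.ones((3, 3))        # uniform learned adjacency
f_prime = graph_reasoning(A_hat, f, W_e)
```

With a uniform adjacency, each region's enhanced feature is the average of all region features, projected by W_e.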
The graph inference sub-network outputs the enhancement feature f' to the result output sub-network.
The result output sub-network mainly comprises the following steps:

Feature concatenation: the original visual feature f and the enhanced feature f′ are concatenated to obtain the joint feature of the fused co-occurrence reasoning module.

The joint feature vector is then input into the fully connected layers, which compute the category and the refined position of each candidate region vector of the feature map, completing the hardware target detection task.
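The concatenation and fully connected step can be sketched as follows, with placeholder weight matrices `W_cls` and `W_reg` (illustrative names; the patent does not specify head shapes):

```python
import numpy as np

def output_head(f, f_prime, W_cls, W_reg):
    """Concatenate original and enhanced features, then apply fully
    connected layers for class scores and box regression."""
    joint = np.concatenate([f, f_prime], axis=1)  # (N_r, D + E) joint feature
    return joint @ W_cls, joint @ W_reg           # class scores, box deltas

f = np.zeros((2, 4))           # N_r = 2 regions, D = 4 original dims
f_prime = np.ones((2, 2))      # E = 2 enhanced dims
W_cls = np.ones((6, 3))        # 3 hardware classes
W_reg = np.ones((6, 4))        # 4 box coordinates
scores, boxes = output_head(f, f_prime, W_cls, W_reg)
```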
In this detection method, the co-occurrence matrix serves as a structured representation of the hardware assembly rules, and the co-occurrence reasoning module is embedded into the target detection model, effectively promoting the organic fusion of the deep learning model with domain knowledge of the electric power field and improving the detection of transmission line hardware.
The detection method introduces the fixed structural relations of transmission line hardware as prior guidance, reduces the traditional deep learning model's requirement on the number of samples of each hardware class in the data set, and effectively alleviates the sample imbalance and long-tail distribution problems of transmission line aerial data.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and core concept of the invention. Meanwhile, those skilled in the art may, following the idea of the invention, vary the specific embodiments and the scope of application. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (4)

1. A transmission line hardware detection method is characterized by comprising the following steps:
acquiring a hardware fitting data set; the hardware fitting data set comprises a plurality of aerial images of the power transmission line;
obtaining visual characteristics by adopting a Faster R-CNN algorithm according to the aerial image of the power transmission line; the method for obtaining the visual characteristics by adopting the Faster R-CNN algorithm according to the aerial image of the power transmission line specifically comprises the following steps: extracting multi-channel characteristics of the aerial image of the power transmission line to obtain an image characteristic diagram; sliding an image feature map according to a plurality of anchor frames with preset sizes and proportions to generate a plurality of candidate frames; screening the candidate frames by adopting a non-maximum suppression algorithm to obtain a plurality of target candidate areas; dividing the target candidate region into n x n image blocks, and performing maximum pooling processing on the n x n image blocks to obtain visual features; the method comprises the following steps of extracting multi-channel features of the aerial image of the power transmission line to obtain an image feature map, and specifically comprises the following steps: extracting multi-channel features of an input image from shallow to deep by using a residual error network ResNet101 and forming an image feature map;
learning by adopting a multi-layer perceptron algorithm according to the aerial image of the power transmission line and the visual features to obtain a learned co-occurrence graph adjacency matrix; performing information propagation on the visual features according to the learned co-occurrence graph adjacency matrix to obtain enhanced features; wherein learning by adopting the multi-layer perceptron algorithm according to the aerial image of the power transmission line and the visual features to obtain the learned co-occurrence graph adjacency matrix specifically comprises: calculating the number of pairwise occurrences of every two hardware labels and the number of occurrences of each single hardware label in each power transmission line aerial image in the hardware data set; determining the ratio of the number of pairwise occurrences of the two hardware labels to the number of occurrences of the single hardware label as a co-occurrence probability, and generating a co-occurrence probability matrix from the co-occurrence probabilities; mapping the co-occurrence probability matrix to the co-occurrence probabilities corresponding to the actual hardware categories to obtain a co-occurrence probability mapping matrix; learning by adopting the multi-layer perceptron algorithm according to the visual features to obtain a co-occurrence graph adjacency matrix; learning by adopting the multi-layer perceptron algorithm with the co-occurrence probability mapping matrix as the true value and the visual features and the co-occurrence graph adjacency matrix as training values to obtain the learned co-occurrence graph adjacency matrix; wherein, for the co-occurrence probability matrix: the co-occurrence probability is expressed by a conditional probability model; first, the number of pairwise occurrences of the hardware labels in each image of the training set is counted to obtain a co-occurrence count statistical matrix
H ∈ R^(C×C), where C represents the number of hardware classes, H_xy indicates the number of times label L_x and label L_y occur in the same image, and the diagonal elements H_xx represent the number of times each hardware class appears in the training-set images; then each element of H is divided by the diagonal element of its row (row normalization) to obtain the co-occurrence probability matrix P ∈ R^(C×C), as shown in the formula P_xy = H_xy / H_xx, where P_xy = P(L_y | L_x) indicates the probability that label L_y occurs when label L_x occurs; for the co-occurrence probability mapping matrix: define P′ ∈ R^(N_r×N_r), where P′_ij represents the co-occurrence probability association between the i-th node and the j-th node of the co-occurrence graph, and P′ represents the co-occurrence probability mapping matrix; for the N_r candidate region vectors output by the Faster R-CNN sub-network, the co-occurrence probability of the real category of each region vector is mapped according to the co-occurrence probability matrix P to obtain the co-occurrence probability mapping matrix P′ ∈ R^(N_r×N_r);
And cascading the visual features and the enhanced features to obtain fusion features, and carrying out full-connection processing on the fusion features to obtain the hardware type and the hardware position.
2. The method according to claim 1, wherein performing information propagation on the visual features according to the learned co-occurrence graph adjacency matrix to obtain the enhanced features specifically comprises:

normalizing the learned co-occurrence graph adjacency matrix to obtain a normalized co-occurrence graph adjacency matrix;

obtaining an enhanced feature f′ from the normalized co-occurrence graph adjacency matrix by the following formula:

f′ = ε f W_e

where ε is the normalized co-occurrence graph adjacency matrix, f is the visual feature, and W_e is the transformation weight matrix.
3. A transmission line hardware detection system, characterized by comprising:
the input sub-network module is used for acquiring a hardware fitting data set; the hardware fitting data set comprises a plurality of power transmission line aerial images;
the Faster R-CNN sub-network module, used for obtaining visual features by adopting the Faster R-CNN algorithm according to the aerial image of the power transmission line; the Faster R-CNN sub-network module specifically comprises: an image feature map generation unit, used for extracting multi-channel features of the aerial image of the power transmission line to obtain an image feature map; a candidate frame generation unit, used for sliding over the image feature map with a plurality of anchor frames of preset sizes and aspect ratios to generate a plurality of candidate frames; a target candidate region generation unit, used for screening the candidate frames by a non-maximum suppression algorithm to obtain a plurality of target candidate regions; a visual feature generation unit, used for dividing each target candidate region into n × n image blocks and performing maximum pooling on the n × n image blocks to obtain the visual features; wherein extracting the multi-channel features of the aerial image of the power transmission line to obtain the image feature map specifically comprises: extracting multi-channel features of the input image from shallow to deep by using the residual network ResNet101 to form the image feature map;
the graph reasoning sub-network module, used for learning by adopting a multi-layer perceptron algorithm according to the aerial image of the power transmission line and the visual features to obtain a learned co-occurrence graph adjacency matrix, and for performing information propagation on the visual features according to the learned co-occurrence graph adjacency matrix to obtain enhanced features; the graph reasoning sub-network module specifically comprises: a count calculation unit, used for calculating the number of pairwise occurrences of every two hardware labels and the number of occurrences of each single hardware label in each power transmission line aerial image in the hardware data set; a co-occurrence probability matrix generation unit, used for determining the ratio of the number of pairwise occurrences of the two hardware labels to the number of occurrences of the single hardware label as a co-occurrence probability, and generating a co-occurrence probability matrix from the co-occurrence probabilities; a co-occurrence probability mapping matrix generation unit, used for mapping the co-occurrence probability matrix to the co-occurrence probabilities corresponding to the actual hardware categories to obtain a co-occurrence probability mapping matrix; a co-occurrence graph adjacency matrix generation unit, used for learning by adopting the multi-layer perceptron algorithm according to the visual features to obtain a co-occurrence graph adjacency matrix; a learning unit, used for learning by adopting the multi-layer perceptron algorithm with the co-occurrence probability mapping matrix as the true value and the visual features and the co-occurrence graph adjacency matrix as training values to obtain the learned co-occurrence graph adjacency matrix; wherein, for the co-occurrence probability matrix: the co-occurrence probability is expressed by a conditional probability model; first, the number of pairwise occurrences of the hardware labels in each image of the training set is counted to obtain a co-occurrence count statistical matrix
H ∈ R^(C×C), where C represents the number of hardware classes, H_xy indicates the number of times label L_x and label L_y occur in the same image, and the diagonal elements H_xx represent the number of times each hardware class appears in the training-set images; then each element of H is divided by the diagonal element of its row (row normalization) to obtain the co-occurrence probability matrix P ∈ R^(C×C), as shown in the formula P_xy = H_xy / H_xx, where P_xy = P(L_y | L_x) indicates the probability that label L_y occurs when label L_x occurs; for the co-occurrence probability mapping matrix: define P′ ∈ R^(N_r×N_r), where P′_ij represents the co-occurrence probability association between the i-th node and the j-th node of the co-occurrence graph, and P′ represents the co-occurrence probability mapping matrix; for the N_r candidate region vectors output by the Faster R-CNN sub-network, the co-occurrence probability of the real category of each region vector is mapped according to the co-occurrence probability matrix P to obtain the co-occurrence probability mapping matrix P′ ∈ R^(N_r×N_r);
And the result output sub-network module is used for cascading the visual features and the enhancement features to obtain fusion features, and carrying out full-connection processing on the fusion features to obtain the hardware type and the hardware position.
4. The transmission line hardware detection system of claim 3, wherein the graph inference sub-network module further comprises:
the normalization processing unit, used for normalizing the learned co-occurrence graph adjacency matrix to obtain a normalized co-occurrence graph adjacency matrix;

the enhanced feature generation unit, used for obtaining an enhanced feature f′ from the normalized co-occurrence graph adjacency matrix by the following formula:

f′ = ε f W_e

where ε is the normalized co-occurrence graph adjacency matrix, f is the visual feature, and W_e is the transformation weight matrix.
CN202110107618.8A 2021-01-27 2021-01-27 Transmission line hardware detection method and system Active CN112861670B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110107618.8A CN112861670B (en) 2021-01-27 2021-01-27 Transmission line hardware detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110107618.8A CN112861670B (en) 2021-01-27 2021-01-27 Transmission line hardware detection method and system

Publications (2)

Publication Number Publication Date
CN112861670A CN112861670A (en) 2021-05-28
CN112861670B true CN112861670B (en) 2022-11-08

Family

ID=76009333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110107618.8A Active CN112861670B (en) 2021-01-27 2021-01-27 Transmission line hardware detection method and system

Country Status (1)

Country Link
CN (1) CN112861670B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113780358A (en) * 2021-08-16 2021-12-10 华北电力大学(保定) Real-time hardware fitting detection method based on anchor-free network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010133661A1 (en) * 2009-05-20 2010-11-25 Tessera Technologies Ireland Limited Identifying facial expressions in acquired digital images
CN109344753A (en) * 2018-09-21 2019-02-15 福州大学 A kind of tiny fitting recognition methods of Aerial Images transmission line of electricity based on deep learning
CN109344285A (en) * 2018-09-11 2019-02-15 武汉魅瞳科技有限公司 A kind of video map construction and method for digging, equipment towards monitoring
CN110175733A (en) * 2019-04-01 2019-08-27 阿里巴巴集团控股有限公司 A kind of public opinion information processing method and server
CN111276240A (en) * 2019-12-30 2020-06-12 广州西思数字科技有限公司 Multi-label multi-mode holographic pulse condition identification method based on graph convolution network

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6934675B2 (en) * 2001-06-14 2005-08-23 Stephen C. Glinski Methods and systems for enabling speech-based internet searches
US7480617B2 (en) * 2004-09-21 2009-01-20 International Business Machines Corporation Method for likelihood computation in multi-stream HMM based speech recognition
JP5463873B2 (en) * 2009-11-20 2014-04-09 株式会社デンソーアイティーラボラトリ Multimedia classification system and multimedia search system
CN110162644B (en) * 2018-10-10 2022-12-20 腾讯科技(深圳)有限公司 Image set establishing method, device and storage medium
CN111507812B (en) * 2020-07-02 2020-10-27 成都晓多科技有限公司 Commodity collocation recommendation method and device based on attributes and titles

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010133661A1 (en) * 2009-05-20 2010-11-25 Tessera Technologies Ireland Limited Identifying facial expressions in acquired digital images
CN109344285A (en) * 2018-09-11 2019-02-15 武汉魅瞳科技有限公司 A kind of video map construction and method for digging, equipment towards monitoring
CN109344753A (en) * 2018-09-21 2019-02-15 福州大学 A kind of tiny fitting recognition methods of Aerial Images transmission line of electricity based on deep learning
CN110175733A (en) * 2019-04-01 2019-08-27 阿里巴巴集团控股有限公司 A kind of public opinion information processing method and server
CN111276240A (en) * 2019-12-30 2020-06-12 广州西思数字科技有限公司 Multi-label multi-mode holographic pulse condition identification method based on graph convolution network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hybrid Knowledge Routed Modules for Large-scale Object Detection; Chenhan Jiang et al.; arXiv; 2018-10-30; pp. 1-12, Section 3, Figs. 2-3 *
Object Categorization using Co-Occurrence, Location and Appearance; Carolina Galleguillos et al.; 2008 IEEE Conference on Computer Vision and Pattern Recognition; 2008-06-28; pp. 1-8 *

Also Published As

Publication number Publication date
CN112861670A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
AU2020103905A4 (en) Unsupervised cross-domain self-adaptive medical image segmentation method based on deep adversarial learning
CN109118479B (en) Capsule network-based insulator defect identification and positioning device and method
CN111784633B (en) Insulator defect automatic detection algorithm for electric power inspection video
CN110705601A (en) Transformer substation equipment oil leakage image identification method based on single-stage target detection
CN111797890A (en) Method and system for detecting defects of power transmission line equipment
CN111611874B (en) Face mask wearing detection method based on ResNet and Canny
CN112200178A (en) Transformer substation insulator infrared image detection method based on artificial intelligence
CN114092793B (en) End-to-end biological target detection method suitable for complex underwater environment
CN114170478A (en) Defect detection and positioning method and system based on cross-image local feature alignment
CN113469950A (en) Method for diagnosing abnormal heating defect of composite insulator based on deep learning
CN112861670B (en) Transmission line hardware detection method and system
CN115546608A (en) Unmanned aerial vehicle data link electromagnetic interference classification and threat assessment method
CN115859702A (en) Permanent magnet synchronous wind driven generator demagnetization fault diagnosis method and system based on convolutional neural network
Yeh et al. Using convolutional neural network for vibration fault diagnosis monitoring in machinery
CN112419243B (en) Power distribution room equipment fault identification method based on infrared image analysis
CN113326873A (en) Method for automatically classifying opening and closing states of power equipment based on data enhancement
CN117516937A (en) Rolling bearing unknown fault detection method based on multi-mode feature fusion enhancement
CN113052103A (en) Electrical equipment defect detection method and device based on neural network
CN116912483A (en) Target detection method, electronic device and storage medium
CN115962428A (en) Real-time online intelligent interpretability monitoring and tracing method for gas pipe network leakage
CN115457365A (en) Model interpretation method and device, electronic equipment and storage medium
CN112508946B (en) Cable tunnel anomaly detection method based on antagonistic neural network
CN115578325A (en) Image anomaly detection method based on channel attention registration network
CN118228772B (en) Distance measurement method, framework, system and medium for actually measured traveling wave of power transmission line
CN112700425B (en) Determination method for X-ray image quality of power equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zhai Yongjie

Inventor after: Nie Liqiang

Inventor after: Wang Qianming

Inventor after: Zhang Xiaoming

Inventor after: Xiong Jianping

Inventor after: Zhao Yanqing

Inventor after: Luo Wang

Inventor after: Yang Xu

Inventor after: Zhao Zhenbing

Inventor before: Zhai Yongjie

Inventor before: Yang Xu

Inventor before: Wang Qianming

Inventor before: Zhang Xiaoming

Inventor before: Zhao Zhenbing

TA01 Transfer of patent application right

Effective date of registration: 20220315

Address after: 071003 North China Electric Power University No.1 campus, 619 Yonghua North Street, Baoding City, Hebei Province

Applicant after: NORTH CHINA ELECTRIC POWER University (BAODING)

Applicant after: SHANDONG University

Applicant after: ZHEJIANG DAHUA TECHNOLOGY Co.,Ltd.

Applicant after: Zhiyang Innovation Technology Co.,Ltd.

Applicant after: NARI Group Corp.

Address before: 071003 North China Electric Power University No.1 campus, 619 Yonghua North Street, Baoding City, Hebei Province

Applicant before: NORTH CHINA ELECTRIC POWER University (BAODING)

GR01 Patent grant