CN112257547A - Transformer substation safety measure identification method based on deep learning - Google Patents
Transformer substation safety measure identification method based on deep learning (Download PDF)
- Publication number: CN112257547A (application CN202011117989.6A)
- Authority: CN (China)
- Prior art keywords: layer, convolution, image, output, images
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06N3/045: Combinations of networks
- G06N3/08: Learning methods
- G06Q50/06: Electricity, gas or water supply
- G06T7/10: Segmentation; Edge detection
- G06T7/13: Edge detection
- G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06T2207/10016: Video; Image sequence
- Image Analysis (AREA)
Abstract
The invention discloses a transformer substation safety measure identification method based on deep learning, which mainly comprises the following steps: S100, collecting safety measure images of the work site and preprocessing them; S200, segmenting the preprocessed safety measure image with a U-net convolutional network into pixel blocks for the upper, middle and lower cabinet doors of the switch cabinet, the nameplates, the interval names and the barriers, and determining the position of each pixel block; S300, identifying and classifying the nameplates and interval names with a LeNet-5 convolutional network; S400, determining the types and numbers of nameplates on the upper, middle and lower cabinet doors and on the barriers of the switch cabinet; S500, converting the safety measure image into a safety measure text and completing the output; S600, comparing the text with the safety measures in the work ticket to check their correctness. The method offers high identification accuracy and strong practicability.
Description
Technical Field
The invention relates to a transformer substation safety measure identification method, in particular to a transformer substation safety measure identification method based on deep learning, and belongs to the technical field of operation and maintenance of power systems.
Background
Regular maintenance and repair of electrical equipment effectively ensures the safe and stable operation of the power system. At an electrical equipment maintenance site, to guarantee effective isolation between the work area and live operating areas, corresponding safety measures must be arranged between them, including installing barriers and hanging signboards, so that workers do not mistakenly enter live areas or touch live equipment. At the same time, clearly visible work-area signboards direct operating personnel to carry out maintenance within the safe region, ensuring that the work proceeds safely and in an orderly manner. The nameplates commonly used in substation safety measures at present include: "Stop! High voltage danger!", "Do not close the switch, men at work!", "Do not close the switch, line work in progress!", "No climbing, high voltage danger!", "Work here", "Ascend and descend here", "Enter and exit here", and the like.
In practice, for various reasons, nameplates are often arranged incorrectly or irregularly and are easily obscured, failing to meet the applicable work specifications and the actual requirements of the site. In outdoor equipment maintenance work in particular, strong wind or crowded working conditions make it relatively easy for a nameplate to be blown down, to fall, or to be knocked off by a worker, so that it can no longer serve as a safety warning. Workers who cannot see the nameplates may then mistakenly enter or step into a live interval and suffer an electric shock, which poses a considerable hazard to production safety. To ensure that the various nameplates are arranged correctly and in a standard manner during maintenance work, the invention provides a transformer substation safety measure identification method based on deep learning.
Disclosure of Invention
The invention overcomes the defects of the prior art by providing a transformer substation safety measure identification method based on deep learning, which solves the problems of incorrect arrangement, irregular arrangement and easy obstruction of safety measures in the current substation overhaul process.
The technical problem of the invention is mainly solved by the following technical scheme:
a transformer substation safety measure identification method based on deep learning is carried out according to the following steps:
s100, collecting and inputting safety measure images of a working site, and simultaneously carrying out image preprocessing;
s200, carrying out image segmentation on the preprocessed safety measure image by utilizing a U-net convolution network, segmenting pixel blocks of an upper cabinet door, a middle cabinet door, a lower cabinet door, a nameplate, an interval name and a barrier of the switch cabinet, and respectively determining the corresponding positions of the pixel blocks;
step S300, utilizing a Lenet-5 convolution network to identify and classify nameplates and interval names;
s400, determining the types and the number of nameplates on an upper cabinet door, a middle cabinet door, a lower cabinet door and a barrier of the switch cabinet;
step S500, converting the safety measure image into a safety measure text and finishing output;
and step S600, comparing with the safety measures in the work ticket to check the correctness of the safety measures.
Preferably, the step S100 specifically includes: carrying out image acquisition on the safety measures of the working site, and then inputting the acquired safety measure images into an intelligent identification system;
because the input images may vary in size, have excessively high resolution, or contain excessive noise, which would affect the training process of the model and the segmentation performance of the U-net convolutional network, the input image is first preprocessed: the contrast of the original safety measure image is enhanced and noise is removed using Gaussian filtering, edge detection and related techniques, and the input image is uniformly cropped to 512 × 512 pixels.
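As an illustration of this preprocessing step, the sketch below builds a normalized Gaussian kernel and center-crops an image to 512 × 512 pixels with plain numpy; the kernel size, sigma and function names are illustrative assumptions, not part of the claimed method.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel (size and sigma are illustrative)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def center_crop(img, target=512):
    """Crop an H x W (x C) image to target x target around its center."""
    h, w = img.shape[:2]
    top, left = (h - target) // 2, (w - target) // 2
    return img[top:top + target, left:left + target]

# toy 720 x 1280 "camera frame" standing in for a site image
img = np.zeros((720, 1280, 3), dtype=np.uint8)
patch = center_crop(img, 512)
# patch.shape[:2] == (512, 512); gaussian_kernel() sums to 1
```

In a real pipeline the Gaussian kernel would be convolved with the image (e.g. separably per axis) before cropping; only the shape bookkeeping is shown here.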
Preferably, the step S200 specifically includes: the U-net convolutional network comprises convolutional layers, feature fusion layers, down-sampling layers and up-sampling layers, and has strong feature extraction capability; the encoding part on the left of the U-net structure uses convolutional layers and down-sampling layers to obtain high-dimensional features of the image, and the decoding part on the right uses up-sampling layers and convolutional layers to recover the high-level abstract features of the feature map that were lost during down-sampling;
the encoder and decoder extract feature information from the 512 × 512 pixel safety measure image; the encoding part comprises 4 convolutional layers and 4 down-sampling layers, each convolutional layer performs two convolution operations, the down-sampling layers use max pooling, and the image sizes after the 4 down-samplings are 256 × 256, 128 × 128, 64 × 64 and 32 × 32 respectively; after the 512-dimensional high-dimensional features are obtained, the network performs two further convolution operations to obtain 1024-dimensional features; the decoding part comprises 4 up-sampling layers and 4 convolutional layers, and the feature map obtained after each up-sampling is fused and concatenated with the feature map of the same scale from the feature extraction part; the first 3 of the 4 convolutional layers each perform two convolution operations, while the last performs 3 convolution operations to restore the feature map to the size of the original image, outputting a target region segmentation map for each specific region of the original image at 512 × 512 pixels;
wherein the convolution and up-sampling layers use the ReLU activation function:

$f(x) = \max(0, x)$ (1)
the preprocessed safety measure image is segmented repeatedly with the U-net convolutional network, finally yielding region segmentation maps of the upper, middle and lower cabinet doors of the switch cabinet, the nameplates, the interval names and the barriers, in which different regions are represented by pixel blocks of different colors, and the position of each pixel block is obtained at the same time.
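The pixel-block positions referred to above can be read off a segmentation mask as bounding boxes; a minimal numpy sketch with a toy 8 × 8 mask (the label values and function names are assumptions for illustration):

```python
import numpy as np

def block_bbox(mask, label):
    """Bounding box (top, left, bottom, right) of all pixels carrying a class label."""
    ys, xs = np.nonzero(mask == label)
    if ys.size == 0:
        return None  # this class does not appear in the segmentation map
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())

# toy 8 x 8 "segmentation map": label 3 marks a hypothetical nameplate block
mask = np.zeros((8, 8), dtype=int)
mask[2:5, 3:7] = 3
# block_bbox(mask, 3) == (2, 3, 4, 6)
```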
Preferably, the step S300 specifically includes: the contents of the segmented nameplate images and interval name images are identified and classified with a LeNet-5 convolutional network; the LeNet-5 network comprises an input layer, a convolutional layer (C1), a pooling layer (S2), a convolutional layer (C3), a pooling layer (S4), a convolutional layer (C5), a fully-connected layer (F6) and an output layer;
the input layer uniformly normalizes the input image size to 32 × 32;
the C1 convolutional layer takes a 32 × 32 input and uses 6 convolution kernels of size 5 × 5; with valid (no-padding) convolution it outputs 6 feature maps of 28 × 28;
the S2 pooling layer takes the 28 × 28 inputs with a 2 × 2 sampling area over 6 feature maps; the 4 inputs in each area are summed, multiplied by a trainable coefficient, added to a trainable bias and passed through a sigmoid function, outputting 6 feature maps of 14 × 14;
the C3 convolutional layer takes 14 × 14 inputs and uses 16 convolution kernels of size 5 × 5; with valid convolution it outputs 16 feature maps of 10 × 10;
the S4 pooling layer takes the 10 × 10 inputs with a 2 × 2 sampling area over 16 feature maps; the 4 inputs in each area are summed, multiplied by a trainable coefficient, added to a trainable bias and passed through a sigmoid function, outputting 16 feature maps of 5 × 5;
the C5 convolutional layer takes 5 × 5 inputs and uses 120 convolution kernels of size 5 × 5; with valid convolution its output is a 120-dimensional vector;
the F6 fully-connected layer takes the 120-dimensional vector as input, computes the dot product between the input vector and its weight vector, adds a bias and passes the result through a sigmoid function, outputting an 84-dimensional vector;
the output layer has n neurons, representing the n classes 0 to n-1 and corresponding to the different outputs;
the convolutional layer extracts image feature data: a convolution kernel slides over the image and is convolved with the local image data to generate a feature map; as the kernel traverses the input image, the computation is expressed as

$x_j^l = f\left(\sum_{i \in P_j} x_i^{l-1} * k_j^l + b_j^l\right)$ (2)

in the formula (2), $x_j^l$ is the value of the j-th neuron in layer l; $P_j$ is the convolutional receptive field of the j-th neuron; $x_i^{l-1}$ is the value of the i-th neuron in layer l-1; $k_j^l$ is the convolution kernel of the j-th neuron in layer l; $b_j^l$ is the bias of the j-th neuron in layer l; $f(x)$ is the activation function;
the pooling layer aggregates the feature data and reduces its dimensionality; it performs dimension reduction on the feature planes produced by the convolutional layer, reducing the number of training parameters and improving computational efficiency; the pooling methods include max pooling and mean pooling, and the computation can be expressed as

$x_j^l = f\left(\operatorname{pool}_{i \in P_j}\left(x_i^{l-1}\right) + b_j^l\right)$ (3)

in the formula (3), $x_j^l$ is the value of the j-th neuron in layer l; $x_i^{l-1}$ is the value of the i-th neuron in layer l-1; $P_j$ is the receptive field of the j-th neuron; $b_j^l$ is the bias of the j-th neuron in layer l; $\operatorname{pool}(x)$ is the sampling function;
the output layer is connected through a radial basis function (RBF) network, computed as

$y_i = \sum_j \left(x_j - \omega_{ij}\right)^2$ (4)

in the formula (4), $y_i$ is the i-th neuron of the output layer; $x_j$ is the j-th neuron of the fully-connected layer; $\omega_{ij}$ is the weight between the j-th neuron of the fully-connected layer and the i-th neuron of the output layer;
the segmented nameplate and interval name images are input into the LeNet-5 convolutional network, pass through the convolutional and pooling layers, and enter the fully-connected layer; the fully-connected layer combines the groups of feature data from the pooling layers into one output, identifying the nameplate type and the interval name.
Preferably, the step S400 specifically includes: combining the pixel-block positions of the regions obtained in step S200 with the interval names and nameplate types identified in step S300 to obtain the types and numbers of the nameplates on the upper, middle and lower cabinet doors and on the barrier of the switch cabinet; specifically, from the pixel-block positions it can be found, for example, that the pixel blocks of nameplate 1 and nameplate 2 lie within the pixel block of the upper cabinet door of the switch cabinet, and since the types of nameplate 1 and nameplate 2 were identified in step S300, the types and number of nameplates on the upper cabinet door are obtained.
Preferably, the step S500 specifically includes: using the types and numbers of nameplates on the upper, middle and lower cabinet doors and the barrier of the switch cabinet obtained in step S400, a safety measure arrangement text is generated with a uniform safety measure arrangement template, and the output is completed.
Preferably, the step S600 specifically includes: comparing the converted and output safety measure text with the safety measure text in the work ticket so as to check the correctness of the safety measure; and if the field safety measures are incorrect, rearranging the safety measures.
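A minimal sketch of the comparison in step S600, assuming (hypothetically) that both the generated safety measure text and the work ticket list one measure per line; the function name and result format are illustrative:

```python
def check_safety_measures(site_text, ticket_text):
    """Compare generated safety-measure lines against the work ticket.

    Returns measures required by the ticket but absent on site ("missing")
    and measures found on site but not in the ticket ("extra")."""
    site = {line.strip() for line in site_text.splitlines() if line.strip()}
    ticket = {line.strip() for line in ticket_text.splitlines() if line.strip()}
    return {"missing": sorted(ticket - site), "extra": sorted(site - ticket)}

site = "Work here\nStop! High voltage danger!"
ticket = "Work here\nEnter and exit here"
result = check_safety_measures(site, ticket)
# result["missing"] == ["Enter and exit here"]
```

An empty `missing` list would indicate that the field arrangement satisfies the ticket; otherwise the measures are rearranged as described above.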
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
(1) High efficiency. Through its image segmentation, identification and classification algorithms, the invention can rapidly and intelligently inspect the safety measures at the overhaul work site, greatly improving the efficiency of on-site safety measure inspection.
(2) Accuracy. With the transformer substation safety measure identification method, the convolutional neural network can accurately segment the upper, middle and lower cabinet doors, the nameplates, the interval names and the barriers in the safety measure image, and accurately identify and classify the nameplates and interval names.
(3) Convenience. With the transformer substation safety measure identification method, the arrangement of on-site safety measures is checked automatically simply by collecting and inputting images of the on-site safety measures, providing great convenience for operation and maintenance personnel in managing safety measures.
(4) Practicability. Applied in the inspection step after the safety measures are arranged, the invention can effectively resolve problems such as incorrect or non-standard arrangement; applied in the safety measure inspection step during maintenance, it can detect nameplates that have been knocked off, moved or blown down by the wind, ensuring the correctness of the various nameplates throughout the maintenance work and removing hidden hazards from production safety, and thus has excellent practicability.
Drawings
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 is a schematic diagram of a U-net convolutional network structure according to the present invention;
FIG. 3 is a schematic diagram of the segmentation result of the U-net convolutional network of the present invention;
fig. 4 is a schematic diagram of the structure of the Lenet-5 convolution network of the present invention.
Detailed Description
The technical scheme of the invention is further specifically described by the following embodiments and the accompanying drawings.
Example 1: a flow chart of the deep learning-based transformer substation safety measure identification method is shown in fig. 1; the method comprises the following steps:
s100, collecting and inputting a safety measure image of a working site, and simultaneously carrying out image preprocessing;
the method specifically comprises the following steps:
and carrying out image acquisition on the safety measures of the working site to obtain original images of the safety measures of the working site. And then inputting the acquired safety measure images into an intelligent identification system. The input images may have the problems of non-uniform size, too high pixels, too much noise and the like, so that the training process of the model and the segmentation performance of the U-net convolution network are influenced. Firstly, preprocessing an input image, enhancing the contrast of the original safety measure image through Gaussian filtering, edge detection and the like, eliminating noise, and uniformly cutting the input image into images with the size of 512 × 512 pixels.
S200, carrying out image segmentation on the preprocessed safety measure image by utilizing a U-net convolution network, segmenting pixel blocks of an upper cabinet door, a middle cabinet door, a lower cabinet door, a nameplate, an interval name and a barrier of the switch cabinet, and respectively determining the corresponding positions of the pixel blocks;
the method specifically comprises the following steps:
The U-net convolutional network comprises convolutional layers, feature fusion layers, down-sampling layers and up-sampling layers, and has strong feature extraction capability. The encoding part on the left uses convolutional layers and down-sampling layers to obtain high-dimensional features of the image, and the decoding part on the right uses up-sampling layers and convolutional layers to recover the high-level abstract features of the feature map that were lost during down-sampling. The schematic diagram of the U-net structure is shown in FIG. 2.
The encoder and decoder extract feature information from the 512 × 512 pixel safety measure image. The encoding part of the U-net structure comprises 4 convolutional layers and 4 down-sampling layers; each convolutional layer performs two convolution operations, the down-sampling layers use max pooling, and the image sizes after the 4 down-samplings are 256 × 256, 128 × 128, 64 × 64 and 32 × 32 respectively. After the 512-dimensional high-dimensional features are obtained, the network performs two further convolution operations to obtain 1024-dimensional features. The decoding part comprises 4 up-sampling layers and 4 convolutional layers, and the feature map obtained after each up-sampling is fused and concatenated with the feature map of the same scale from the feature extraction part. The first 3 of the 4 convolutional layers each perform two convolution operations, while the last performs 3 convolution operations to restore the feature map to its original size, outputting a target region segmentation map for each specific region of the original image at 512 × 512 pixels.
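The encoder and decoder sizes quoted above can be verified with a few lines of arithmetic; this sketch (function name illustrative) traces how each of the 4 max-pooling steps halves 512 down to 32, and how each up-sampling doubles back to 512:

```python
def unet_shapes(size=512, levels=4):
    """Feature-map side lengths: each max-pool halves the resolution,
    each up-sampling step doubles it back toward the input size."""
    enc = [size // (2 ** i) for i in range(1, levels + 1)]   # after each down-sampling
    dec = enc[::-1][1:] + [size]                             # after each up-sampling
    return enc, dec

enc, dec = unet_shapes()
# enc == [256, 128, 64, 32], matching the four down-sampling stages above
```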
The convolution and up-sampling layers use the ReLU activation function $f(x) = \max(0, x)$.
The preprocessed safety measure image is segmented repeatedly with the U-net convolutional network, finally yielding region segmentation maps of the upper, middle and lower cabinet doors of the switch cabinet, the nameplates, the interval names and the barriers. Different regions are represented by pixel blocks of different colors; the segmentation result of the U-net network is shown in figure 3. The model also obtains the position of each pixel block.
Step S300, utilizing a Lenet-5 convolution network to identify and classify nameplates and interval names;
the method specifically comprises the following steps:
and identifying and classifying the contents of the segmented nameplate images and the interval name images by adopting a Lenet-5 convolution network. The Lenet-5 convolutional network includes an Input layer (Input layer), a convolutional layer (C1 layer), a pooling layer (S2 layer), a convolutional layer (C3 layer), a pooling layer (S4 layer), a convolutional layer (C5 layer), a full link layer (F6 layer), and an Output layer (Output layer), and has a structure shown in fig. 4.
The input layer uniformly normalizes the input image size to 32 × 32.
The C1 convolutional layer takes a 32 × 32 input and uses 6 convolution kernels of size 5 × 5; with valid (no-padding) convolution it outputs 6 feature maps of 28 × 28.
The S2 pooling layer takes the 28 × 28 inputs with a 2 × 2 sampling area over 6 feature maps; the 4 inputs in each area are summed, multiplied by a trainable coefficient, added to a trainable bias and passed through a sigmoid function, outputting 6 feature maps of 14 × 14.
The C3 convolutional layer takes 14 × 14 inputs and uses 16 convolution kernels of size 5 × 5; with valid convolution it outputs 16 feature maps of 10 × 10.
The S4 pooling layer takes the 10 × 10 inputs with a 2 × 2 sampling area over 16 feature maps; the 4 inputs in each area are summed, multiplied by a trainable coefficient, added to a trainable bias and passed through a sigmoid function, outputting 16 feature maps of 5 × 5.
The C5 convolutional layer takes 5 × 5 inputs and uses 120 convolution kernels of size 5 × 5; with valid convolution its output is a 120-dimensional vector.
The F6 fully-connected layer takes the 120-dimensional vector as input, computes the dot product between the input vector and its weight vector, adds a bias and passes the result through a sigmoid function. The output is an 84-dimensional vector.
The output layer has n neurons, representing the n classes 0 to n-1 and corresponding to the different outputs.
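The layer sizes listed above follow from 5 × 5 kernels without padding (which is why 32 shrinks to 28) and non-overlapping 2 × 2 pooling; a pure-Python trace of the shape arithmetic (helper names are illustrative):

```python
def conv_out(n, k):
    """Output side length of a valid (no-padding), stride-1 convolution."""
    return n - k + 1

def pool_out(n, s):
    """Output side length of non-overlapping s x s pooling."""
    return n // s

trace = [32]
trace.append(conv_out(trace[-1], 5))  # C1: 32 -> 28
trace.append(pool_out(trace[-1], 2))  # S2: 28 -> 14
trace.append(conv_out(trace[-1], 5))  # C3: 14 -> 10
trace.append(pool_out(trace[-1], 2))  # S4: 10 -> 5
trace.append(conv_out(trace[-1], 5))  # C5: 5 -> 1 (over 120 kernels: a 120-dim vector)
# trace == [32, 28, 14, 10, 5, 1]
```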
The convolutional layer extracts image feature data: a convolution kernel slides over the image and is convolved with the local image data to generate a feature map. As the kernel traverses the input image, the computation is expressed as

$x_j^l = f\left(\sum_{i \in P_j} x_i^{l-1} * k_j^l + b_j^l\right)$ (10)

In the formula (10), $x_j^l$ is the value of the j-th neuron in layer l; $P_j$ is the convolutional receptive field of the j-th neuron; $x_i^{l-1}$ is the value of the i-th neuron in layer l-1; $k_j^l$ is the convolution kernel of the j-th neuron in layer l; $b_j^l$ is the bias of the j-th neuron in layer l; $f(x)$ is the activation function.
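A direct numpy rendering of the computation in formula (10) for a single input map and a single kernel, using the ReLU activation mentioned earlier; this is a sketch, written in the cross-correlation convention common in CNN code:

```python
import numpy as np

def relu(x):
    # f(x) = max(0, x), the activation used by the convolutional layers
    return np.maximum(0.0, x)

def conv2d_single(x, k, b):
    """Valid convolution of one input map with one kernel:
    each output neuron sums over its receptive field, adds the bias,
    then passes through the activation f."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k) + b
    return relu(out)

fmap = conv2d_single(np.ones((3, 3)), np.ones((2, 2)), 0.0)
# each output element sums a 2 x 2 patch of ones -> 4.0, output shape (2, 2)
```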
The role of the pooling layer is to aggregate the feature data and reduce its dimensionality. It performs dimension reduction on the feature planes produced by the convolutional layer, reducing the number of training parameters and improving computational efficiency. The pooling methods include max pooling and mean pooling, and the computation can be expressed as

$x_j^l = f\left(\operatorname{pool}_{i \in P_j}\left(x_i^{l-1}\right) + b_j^l\right)$ (11)

In the formula (11), $x_j^l$ is the value of the j-th neuron in layer l; $x_i^{l-1}$ is the value of the i-th neuron in layer l-1; $P_j$ is the receptive field of the j-th neuron; $b_j^l$ is the bias of the j-th neuron in layer l; $\operatorname{pool}(x)$ is the sampling function.
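Max pooling, one realization of the pool(x) sampling function in formula (11), can be written compactly in numpy (an illustrative sketch; bias and activation omitted):

```python
import numpy as np

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling over a 2-D feature map."""
    H, W = x.shape
    # trim to a multiple of s, then reduce each s x s block to its maximum
    return x[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).max(axis=(1, 3))

pooled = max_pool(np.arange(16.0).reshape(4, 4))
# pooled == [[5, 7], [13, 15]]: each 2 x 2 block keeps its largest value
```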
The output layer is connected through a radial basis function (RBF) network, computed as

$y_i = \sum_j \left(x_j - \omega_{ij}\right)^2$ (12)

In the formula (12), $y_i$ is the i-th neuron of the output layer; $x_j$ is the j-th neuron of the fully-connected layer; $\omega_{ij}$ is the weight between the j-th neuron of the fully-connected layer and the i-th neuron of the output layer.
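The RBF output of formula (12) can be checked numerically; in this sketch the prototype weight vectors are toy values, and the smallest y_i indicates the matching class:

```python
import numpy as np

def rbf_output(x, w):
    """y_i = sum_j (x_j - w_ij)^2; the class whose prototype row of w
    is closest to the input x yields the smallest output."""
    return ((x[None, :] - w) ** 2).sum(axis=1)

x = np.array([1.0, 0.0])
w = np.array([[1.0, 0.0],   # toy prototype of class 0
              [0.0, 1.0]])  # toy prototype of class 1
y = rbf_output(x, w)
# y == [0.0, 2.0], so argmin selects class 0
```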
The segmented nameplate and interval-name images are fed into the Lenet-5 convolutional network; after several convolution and pooling layers they reach the fully-connected layer, which combines the groups of features produced by the pooling layers into a single output and identifies the nameplate type and the interval name. Feeding the interval-name segmentation map into the Lenet-5 network structure finally identifies the interval name 'Chengdi A600 line switch'.
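A quick shape trace of the classic Lenet-5 pipeline (Input through C1, S2, C3, S4, C5); note that the stated sizes follow the "valid" convolution size rule out = in - kernel + 1, with each 2 × 2 pooling layer halving the spatial size.

```python
# Hedged shape trace of the Lenet-5 pipeline.
def conv_out(size, kernel=5):
    return size - kernel + 1

def pool_out(size, window=2):
    return size // window

s = 32           # Input layer: 32 x 32
s = conv_out(s)  # C1: 28 x 28 (6 feature maps)
s = pool_out(s)  # S2: 14 x 14
s = conv_out(s)  # C3: 10 x 10 (16 feature maps)
s = pool_out(s)  # S4: 5 x 5
s = conv_out(s)  # C5: 1 x 1 over 120 kernels, i.e. a 120-dim vector
```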
In addition, identical nameplates and identical interval names are grouped into one class, and the field safety-measure schematic diagram is finally obtained.
Step S400, determining the types and numbers of nameplates on the upper, middle, and lower cabinet doors and the barrier of the switch cabinet;
the method specifically comprises the following steps:
Combining the pixel-block positions of the regions obtained in step S200 with the interval names and nameplate types identified in step S300 yields the types and numbers of nameplates on the upper, middle, and lower cabinet doors and the barrier of the interval cabinet. Specifically, the field safety-measure images are preprocessed and segmented by the U-net convolutional network, the nameplates and interval names are identified and classified by the Lenet-5 convolutional network, and combining the results shows that the upper cabinet door of the Chengdi A600 line switch cabinet carries a 'Work here' nameplate, the middle cabinet door carries a 'No switching on, men at work!' nameplate and a 'Stop, high voltage danger!' nameplate, the barrier carries two 'Stop, high voltage danger!' nameplates and one 'Enter and exit from here' nameplate, and the upper and middle cabinet doors of the intervals on both sides carry 'Stop, high voltage danger!' nameplates.
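The combination in step S400 can be sketched as a point-in-region test: each identified nameplate is counted under the region whose pixel block contains its centre point. The region boxes and nameplate detections below are illustrative values, not actual outputs of the U-net or Lenet-5 networks.

```python
# Hedged sketch of step S400: count nameplates per region.
regions = {
    "upper cabinet door": (0, 0, 200, 100),     # (x0, y0, x1, y1)
    "middle cabinet door": (0, 100, 200, 200),
}
nameplates = [
    ("Work here", (50, 40)),                    # (type, centre point)
    ("Stop, high voltage danger", (60, 150)),
]

def inside(box, pt):
    x0, y0, x1, y1 = box
    return x0 <= pt[0] < x1 and y0 <= pt[1] < y1

counts = {name: {} for name in regions}
for label, centre in nameplates:
    for name, box in regions.items():
        if inside(box, centre):
            counts[name][label] = counts[name].get(label, 0) + 1
```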
Step S500, converting the safety measure image into a safety measure text and outputting the safety measure text;
the method specifically comprises the following steps:
Using the types and numbers of nameplates on the upper, middle, and lower cabinet doors and the barrier of the switch cabinet obtained in step S400, a safety-measure arrangement text is generated with a uniform safety-measure arrangement template and the output is completed.
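As a hedged sketch of this step, the per-location nameplate counts from step S400 can be rendered into sentences with one uniform template. The template wording, locations, and sign names below are illustrative, not the patent's actual arrangement template.

```python
# Hedged sketch of step S500: template-based text generation.
counts = {
    ("upper cabinet door", "Work here"): 1,
    ("switch trolley operating hole", "No switching on, men at work!"): 1,
    ("middle cabinet door", "Stop, high voltage danger!"): 1,
}

TEMPLATE = "Hang {n} '{sign}' signboard(s) on the {place}."

lines = [TEMPLATE.format(n=n, sign=sign, place=place)
         for (place, sign), n in counts.items()]
text = "\n".join(lines)  # the generated safety-measure arrangement text
```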
In a specific embodiment, the generated safety-measure content is as follows: a 'Work here' signboard is hung on the upper cabinet door of the Chengdi A600 line switch cabinet; a 'No switching on, men at work!' signboard is hung at the operating hole of the switch trolley of the Chengdi A600 line switch cabinet; a 'Stop, high voltage danger!' signboard is hung on the middle cabinet door of the Chengdi A600 line switch cabinet; barriers are erected on both sides of the Chengdi A600 line interval, 'Stop, high voltage danger!' signboards are hung on the barriers, and an 'Enter and exit from here' signboard is hung at the barrier entrance; 'Stop, high voltage danger!' signboards are hung on the upper and middle cabinet doors of the running intervals on both sides.
Step S600, comparing with the safety measures in the work ticket to check the correctness of the safety measures.
The method specifically comprises the following steps:
The converted and output safety-measure text is compared with the safety-measure text in the work ticket to check the correctness of the safety measures; if the field safety measures are incorrect, the safety measures are rearranged.
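The comparison in step S600 can be sketched as a line-by-line text diff; the texts below are illustrative placeholders for the generated text and the work-ticket text.

```python
import difflib

# Hedged sketch of step S600: any differing line means the field safety
# measures must be rearranged.
generated = "Hang 1 'Work here' signboard on the upper cabinet door."
ticket = "Hang 1 'Work here' signboard on the upper cabinet door."

measures_correct = generated == ticket
diff = list(difflib.unified_diff(generated.splitlines(),
                                 ticket.splitlines(),
                                 lineterm=""))
```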
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (7)
1. A transformer substation safety measure identification method based on deep learning is characterized by comprising the following steps:
Step S100, collecting and inputting safety-measure images of the working site while performing image preprocessing;
Step S200, performing image segmentation on the preprocessed safety-measure image by using a U-net convolutional network, segmenting pixel blocks of the upper, middle, and lower cabinet doors, nameplates, interval names, and barriers of the switch cabinet, and determining the corresponding position of each pixel block;
Step S300, identifying and classifying nameplates and interval names by using a Lenet-5 convolutional network;
Step S400, determining the types and numbers of nameplates on the upper, middle, and lower cabinet doors and the barrier of the switch cabinet;
Step S500, converting the safety-measure image into a safety-measure text and completing the output;
Step S600, comparing with the safety measures in the work ticket to check the correctness of the safety measures.
and step S600, comparing with the safety measures in the work ticket to check the correctness of the safety measures.
2. The deep learning-based substation security measure identification method according to claim 1, characterized in that: the step S100 specifically comprises: acquiring images of the safety measures at the working site, and then inputting the acquired safety-measure images into an intelligent identification system;
because the input images may suffer from non-uniform size, excessive resolution, and excessive noise, which affect the training process of the model and the segmentation performance of the U-net convolutional network, the input images are first preprocessed: Gaussian filtering, edge detection, and related techniques are used to enhance the contrast of the original safety-measure images and remove noise, and the input images are uniformly cropped to 512 × 512 pixels.
3. The deep learning-based substation security measure identification method according to claim 1, characterized in that: the step S200 specifically comprises: the U-net convolutional network structure comprises convolution layers, feature-fusion layers, down-sampling layers, and up-sampling layers, and has strong feature-extraction capability; the left encoding part of the U-net structure uses convolution and down-sampling layers to obtain high-dimensional features of the image, and the right decoding part uses up-sampling and convolution layers to recover the high-level abstract features of the feature map that were lost during down-sampling;
an encoder and a decoder are used to extract feature information from the 512 × 512-pixel safety-measure image; the left encoding part of the U-net structure comprises 4 convolution layers and 4 down-sampling layers, each convolution layer comprising two convolution operations; the down-sampling layers use maximum pooling, and the image sizes after the 4 down-sampling steps are 256 × 256, 128 × 128, 64 × 64, and 32 × 32 respectively; after obtaining the 512-dimensional high-dimensional features, the network performs two further convolution operations to obtain 1024-dimensional features; the right decoding part comprises 4 up-sampling layers and 4 convolution layers, and the feature map obtained after each up-sampling step is fused and concatenated with the feature map of the same scale from the feature-extraction part of the network; the first 3 of the 4 convolution layers each comprise two convolution operations, while the last convolution layer comprises 3 convolution operations to restore the feature map to the size of the original image, outputting a target-region segmentation map corresponding to the specific regions in the original image, with an output size of 512 × 512 pixels;
wherein the convolution and up-sampling operations employ the Relu activation function f(x) = max(0, x);
the preprocessed safety-measure image is segmented multiple times by the U-net convolutional network, finally yielding region segmentation maps of the upper, middle, and lower cabinet doors, nameplates, interval names, and barriers of the switch cabinet; different regions are represented by pixel blocks of different colors, and the corresponding position of each pixel block is obtained at the same time.
4. The deep learning-based substation security measure identification method according to claim 1, characterized in that: the step S300 specifically comprises: identifying and classifying the contents of the segmented nameplate images and interval-name images by using a Lenet-5 convolutional network; the Lenet-5 convolutional network comprises an Input layer, a convolution layer (C1 layer), a pooling layer (S2 layer), a convolution layer (C3 layer), a pooling layer (S4 layer), a convolution layer (C5 layer), a fully-connected layer (F6 layer), and an Output layer;
the Input layer uniformly normalizes the input image size to 32 × 32;
in the convolution layer (C1 layer), the input image is 32 × 32, the convolution kernel size is 5 × 5, and there are 6 convolution kernels; with the SAME padding mode adopted in the convolution operation, 6 feature maps of 28 × 28 are output;
in the pooling layer (S2 layer), the input feature maps are 28 × 28, the sampling window is 2 × 2, and there are 6 sampling channels; each sampling unit adds its 4 input values, multiplies the sum by a trainable coefficient, adds a trainable bias, and outputs through a Sigmoid function, yielding 6 feature maps of 14 × 14;
in the convolution layer (C3 layer), the input feature maps are 14 × 14, the convolution kernel size is 5 × 5, and there are 16 convolution kernels; with the SAME padding mode adopted in the convolution operation, 16 feature maps of 10 × 10 are output;
in the pooling layer (S4 layer), the input feature maps are 10 × 10, the sampling window is 2 × 2, and there are 16 sampling channels; each sampling unit adds its 4 input values, multiplies the sum by a trainable coefficient, adds a trainable bias, and outputs through a Sigmoid function, yielding 16 feature maps of 5 × 5;
in the convolution layer (C5 layer), the input feature maps are 5 × 5, the convolution kernel size is 5 × 5, and there are 120 convolution kernels; with the SAME padding mode adopted in the convolution operation, the output is a 120-dimensional vector;
the input to the fully-connected layer (F6 layer) is a 120-dimensional vector; the layer computes the dot product between the input vector and each weight vector, adds a bias, and outputs the result through a sigmoid function; the output is an 84-dimensional vector;
the Output layer has n neurons in total, representing the n classes 0 to n-1, each corresponding to a different output;
the convolution layer extracts image features: a convolution kernel slides over the image and is convolved with the local image data to produce a feature map; as the kernel traverses the input image, the computation is expressed as:
x_j^l = f( Σ_{i∈P_j} x_i^{l-1} · k_j^l + b_j^l )    (2)
in formula (2), x_j^l is the value of the j-th neuron in layer l; P_j is the convolution receptive-field region of the j-th neuron; x_i^{l-1} is the value of the i-th neuron in layer l-1; k_j^l is the convolution kernel of the j-th neuron in layer l; b_j^l is the bias of the j-th neuron in layer l; f(x) is the activation function;
the pooling layer aggregates the feature data and reduces its dimensionality; it applies dimension-reduction processing to the feature planes produced by the preceding convolution layer, which reduces the number of training parameters and improves computational efficiency; pooling methods include maximum pooling and mean pooling, and the computation can be expressed as:
x_j^l = pool_{i∈P_j}( x_i^{l-1} ) + b_j^l    (3)
in formula (3), x_j^l is the value of the j-th neuron in layer l; x_i^{l-1} is the value of the i-th neuron in layer l-1; P_j is the receptive-field region of the j-th neuron; b_j^l is the bias of the j-th neuron in layer l; pool(x) is the sampling function;
the output layer is connected through a Radial Basis Function (RBF) network, computed as:
y_i = Σ_j ( x_j - ω_ij )²    (4)
in formula (4), y_i is the i-th neuron of the output layer; x_j is the j-th neuron of the fully-connected layer; ω_ij is the weight between the j-th neuron of the fully-connected layer and the i-th neuron of the output layer;
the segmented nameplate and interval-name images are fed into the Lenet-5 convolutional network and, after several convolution and pooling layers, into the fully-connected layer; the fully-connected layer combines the groups of features produced by the pooling layers into a single output and identifies the nameplate type and the interval name.
5. The deep learning-based substation security measure identification method according to claim 1, characterized in that: the step S400 specifically comprises: combining the pixel-block positions of the regions obtained in step S200 with the interval names and nameplate types identified in step S300 to obtain the types and numbers of nameplates on the upper, middle, and lower cabinet doors and the barrier of the interval cabinet; specifically, the pixel-block positions show that the pixel block of the upper cabinet door of the switch cabinet contains nameplate 1 and nameplate 2, and since the types of nameplate 1 and nameplate 2 are identified in step S300, the types and number of nameplates on the upper cabinet door of the switch cabinet are obtained.
6. The deep learning-based substation security measure identification method according to claim 1, characterized in that: the step S500 specifically comprises: using the types and numbers of nameplates on the upper, middle, and lower cabinet doors and the barrier of the switch cabinet obtained in step S400, generating a safety-measure arrangement text with a uniform safety-measure arrangement template and completing the output.
7. The deep learning-based substation security measure identification method according to claim 1, characterized in that: the step S600 specifically comprises: comparing the converted and output safety-measure text with the safety-measure text in the work ticket to check the correctness of the safety measures; if the field safety measures are incorrect, the safety measures are rearranged.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011117989.6A CN112257547A (en) | 2020-10-19 | 2020-10-19 | Transformer substation safety measure identification method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112257547A true CN112257547A (en) | 2021-01-22 |
Family
ID=74244807
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011117989.6A Pending CN112257547A (en) | 2020-10-19 | 2020-10-19 | Transformer substation safety measure identification method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112257547A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105825329A (en) * | 2016-03-14 | 2016-08-03 | 国网福建省电力有限公司 | Safety and quality inspection method based on remote working site monitoring system |
CN108009547A (en) * | 2017-12-26 | 2018-05-08 | 深圳供电局有限公司 | A kind of nameplate recognition methods of substation equipment and device |
CN108833831A (en) * | 2018-06-15 | 2018-11-16 | 陈在新 | A kind of power construction intelligent safety monitor system |
CN109784228A (en) * | 2018-12-28 | 2019-05-21 | 苏州易助能源管理有限公司 | A kind of photovoltaic plant identifying system and method based on image recognition technology |
WO2019220474A1 (en) * | 2018-05-15 | 2019-11-21 | Universita' Degli Studi Di Udine | Apparatus and method to classify full waveform data from retro-flected signals |
2020-10-19: Application filed (CN202011117989.6A); published as CN112257547A; current status: Pending
Non-Patent Citations (3)
Title |
---|
Paolo Galeone: "TensorFlow 2.0 Neural Network Practice" (Chinese edition), China Machine Press, 31 July 2020 *
Lu Yusheng: "Deep Neural Networks on Mobile Platforms in Practice: Principles, Architecture and Optimization", China Machine Press, 31 January 2020 *
Mai Junjia et al.: "Object detection applications for aerial photographs of transmission lines based on deep learning", Guangdong Electric Power *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115620241A (en) * | 2022-12-15 | 2023-01-17 | 南京电力自动化设备三厂有限公司 | Image processing-based field safety measure identification method and device |
CN115620241B (en) * | 2022-12-15 | 2023-04-18 | 南京电力自动化设备三厂有限公司 | Image processing-based field safety measure identification method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112967243B (en) | Deep learning chip packaging crack defect detection method based on YOLO | |
CN106219367B (en) | A kind of elevator O&M monitoring method based on intelligent vision light curtain | |
CN109740463A (en) | A kind of object detection method under vehicle environment | |
CN111339883A (en) | Method for identifying and detecting abnormal behaviors in transformer substation based on artificial intelligence in complex scene | |
CN106407903A (en) | Multiple dimensioned convolution neural network-based real time human body abnormal behavior identification method | |
CN104268588B (en) | Railway wagon brake shoe pricker loses the automatic testing method of failure | |
CN109840523A (en) | A kind of municipal rail train Train number recognition algorithm based on image procossing | |
CN111738336A (en) | Image detection method based on multi-scale feature fusion | |
CN108275530A (en) | A kind of elevator safety method for early warning based on machine learning | |
Ozcelik et al. | A vision based traffic light detection and recognition approach for intelligent vehicles | |
He et al. | Obstacle detection in dangerous railway track areas by a convolutional neural network | |
CN112257547A (en) | Transformer substation safety measure identification method based on deep learning | |
CN111080599A (en) | Fault identification method for hook lifting rod of railway wagon | |
CN106355187A (en) | Application of visual information to electrical equipment monitoring | |
CN104331708B (en) | A kind of zebra crossing automatic detection analysis method and system | |
CN113255519A (en) | Crane lifting arm identification system and multi-target tracking method for power transmission line dangerous vehicle | |
CN109002753A (en) | One kind being based on the cascade large scene monitoring image method for detecting human face of convolutional neural networks | |
CN113179389A (en) | System and method for identifying crane jib of power transmission line dangerous vehicle | |
CN113158954A (en) | Automatic traffic off-site zebra crossing area detection method based on AI technology | |
Aghdasi et al. | Automatic licence plate recognition system | |
Gong et al. | RETRACTED ARTICLE: Anomaly Detection of High-Speed Railway Catenary Damage | |
CN116503809A (en) | Post-processing method for intelligent factory behavior wearing false alarm filtering | |
Batapati et al. | Video analysis for traffic anomaly detection using support vector machines | |
Revathi et al. | Indian sign board recognition using image processing techniques | |
Tayo et al. | Vehicle license plate recognition using edge detection and neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20210122 |