CN115393691A - Automatic detection method for on-off state of relay protection pressing plate based on Mask_RCNN algorithm

Automatic detection method for on-off state of relay protection pressing plate based on Mask_RCNN algorithm

Info

Publication number
CN115393691A
Authority
CN
China
Prior art keywords
mask
target
pressing plate
loss
regression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202211079374.8A
Other languages
Chinese (zh)
Inventor
毛华敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Yangtze Power Co Ltd
Original Assignee
China Yangtze Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Yangtze Power Co Ltd filed Critical China Yangtze Power Co Ltd
Priority to CN202211079374.8A
Publication of CN115393691A
Legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

A Mask_RCNN algorithm-based automatic detection method for the switching state of a relay protection pressing plate belongs to the field of relay protection and comprises the following steps. Step 1: design the image recognition and image segmentation algorithm based on the Mask-RCNN algorithm. Step 2: make the data set, i.e. first obtain the original data, then label the on-off state of the pressing plates, and finally build the data set. Step 3: train the model, i.e. under the TensorFlow framework, recognition model files with different accuracies can be obtained by adjusting the learning rate and the epoch value among the hyper-parameters. The method solves the problems that acquiring and monitoring the state data of substation secondary-circuit pressing plates is difficult and that manual inspection is labor-intensive, and it greatly improves the operation and maintenance efficiency of intelligent substations.

Description

Automatic detection method for on-off state of relay protection pressing plate based on Mask_RCNN algorithm
Technical Field
The invention belongs to the field of relay protection, and particularly relates to an automatic detection method for the switching state of a relay protection pressing plate based on the Mask_RCNN algorithm.
Background
In current power production, whether a protection pressing plate should be switched on or off is determined by the actual operating requirements of the equipment. Production personnel face a massive number of pressing plate states, and without effective supervision, pressing plates may be operated by mistake. A computer can analyze large volumes of protection pressing plate images, determine the on-off state of each pressing plate, and transmit accurate pressing plate information to the equipment state management system, so that operators can better grasp the state of the equipment.
In recent years, deep learning algorithms have been widely applied to image segmentation, image recognition, image understanding and other tasks on massive image data. Common deep-learning-based target detection algorithms include the Faster R-CNN algorithm, the R-FCN algorithm and the Mask-RCNN algorithm. Among them, the Mask-RCNN algorithm has been widely applied since March 2018. The Mask-RCNN algorithm solves image recognition and image segmentation at the same time, and its pyramid residual backbone network together with RoI Align improves recognition precision.
The defects and shortcomings of the prior art are as follows:
The algorithm in the patent 'Detection method, device and processing equipment for electric protection pressing plate of transformer substation' is used to identify and segment objects; it is complex and its detection efficiency is low.
The patent 'Transformer substation secondary circuit protection pressing plate detection system and method based on artificial intelligence' adopts a Mean Shift image segmentation algorithm to remove the background and extract the target, and an HOG + SVM algorithm for recognition. The algorithm is complex and the detection efficiency is reduced.
In the patent 'YOLOv5-based dense multi-target detection method for checking hard pressing plates of relay protection screen cabinets', a YOLOv5s network structure is trained under PyTorch to obtain a pressing plate recognition model; the image segmentation function is omitted, so the pressing plate area cannot be accurately located.
Disclosure of Invention
In view of the technical problems in the background art, the automatic detection method for the switching state of a relay protection pressing plate based on the Mask_RCNN algorithm solves the problems that acquiring and monitoring the state data of substation secondary-circuit pressing plates is difficult and that manual inspection is labor-intensive, and it greatly improves the operation and maintenance efficiency of intelligent substations.
In order to solve the technical problems, the invention adopts the following technical scheme to realize:
a method for automatically detecting the switching state of a relay protection pressing plate based on a Mask _ RCNN algorithm comprises the following steps:
step 1: carrying out image recognition and image segmentation algorithm design based on a Mask-RCNN algorithm:
step 1.1: the Mask-RCNN algorithm framework is composed of a backbone network, a region proposal network (RPN), an identification branch, a localization branch and a mask branch; the backbone network extracts image convolution features and generates feature maps of different scales, which are then sent into the region proposal network; the region proposal network generates foreground proposal regions, which are then sent to the subsequent identification, localization and mask branches for classification and regression calculation;
step 1.2: perform classification and regression: select the feature map to cut into according to the size of the target foreground proposal region, using the grading criterion of equation (1); equation (1) is as follows:
k = \lfloor k_0 + \log_2(\sqrt{wh} / 224) \rfloor  (1)
wherein: wh denotes the area of the foreground proposal region (w and h being its width and height);
k_0 denotes the level at which a foreground proposal region of area wh should be located;
the initial value of k_0 is set to 4, and the size of the foreground proposal region is smaller than 224 × 224;
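As a minimal illustration of the level-selection rule in equation (1), the Python sketch below computes the pyramid level for a proposal of a given size; the function name fpn_level and the clamping bounds k_min/k_max (restricting the result to levels P2-P5) are illustrative assumptions rather than elements of the claimed method.

```python
import math

def fpn_level(w, h, k0=4, canonical=224, k_min=2, k_max=5):
    """Pick the feature-pyramid level for a w x h proposal using
    k = floor(k0 + log2(sqrt(w*h) / canonical)), clamped to [k_min, k_max]."""
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / canonical))
    return max(k_min, min(k_max, k))

# A 112 x 112 proposal maps to a finer level than a 448 x 448 proposal.
print(fpn_level(112, 112))  # -> 3
print(fpn_level(448, 448))  # -> 5
```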
step 1.3: mask branch: the feature maps of different levels after feature fusion are sent into RoI Align (a pooling layer) for pooling and then, through the FPN head (an inverted-pyramid structure), into a fully convolutional network to generate the mask; the category and coordinate information obtained from the classification-regression branch is sent into a four-layer convolutional network activated by the ReLU function, and the pixels of the target category are set to 1; the result is then up-sampled through a deconvolution layer; targets of different scales select feature maps of different scales according to the rule, and finally the mask information is enlarged to the original image and the mask image is output;
step 1.4: designing a loss function;
step 2, making a data set:
firstly, obtaining original data;
secondly, marking the opening and closing state of the pressing plate:
thirdly, making a data set;
step 3, training the model: under the TensorFlow framework (an industry-recognized deep learning framework), recognition model files with different accuracies can be obtained by adjusting the learning rate and the epoch value (one epoch meaning that all samples are trained once) among the hyper-parameters.
Preferably, in step 1.4, the Mask-RCNN algorithm completes target identification, localization and mask generation; the training error is produced by the region proposal network, the classification-regression branch and the mask branch, and the errors of these 3 parts are combined when designing the loss function, giving:
L = L_{Rcls} + L_{Rreg} + L_{Fcls} + L_{Freg} + L_{mask}  (2)
wherein: L_{Rcls} denotes the region proposal network (RPN) classification loss; L_{Rreg} denotes the region proposal network regression loss; L_{Fcls} denotes the target classification branch loss; L_{Freg} denotes the target regression branch loss; L_{mask} denotes the target mask branch loss;
the classification loss function consists of the region proposal network classification loss and the target classification loss;
the regression loss consists of the region proposal network regression loss and the target regression loss;
1) Classification loss:
L_{cls} = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*)  (3)
The anchors generated by the region proposal network are divided only into foreground and background; the label of the foreground is 1 and the label of the background is 0. In the RPN, L_{cls}(p_i, p_i^*) is the logarithmic loss over the two classes (target and non-target); in the target classification branch, L_{cls}(p_i, p_i^*) is the cross-entropy loss over multiple classes;
2) Regression loss:
The L_1 norm is selected as the regression loss function; to ensure smoothness of the loss function, the L_1 norm is transformed into a piecewise function with the following expression:
smooth_{L_1}(x) = \begin{cases} 0.5x^2, & |x| < 1 \\ |x| - 0.5, & \text{otherwise} \end{cases}  (4)
The regression loss function is shown in equation (5):
L_{reg} = \frac{1}{N_{reg}} \sum_i p_i^* \, smooth_{L_1}(t_i - t_i^*)  (5)
wherein: t_i = \{t_x, t_y, t_w, t_h\} denotes the position of the predicted box; t_i^* = \{t_x^*, t_y^*, t_w^*, t_h^*\} denotes the position of the actual box; L_{reg}(t_i, t_i^*) = smooth_{L_1}(t_i - t_i^*).
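For concreteness, the classification log loss of equation (3) and the smooth-L1 regression loss of equations (4)-(5) can be written out as the NumPy sketch below. This is an illustrative re-implementation under the usual Faster R-CNN conventions, not the patent's training code; the box deltas t and t* are assumed to be already encoded, and the function names are hypothetical.

```python
import numpy as np

def smooth_l1(x):
    """Element-wise piecewise smooth-L1 of equation (4)."""
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5)

def cls_loss(p, p_star, eps=1e-7):
    """Equation (3): mean log loss between predicted class probabilities p
    (N x C, rows summing to 1) and one-hot labels p_star (N x C)."""
    p = np.clip(p, eps, 1.0)
    return float(-np.mean(np.sum(p_star * np.log(p), axis=1)))

def reg_loss(t, t_star, fg):
    """Equation (5): smooth-L1 over the 4 box deltas (N x 4), counted only
    for foreground samples (fg is an N-vector of 1 for foreground, 0 else)."""
    per_box = smooth_l1(t - t_star).sum(axis=1)
    n_fg = max(float(fg.sum()), 1.0)
    return float((fg * per_box).sum() / n_fg)
```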
(3) Mask branch loss function:
The target classification-regression branch yields k target detection results, and the target segmentation outputs k matrices of size a × a, in which a pixel is set to 1 if it belongs to the target and to 0 otherwise; a logarithmic loss function is selected to measure the target segmentation result; then, for an a × a candidate box, the loss function of equation (6) is as follows:
L_{mask} = -\frac{1}{a^2} \sum_{1 \le i,j \le a} \left[ y_{ij} \log \hat{y}_{ij}^k + (1 - y_{ij}) \log(1 - \hat{y}_{ij}^k) \right]  (6)
wherein: y_{ij} is the actual label value of pixel (i, j) within the a × a candidate box;
\hat{y}_{ij}^k is the predicted label value of the k-th category for pixel (i, j) within the a × a candidate box.
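A minimal sketch of the per-pixel mask loss of equation (6) follows: binary cross-entropy averaged over the a × a grid, evaluated only on the mask of the predicted class k (the other masks do not contribute). The code is illustrative rather than the patent's implementation.

```python
import numpy as np

def mask_loss(y_true, y_pred_k, eps=1e-7):
    """Equation (6): average binary cross-entropy over an a x a candidate box.
    y_true   -- a x a array of 0/1 ground-truth pixel labels
    y_pred_k -- a x a array of predicted probabilities for the target class k"""
    p = np.clip(y_pred_k, eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p)))
```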
Preferably, in step 2, the method for making the data set comprises:
First, obtain the raw data: collect images of the protection device pressing plates from the protection panels in the plant, remove blurred pictures, and screen out the images that meet the requirements.
Second, label the on-off state of the pressing plates: manually label the different on-off states of the pressing plates one by one using the labelme software. ON indicates that the pressing plate is switched on (closed); OFF indicates that it is switched off (open). After labelme annotation, json files corresponding one-to-one with the pictures are obtained; each file contains the vertex coordinates and the category of the bounding boxes drawn manually in the image.
Third, make the data set: the json files contain the picture number, category and coordinate information of the original pictures. The dataset conversion code bundled with labelme is then used to convert them into labelme_json folders. The files are then classified and sorted to obtain the four necessary dataset folders: cv2_mask, json, labelme_json and pic. The training data volume of each category obtained after final sorting is shown in Table 1. Using these data, the pressing plate on-off state model is trained.
Preferably, in step 3, one epoch means that all data are sent through the network to complete one forward computation and one back-propagation pass; during training the learning rate is set to 0.001, 0.0009, 0.0008, 0.0007 and 0.0005 respectively, and the epoch count is set to 20, 30, 40, 50 and 60 respectively; model training is carried out with these parameters and the best-performing model is finally selected, with a learning rate of 0.001 and the epoch set to 50; the main parameter settings are: learning rate 0.001, epoch 50, batch_size 1, anchor scales (128, 256, 512) and aspect_ratio (1:3, 1:1, 3:1); the accuracy of the model is recorded on the test set after training is finished.
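The grid of learning rates and epoch counts quoted above can be explored with a loop like the one below; train_and_evaluate is a hypothetical placeholder for whatever Mask-RCNN training routine is used, and the sketch only shows how the best (learning rate, epoch) pair would be selected on a validation metric.

```python
from itertools import product

LEARNING_RATES = [0.001, 0.0009, 0.0008, 0.0007, 0.0005]
EPOCH_COUNTS = [20, 30, 40, 50, 60]

def sweep(train_and_evaluate):
    """train_and_evaluate(lr, epochs) -> validation accuracy (hypothetical)."""
    best_lr, best_ep, best_acc = None, None, -1.0
    for lr, ep in product(LEARNING_RATES, EPOCH_COUNTS):
        acc = train_and_evaluate(lr, ep)
        if acc > best_acc:
            best_lr, best_ep, best_acc = lr, ep, acc
    return best_lr, best_ep, best_acc  # the description reports 0.001 / 50 as best
```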
This patent achieves the following beneficial effects:
Compared with the prior art, the invention has the following beneficial effects: the Mask_RCNN algorithm is applied to the field of pressing plate recognition for the first time, and the trained model can perform image recognition and image segmentation at the same time. The automatic detection method for the switching state of a relay protection pressing plate based on the Mask_RCNN algorithm improves the recognition precision of the pressing plate state and provides technical support for automatic robot inspection. It solves the problems that acquiring and monitoring the state data of substation secondary-circuit pressing plates is difficult and that manual inspection is labor-intensive, greatly improves the operation and maintenance efficiency of intelligent substations, and obviously shortens the substation operation and maintenance cycle; meanwhile, it reduces the risk of mis-operation and raises the level of on-site operation, maintenance, overhaul and professional management.
Drawings
The invention is further illustrated by the following examples in conjunction with the accompanying drawings:
FIG. 1 is a flow chart of platen status identification according to the present invention;
FIG. 2 is a diagram of a Mask-RCNN network structure according to the present invention;
FIG. 3 is a diagram of the classification and regression branches of the present invention;
FIG. 4 is a diagram of the mask branch of the present invention;
FIG. 5 is an effect diagram of the recognition situation of the opening and closing state of the single pressing plate according to the present invention;
fig. 6 is an effect diagram of the identification condition of the pressing plate in the opening and closing state of the invention.
Detailed Description
Example 1:
the preferred scheme is shown in fig. 1 to 4. An automatic detection method for the switching state of a relay protection pressing plate based on the Mask_RCNN algorithm comprises the following steps:
step 1: carrying out image recognition and image segmentation algorithm design based on a Mask-RCNN algorithm:
step 1.1: the Mask-RCNN algorithm framework is composed of a backbone network (backbone), a region proposal network (RPN), an identification branch, a localization branch and a mask branch; the backbone network extracts image convolution features and generates feature maps of different scales, which are then sent into the region proposal network; the region proposal network generates foreground proposal regions (ROIs), which are then sent to the subsequent identification, localization and mask branches for classification and regression calculation;
step 1.2: classification and regression: the network structure of the classification and bounding-box regression branches in the Mask-RCNN algorithm is shown in FIG. 3. In the Mask-RCNN algorithm, because the backbone network is combined with a feature pyramid structure, it outputs multi-scale feature maps at different levels, and the feature map of each level after feature fusion must be sent to the RoI Align layer for processing; the features of the positive samples are then further extracted for classification, and at the same time the regression box is further corrected to obtain a more accurate target;
The region proposal network (RPN) outputs multi-scale prediction regions, so when performing classification and regression the key question is which scale of feature map the foreground proposal regions (ROIs) of different sizes should use. Given feature maps at 4 scales [P2, P3, P4, P5], the feature map to cut into can be selected according to the size of the target foreground proposal region (ROI) using the grading criterion of equation (1).
k = \lfloor k_0 + \log_2(\sqrt{wh} / 224) \rfloor  (1)
Wherein: wh represents the area of the feature map;
k 0 representing a level at which a foreground proposed Region (ROI) of area wh should be located;
k 0 setting the initial value to 4, and the size of the foreground proposed Region (ROI) to be smaller than 224 × 224, the foreground proposed region feature map should be cut out from the feature map with higher resolution, and the higher resolution feature map is deconvolved onto the input image, so that the receptive field is larger, and the accuracy of the small-size object can be increased.
Step 1.3: mask branch: the network structure of the mask branch in the Mask-RCNN algorithm is shown in fig. 4. The feature maps of different levels after feature fusion are sent to RoI Align (a pooling layer) for pooling and then, through the FPN head (an inverted-pyramid structure), into a fully convolutional network (FCN) to generate the mask.
The category and coordinate information obtained from the classification-regression branch is sent into a four-layer convolutional network activated by the ReLU function, and the pixels of the target category are set to 1; the result is then up-sampled through a deconvolution layer; targets of different scales select feature maps of different scales according to the rule, and finally the mask information is enlarged to the original image and the mask image is output.
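The final enlargement of the low-resolution mask back onto the original image can be illustrated with the sketch below. It assumes an a × a probability mask and an ROI box in image coordinates; OpenCV's cv2.resize performs the up-sampling, and the 0.5 binarisation threshold is an illustrative choice, not a value taken from the patent.

```python
import cv2
import numpy as np

def paste_mask(mask_small, box, image_shape, threshold=0.5):
    """Resize an a x a mask to its ROI box (x1, y1, x2, y2) and paste it into
    a full-size binary mask of shape image_shape = (H, W)."""
    x1, y1, x2, y2 = [int(v) for v in box]
    w, h = x2 - x1, y2 - y1
    resized = cv2.resize(mask_small.astype(np.float32), (w, h))
    full = np.zeros(image_shape, dtype=np.uint8)
    full[y1:y2, x1:x2] = (resized >= threshold).astype(np.uint8)
    return full
```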
Step 1.4: design the loss function: the Mask-RCNN algorithm completes target identification, localization and mask generation. The training error is mainly produced by the region proposal network (RPN), the classification-regression branch and the mask branch; compared with Faster-RCNN, only the loss function of the mask branch is added. The errors of these 3 parts should be combined when designing the loss function, giving:
L = L_{Rcls} + L_{Rreg} + L_{Fcls} + L_{Freg} + L_{mask}  (2)
wherein: L_{Rcls} denotes the RPN classification loss; L_{Rreg} denotes the RPN regression loss; L_{Fcls} denotes the target classification branch loss; L_{Freg} denotes the target regression branch loss; L_{mask} denotes the target mask branch loss.
The classification loss function consists of the region proposal network (RPN) classification loss and the target classification loss. The regression loss consists of the RPN regression loss and the target regression loss.
1) Classification loss:
L_{cls} = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*)  (3)
The anchors generated by the RPN are divided only into foreground and background, with the label of the foreground being 1 and the label of the background being 0. In the RPN, L_{cls}(p_i, p_i^*) is the logarithmic loss over the two classes (target and non-target); in the target classification branch, L_{cls}(p_i, p_i^*) is the cross-entropy loss over multiple classes.
2) Regression loss
The L_1 norm is selected as the regression loss function. To ensure smoothness of the loss function, the L_1 norm is transformed into a piecewise function, with the following specific expression:
smooth_{L_1}(x) = \begin{cases} 0.5x^2, & |x| < 1 \\ |x| - 0.5, & \text{otherwise} \end{cases}  (4)
The regression loss function is shown in equation (5):
L_{reg} = \frac{1}{N_{reg}} \sum_i p_i^* \, smooth_{L_1}(t_i - t_i^*)  (5)
wherein: t_i = \{t_x, t_y, t_w, t_h\} denotes the position of the predicted box; t_i^* = \{t_x^*, t_y^*, t_w^*, t_h^*\} denotes the position of the actual box; L_{reg}(t_i, t_i^*) = smooth_{L_1}(t_i - t_i^*).
(3) Mask branch loss function
The target classification-regression branch yields k target detection results, and the target segmentation outputs k matrices of size a × a, in which a pixel is set to 1 if it belongs to the target and to 0 otherwise. A logarithmic loss function is selected to measure the target segmentation result. Then, for an a × a candidate box, the loss function is as shown in equation (6):
L_{mask} = -\frac{1}{a^2} \sum_{1 \le i,j \le a} \left[ y_{ij} \log \hat{y}_{ij}^k + (1 - y_{ij}) \log(1 - \hat{y}_{ij}^k) \right]  (6)
wherein: y_{ij} is the actual label value of pixel (i, j) within the a × a candidate box;
\hat{y}_{ij}^k is the predicted label value of the k-th category for pixel (i, j) within the a × a candidate box.
Step 2, making a data set:
setting an experimental environment: the experimental environment is a Dell tower type workstation, the operating system is WINDOWS, the programming environment is based on python, the memory is 32G, the GPU is NVIDIA Geforce 1080Ti, the video memory is 11GB, and the video memory rate is 11Gpbs. Tensorflow based on deep learning framework [5] The process is carried out. The software versions of the deep learning development environment are as follows: CUDA 9.0, cuDNN 7.0.5, python3.6. Meanwhile, necessary third-party libraries of python such as Keras, opencv, jupyter notewood, numpy, matplotlib and Imgauge are installed.
Step 2.1: data set preparation:
in the first step, the raw data is obtained: images of the protection device pressing plates are collected from the protection panels in the plant, blurred pictures are removed, and the images that meet the requirements are screened out.
In the second step, the on-off state of the pressing plates is labeled: the different on-off states of the pressing plates are labeled manually, one by one, with the labelme software. ON indicates that the pressing plate is switched on (closed) and OFF indicates that it is switched off (open). After labelme annotation, json files corresponding one-to-one with the pictures are obtained; each file contains the vertex coordinates and the category of the bounding boxes drawn manually in the image.
In the third step, the data set is made: the json files contain the picture number, category and coordinate information of the original pictures. The dataset conversion code bundled with labelme is then used to convert them into labelme_json folders. The files are then classified and sorted to obtain the four necessary dataset folders: cv2_mask, json, labelme_json and pic. The training data volume of each category obtained after final sorting is shown in Table 1. Using these data, the pressing plate on-off state model is trained. The data volume of each state is as follows:
(Table 1: data volume of each state - table image not reproduced)
Step 3: train the model.
Training: under the TensorFlow framework, recognition model files with different accuracies can be obtained by adjusting the learning rate and the epoch value among the hyper-parameters. One epoch refers to sending all data through the network to complete one forward computation and one back-propagation pass. During training the learning rate is set to 0.001, 0.0009, 0.0008, 0.0007 and 0.0005 respectively, and the epoch count is set to 20, 30, 40, 50 and 60 respectively. Model training is carried out with these parameters, and the best-performing model is finally selected, with a learning rate of 0.001 and the epoch set to 50. The main parameter settings are: learning rate 0.001, epoch 50, batch_size 1, anchor scales (128, 256, 512) and aspect_ratio (1:3, 1:1, 3:1). The accuracy of the model is recorded on the test set after training is finished.
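Recording the precision on the test set can be as simple as counting correct ON/OFF predictions, as in the illustrative sketch below; predict_state stands in for the trained model's inference call and is a hypothetical name.

```python
def platen_accuracy(test_samples, predict_state):
    """test_samples: iterable of (image, label) pairs with labels 'ON'/'OFF';
    predict_state(image) -> 'ON' or 'OFF' (hypothetical inference wrapper)."""
    correct, total = 0, 0
    for image, label in test_samples:
        total += 1
        if predict_state(image) == label:
            correct += 1
    return correct / total if total else 0.0
```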
As shown in fig. 5-6, the experiments show that Mask-RCNN has good recognition and segmentation performance and can accurately identify the ON/OFF state of the pressing plate.
The above-described embodiments are merely preferred embodiments of the present invention and should not be construed as limiting it; the scope of the present invention is defined by the claims, including equivalents of the technical features described therein. Equivalent alterations and modifications within this scope are also intended to fall within the scope of the invention.

Claims (4)

1. A method for automatically detecting the switching state of a relay protection pressing plate based on a Mask_RCNN algorithm, characterized by comprising the following steps:
step 1: image recognition and image segmentation algorithm design are carried out based on a Mask-RCNN algorithm:
step 1.1: the Mask-RCNN algorithm framework consists of a backbone network, a region proposal network (RPN), an identification branch, a localization branch and a mask branch; the backbone network extracts image convolution features and generates feature maps of different scales, which are then sent into the region proposal network; the region proposal network generates foreground proposal regions, which are then sent to the subsequent identification, localization and mask branches for classification and regression calculation;
step 1.2: perform classification and regression: select the feature map to cut into according to the size of the target foreground proposal region, using the grading criterion of equation (1); equation (1) is as follows:
k = \lfloor k_0 + \log_2(\sqrt{wh} / 224) \rfloor  (1)
wherein: wh denotes the area of the foreground proposal region (w and h being its width and height);
k_0 denotes the level at which a foreground proposal region of area wh should be located;
the initial value of k_0 is set to 4, and the size of the foreground proposal region is smaller than 224 × 224;
step 1.3: mask branch: the feature maps of different levels after feature fusion are sent into RoI Align for pooling and then, through the FPN head, into a fully convolutional network to generate the mask; the category and coordinate information obtained from the classification-regression branch is sent into a four-layer convolutional network activated by the ReLU function, and the pixels of the target category are set to 1; the result is then up-sampled through a deconvolution layer; targets of different scales select feature maps of different scales according to the rule, and finally the mask information is enlarged to the original image and the mask image is output;
step 1.4: designing a loss function;
step 2, making a data set:
firstly, obtaining original data;
secondly, marking the opening and closing state of the pressing plate:
thirdly, making a data set;
step 3, training a model: under a Tensorflow framework, identification model files with different accuracies can be obtained by adjusting the learning rate and the epoch value in the hyper-parameters.
2. The automatic detection method for the switching state of the relay protection pressing plate based on the Mask_RCNN algorithm according to claim 1, characterized in that: in step 1.4, the Mask-RCNN algorithm completes target identification, localization and mask generation; the training error is produced by the region proposal network, the classification-regression branch and the mask branch, and the errors of these 3 parts are combined when designing the loss function, giving:
L = L_{Rcls} + L_{Rreg} + L_{Fcls} + L_{Freg} + L_{mask}  (2)
wherein: L_{Rcls} denotes the region proposal network (RPN) classification loss; L_{Rreg} denotes the region proposal network regression loss; L_{Fcls} denotes the target classification branch loss; L_{Freg} denotes the target regression branch loss; L_{mask} denotes the target mask branch loss;
the classification loss function consists of the region proposal network classification loss and the target classification loss;
the regression loss consists of the region proposal network regression loss and the target regression loss;
1) Classification loss:
L_{cls} = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*)  (3)
The anchors generated by the region proposal network are divided only into foreground and background, with the label of the foreground being 1 and the label of the background being 0; in the RPN, L_{cls}(p_i, p_i^*) is the logarithmic loss over the two classes (target and non-target); in the target classification branch, L_{cls}(p_i, p_i^*) is the cross-entropy loss over multiple classes;
2) Regression loss:
the L_1 norm is selected as the regression loss function; to ensure smoothness of the loss function, the L_1 norm is transformed into a piecewise function, with the following expression:
smooth_{L_1}(x) = \begin{cases} 0.5x^2, & |x| < 1 \\ |x| - 0.5, & \text{otherwise} \end{cases}  (4)
the regression loss function is shown in equation (5):
L_{reg} = \frac{1}{N_{reg}} \sum_i p_i^* \, smooth_{L_1}(t_i - t_i^*)  (5)
wherein: t_i = \{t_x, t_y, t_w, t_h\} denotes the position of the predicted box; t_i^* = \{t_x^*, t_y^*, t_w^*, t_h^*\} denotes the position of the actual box; L_{reg}(t_i, t_i^*) = smooth_{L_1}(t_i - t_i^*);
(3) Mask branch loss function:
the target classification-regression branch yields k target detection results, and the target segmentation outputs k matrices of size a × a, in which a pixel is set to 1 if it belongs to the target and to 0 otherwise; a logarithmic loss function is selected to measure the target segmentation result; then, for an a × a candidate box, the loss function of equation (6) is as follows:
L_{mask} = -\frac{1}{a^2} \sum_{1 \le i,j \le a} \left[ y_{ij} \log \hat{y}_{ij}^k + (1 - y_{ij}) \log(1 - \hat{y}_{ij}^k) \right]  (6)
wherein: y_{ij} is the actual label value of pixel (i, j) within the a × a candidate box;
\hat{y}_{ij}^k is the predicted label value of the k-th category for pixel (i, j) within the a × a candidate box.
3. The automatic detection method for the switching state of the relay protection pressing plate based on the Mask_RCNN algorithm according to claim 1, characterized in that: in step 2, the method for making the data set comprises:
firstly, obtaining the raw data: collecting images of the protection device pressing plates from the protection panels in the plant, removing blurred pictures and screening out the images that meet the requirements;
secondly, labeling the on-off state of the pressing plates: labeling the different on-off states of the pressing plates manually, one by one, with the labelme software; ON indicates that the pressing plate is switched on (closed) and OFF indicates that it is switched off (open); after labelme annotation, json files corresponding one-to-one with the pictures are obtained, each containing the vertex coordinates and the category of the bounding boxes drawn manually in the picture;
thirdly, making the data set: using the picture number, category and coordinate information of the original pictures contained in the json files; then converting them into labelme_json folders with the dataset conversion code bundled with labelme; then classifying and sorting the files to obtain the four necessary dataset folders cv2_mask, json, labelme_json and pic; finally obtaining the training data volume table of each category by sorting; using these data, the pressing plate on-off state model is trained.
4. The automatic detection method for the switching state of the relay protection pressing plate based on the Mask_RCNN algorithm according to claim 1, characterized in that: in step 3, one epoch means that all data are sent through the network to complete one forward computation and one back-propagation pass; during training the learning rate is set to 0.001, 0.0009, 0.0008, 0.0007 and 0.0005 respectively, and the epoch count is set to 20, 30, 40, 50 and 60 respectively; model training is carried out with these parameters, and the best-performing model is finally selected, with a learning rate of 0.001 and the epoch set to 50; the main parameter settings are: learning rate 0.001, epoch 50, batch_size 1, anchor scales (128, 256, 512) and aspect_ratio (1:3, 1:1, 3:1); the accuracy of the model is recorded on the test set after training is finished.
CN202211079374.8A 2022-09-05 2022-09-05 Automatic detection method for on-off state of relay protection pressing plate based on Mask _ RCNN algorithm Withdrawn CN115393691A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211079374.8A CN115393691A (en) 2022-09-05 2022-09-05 Automatic detection method for on-off state of relay protection pressing plate based on Mask _ RCNN algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211079374.8A CN115393691A (en) 2022-09-05 2022-09-05 Automatic detection method for on-off state of relay protection pressing plate based on Mask _ RCNN algorithm

Publications (1)

Publication Number Publication Date
CN115393691A true CN115393691A (en) 2022-11-25

Family

ID=84124523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211079374.8A Withdrawn CN115393691A (en) 2022-09-05 2022-09-05 Automatic detection method for on-off state of relay protection pressing plate based on Mask _ RCNN algorithm

Country Status (1)

Country Link
CN (1) CN115393691A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109712118A (en) * 2018-12-11 2019-05-03 武汉三江中电科技有限责任公司 A kind of substation isolating-switch detection recognition method based on Mask RCNN
CN112801974A (en) * 2021-01-27 2021-05-14 南京理工大学 Embedded relay protection pressing plate on-off state identification method and device
CN112802005A (en) * 2021-02-07 2021-05-14 安徽工业大学 Automobile surface scratch detection method based on improved Mask RCNN
CN113409255A (en) * 2021-06-07 2021-09-17 同济大学 Zebra fish morphological classification method based on Mask R-CNN
CN114049621A (en) * 2021-11-10 2022-02-15 石河子大学 Cotton center identification and detection method based on Mask R-CNN

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张怡晨 (Zhang Yichen): "Research on Ship Detection Based on Convolutional Neural Networks" (基于卷积神经网络的舰船检测研究), Master's Electronic Journal (《硕士电子期刊》), pages 70-71 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115953666A (en) * 2023-03-15 2023-04-11 国网湖北省电力有限公司经济技术研究院 Transformer substation field progress identification method based on improved Mask-RCNN

Similar Documents

Publication Publication Date Title
CN110598736B (en) Power equipment infrared image fault positioning, identifying and predicting method
CN109446982B (en) AR glasses-based electric power cabinet pressing plate state identification method and system
CN110647874A (en) End-to-end blood cell identification model construction method and application
CN108711148B (en) Tire defect intelligent detection method based on deep learning
CN111401418A (en) Employee dressing specification detection method based on improved Faster r-cnn
CN111539355A (en) Photovoltaic panel foreign matter detection system and detection method based on deep neural network
CN112115807A (en) Transformer substation secondary equipment switching operation anti-error simulation method and system based on image recognition
CN115393691A (en) Automatic detection method for on-off state of relay protection pressing plate based on Mask _ RCNN algorithm
CN116681962A (en) Power equipment thermal image detection method and system based on improved YOLOv5
CN111461121A (en) Electric meter number identification method based on YO L OV3 network
CN112488213A (en) Fire picture classification method based on multi-scale feature learning network
CN112561885B (en) YOLOv 4-tiny-based gate valve opening detection method
CN118015555A (en) Knife switch state identification method based on visual detection and mask pattern direction vector
CN114387261A (en) Automatic detection method suitable for railway steel bridge bolt diseases
CN113077438B (en) Cell nucleus region extraction method and imaging method for multi-cell nucleus color image
CN113408630A (en) Transformer substation indicator lamp state identification method
CN112132088A (en) Inspection point location missing inspection identification method
CN115830302A (en) Multi-scale feature extraction and fusion power distribution network equipment positioning identification method
CN114155421A (en) Automatic iteration method of deep learning algorithm model
CN113837178A (en) Deep learning-based automatic positioning and unified segmentation method for meter of transformer substation
CN117290766B (en) Intelligent detection method for high-voltage switch equipment in outdoor environment based on feature selector
CN112036472A (en) Visual image classification method and system for power system
He et al. Multi-State Recognition Method of Substation Switchgear Based on Image Enhancement and Deep Learning
CN115018760B (en) Blood cell morphology auxiliary inspection system and method based on man-machine hybrid enhanced intelligence
CN112329863B (en) Method for identifying state of isolation switch in traction substation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20221125