CN114758288A - Power distribution network engineering safety control detection method and device - Google Patents

Power distribution network engineering safety control detection method and device

Info

Publication number
CN114758288A
CN114758288A
Authority
CN
China
Prior art keywords
module
network
feature
cbam
power distribution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210255317.4A
Other languages
Chinese (zh)
Inventor
马静
王庆杰
王栩成
赵文越
孟海磊
董啸
任敬飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North China Electric Power University
Original Assignee
North China Electric Power University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North China Electric Power University filed Critical North China Electric Power University
Priority to CN202210255317.4A priority Critical patent/CN114758288A/en
Publication of CN114758288A publication Critical patent/CN114758288A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/06Electricity, gas or water supply

Abstract

The invention relates to a power distribution network engineering safety control detection method and device, belongs to the technical field of machine image recognition, and solves the problems of slow detection speed, low accuracy and poor applicability of detection methods in the prior art. A power distribution network engineering site picture is acquired and preprocessed to obtain a sample set; a safety control detection model is constructed based on the YOLOv5 neural network model, wherein an improved CBAM (Convolutional Block Attention Module) convolution attention mechanism module is added to the main feature extraction network, and the spatial attention module in the improved CBAM convolves the input feature map along the horizontal and vertical channels to obtain the CBAM output feature map; a Bi-FPN network is adopted in the neck network to perform feature fusion on the feature maps output by the main feature extraction network; a trained safety control detection model is obtained based on the sample set; and the power distribution network engineering images acquired in real time are fed into the trained safety control detection model to obtain the target detection results in the images. Rapid detection of safety problems on power distribution network engineering sites is thereby realized.

Description

Power distribution network engineering safety control detection method and device
Technical Field
The invention relates to the technical field of machine image recognition, in particular to a power distribution network engineering safety control detection method and device.
Background
Power distribution network projects are characterized by numerous and widely scattered construction sites, high difficulty in safety-risk prevention and control, and high difficulty in on-site supervision. At present, the investment scale of power distribution network engineering is large: a provincial company averages more than 3000 projects per year, with tens of thousands of field workers. Relative to this scale, the number of engineering management personnel is small and supervision capacity is insufficient, which easily leads to non-standard field operation, frequent safety accidents, and poor completion quality; there is therefore an urgent need to strengthen the safety and quality control of power distribution network engineering through artificial intelligence means such as machine vision.
Deep learning is widely applied in the field of image recognition owing to its strong feature-extraction capability, high recognition accuracy, and good real-time performance. At present, target detection algorithms based on machine vision mainly fall into two categories: conventional detection techniques based on input-image features and edge detection, and detection techniques based on deep learning.
In conventional detection techniques, a target candidate region is selected, image sample features such as HOG features are extracted, the image features are fed into classifiers such as SVM and iterative algorithms for classification, and the results are output; the conventional detection pipeline follows 4 stages: image preprocessing, target localization, target segmentation, and target recognition. Design errors at any stage affect subsequent results, and the reliance on large numbers of manually designed features causes poor robustness, making such methods difficult to apply to power distribution network engineering detection tasks with complex scenes. Image recognition methods based on deep learning use convolutional neural network models, with more stable recognition rates and wider applicability; however, the neural network models are large, making it difficult to meet the requirements of embedded equipment on power distribution network engineering sites, and the recognition speed is slow and cannot meet real-time detection requirements.
Disclosure of Invention
In view of the above analysis, the embodiments of the present invention aim to provide a power distribution network engineering safety control detection method and device, so as to solve the problems that existing detection methods are slow in detection speed, low in accuracy, and poor in applicability, and thus struggle to meet the actual use requirements of power distribution network engineering sites.
On one hand, the embodiment of the invention provides a power distribution network engineering safety control detection method, which comprises the following steps:
acquiring a power distribution network engineering site picture, and preprocessing the picture to obtain a sample set;
constructing a safety control detection model based on the YOLOv5 neural network model, wherein an improved CBAM (Convolutional Block Attention Module) convolution attention mechanism module is added to the main feature extraction network, and the spatial attention module in the improved CBAM convolves the input feature map along the horizontal and vertical channels to obtain the CBAM output feature map; a Bi-FPN network is adopted in the neck network to perform feature fusion on the feature maps output by the main feature extraction network;
based on the sample set, obtaining a trained safety control detection model;
and transmitting the power distribution network engineering image acquired in real time into a trained safety control detection model to obtain a target detection result in the image.
Based on the further improvement of the method, the improved CBAM convolution attention mechanism module comprises a channel attention module and a spatial attention module; the feature map output by the channel attention module serves as the input feature map of the spatial attention module, the input feature map is convolved twice along the horizontal and vertical channels in the spatial attention module, and the CBAM output feature map is finally obtained by weighting.
Based on the further improvement of the method, the spatial attention module performs convolution twice on the input feature map along the horizontal and vertical channels, and finally obtains the CBAM output feature map by weighting, comprising the following steps:
encoding each channel in the horizontal direction and the vertical direction by using an average pooling kernel for the input feature map to obtain two first feature maps in the horizontal direction and the vertical direction;
respectively convolving the two first feature maps to obtain two middle feature maps in the horizontal direction and the vertical direction, and then obtaining middle feature weights in the horizontal direction and the vertical direction through a nonlinear activation function;
after the two intermediate characteristic weights are convolved again respectively, channel weights in the horizontal direction and the vertical direction are obtained through an activation function;
and multiplying the input characteristic diagram by the channel weights in the horizontal direction and the vertical direction respectively to obtain a CBAM output characteristic diagram.
Based on the further improvement of the method, the main feature extraction network is based on a CSPDarknet53 network, and a C3 module is adopted to replace an original CSP module; the C3 module divides the input feature map into two branches for processing, wherein one branch is firstly convolved through a first convolution block CBS and then is propagated with gradients through a plurality of residual error units; the other branch is directly convolved by a second convolution block CBS, and then the two branches are spliced and convolved by a third convolution block CBS to obtain a C3 output characteristic diagram;
The logical operations in the first, second and third convolution blocks CBS are the same, comprising in sequence: a convolution operation (Conv), batch normalization (BN), and a SiLU activation function.
Based on further improvement of the method, the improved CBAM convolution attention mechanism module is added to the trunk feature extraction network as follows: after the third convolution block CBS in each C3 module, the improved CBAM convolution attention mechanism module is added, forming a C3' module; the CBAM output feature map obtained by passing the C3 output feature map of a C3 module through the improved CBAM convolution attention mechanism module serves as the C3' output feature map.
Based on further improvement of the method, the trunk feature extraction network sequentially processes the input image through a Focus module, a fourth convolution block CBS, a first C3' module, a fifth convolution block CBS, a second C3' module, a sixth convolution block CBS, a third C3' module, a seventh convolution block CBS and an SPP module, extracts the features of the input image, and outputs 5 feature maps of different scales, comprising:
the feature map P1 processed by the Focus module; the feature maps P2, P3 and P4 processed by the first, second and third C3' modules, respectively; and the feature map P5 processed by the SPP module.
Based on the further improvement of the method, the Bi-FPN network is adopted to perform feature fusion on the feature graph output by the main feature extraction network, and 5 feature graphs with different scales are subjected to feature fusion of three network levels by utilizing the Bi-FPN network to obtain 3 feature graphs with different sizes.
Based on the further improvement of the method, a trained safety control detection model is obtained based on a sample set, and the method comprises the following steps:
training the safety control detection model based on the sample set, and eliminating redundant candidate boxes through a non-maximum suppression algorithm to obtain prediction boxes for the sample images;
and calculating the error between the prediction box and the target box using the loss function, updating the model parameters based on the error, and repeating training until the value of the loss function is smaller than a threshold, obtaining the trained safety control detection model.
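The redundant-box elimination step above can be sketched as greedy non-maximum suppression. This is a standard formulation, not the patent's exact implementation; the IoU threshold value and function name are assumptions:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression.
    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,).
    Returns indices of the kept boxes, highest score first."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]          # process highest-score box first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # intersection of box i with all remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # keep only boxes that overlap box i less than the threshold
        order = order[1:][iou < iou_thresh]
    return keep
```

Boxes that overlap an already-kept, higher-scoring box by more than the threshold are suppressed; the rest survive as prediction boxes.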
Based on the further improvement of the method, the loss function takes the width and the height of a prediction box as penalty terms on the basis of the CIoU loss function, and the calculation formula is as follows:
L_loss = 1 - IoU + d²/c² + αv
α = v / ((1 - IoU) + v)
v = (ω - ω_p)²/ω'² + (h - h_p)²/h'²
wherein IoU is the intersection-over-union of the prediction box and the target box, d is the distance between the center points of the prediction box and the target box, c is the diagonal length of the smallest rectangle that can simultaneously cover the prediction box and the target box, α is a weight function, and v is used to measure the consistency of width and height between the target box and the prediction box; ω is the target-box width, ω_p is the prediction-box width, ω' is the width of the smallest rectangle that can cover both the prediction box and the target box, h is the target-box height, h_p is the prediction-box height, and h' is the height of the smallest rectangle that can cover both the prediction box and the target box.
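A minimal NumPy sketch of this modified loss — CIoU's IoU and center-distance terms plus direct width/height penalties normalized by the enclosing rectangle, with α = v/((1 − IoU) + v). The exact formula reconstruction, the function name and the epsilon guard are assumptions:

```python
import numpy as np

def modified_ciou_loss(pred, target, eps=1e-9):
    """Sketch of the modified CIoU-style loss; pred, target: [x1, y1, x2, y2]."""
    px1, py1, px2, py2 = pred
    tx1, ty1, tx2, ty2 = target
    # IoU of the two boxes
    iw = max(0.0, min(px2, tx2) - max(px1, tx1))
    ih = max(0.0, min(py2, ty2) - max(py1, ty1))
    inter = iw * ih
    area_p = (px2 - px1) * (py2 - py1)
    area_t = (tx2 - tx1) * (ty2 - ty1)
    iou = inter / (area_p + area_t - inter + eps)
    # d^2: squared center distance; c^2: squared diagonal of the enclosing rectangle
    d2 = ((px1 + px2) / 2 - (tx1 + tx2) / 2) ** 2 \
       + ((py1 + py2) / 2 - (ty1 + ty2) / 2) ** 2
    cw = max(px2, tx2) - min(px1, tx1)   # enclosing-rectangle width  (omega')
    ch = max(py2, ty2) - min(py1, ty1)   # enclosing-rectangle height (h')
    c2 = cw ** 2 + ch ** 2 + eps
    # direct width/height penalty (replaces CIoU's arctan aspect-ratio term)
    v = ((tx2 - tx1) - (px2 - px1)) ** 2 / (cw ** 2 + eps) \
      + ((ty2 - ty1) - (py2 - py1)) ** 2 / (ch ** 2 + eps)
    alpha = v / ((1 - iou) + v + eps)
    return 1 - iou + d2 / c2 + alpha * v
```

For identical boxes every term vanishes and the loss is approximately 0; for disjoint boxes the loss exceeds 1.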
On the other hand, an embodiment of the present invention provides a power distribution network engineering safety control detection apparatus, including: the image acquisition and pretreatment module is used for acquiring the power distribution network engineering site pictures and obtaining a sample set after pretreatment;
the system comprises a model generation module, a detection module and a detection module, wherein the model generation module is used for constructing a safety control detection model based on a YOLOv5 neural network model, a CBAM convolution attention mechanism module is added in a trunk feature extraction network, and a space attention module in the CBAM convolves an input feature map along a horizontal channel and a vertical channel to obtain a CBAM output feature map; performing feature fusion on a feature map output by the main feature extraction network by adopting a Bi-FPN network in the neck network;
the model training module is used for obtaining a trained safety control detection model based on the sample set;
an engineering scene image detection module, configured to feed the power distribution network engineering images collected in real time into the trained safety control detection model to obtain the target detection results in the images, including: target bounding boxes and target classes.
Compared with the prior art, the invention can realize at least one of the following beneficial effects:
1. The method improves the existing YOLOv5 trunk extraction network CSPDarknet53 by adding an improved CBAM structure with an attention mechanism, so that the features cover more parts of the object to be identified, the probability of finally distinguishing the object increases, and the detection capability of the network is improved without adding other overhead.
2. The structure of the existing YOLOv5 feature extraction network is improved: the improved Bi-FPN replaces PANet, the stacking of different features is added to the feature-extraction process, the fusion of image features is enhanced, and the accuracy of network recognition is improved.
3. The existing CIoU loss function is improved, solving the problems that the height and width gradients of the prediction box in the CIoU loss function are opposite in sign, and that the additional penalty term is 0 when the aspect ratio of the prediction box equals that of the target box; the width and height of the prediction box are used directly as penalty terms, improving convergence speed.
4. For the safety problems existing on power distribution network engineering sites, an improved neural network model is provided that meets the real-time requirements of field use and detects safety problems accurately and quickly, identifying issues such as safety-helmet wearing, drop-out fuse disconnection, and equipment bottom-plate plugging; this addresses the problems of too few engineering management personnel and insufficient supervision in existing power distribution network field engineering management, and improves the reliability of on-site safety management and control.
In the invention, the technical schemes can be combined with each other to realize more preferable combination schemes. Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, wherein like reference numerals are used to designate like parts throughout.
Fig. 1 is a flowchart of a power distribution network engineering safety control detection method in embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of an improved CBAM structure in embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of a modified CSPDarknet53 in accordance with embodiment 1 of the present invention;
FIG. 4 is a schematic diagram of improved Bi-FPN network feature fusion in embodiment 1 of the present invention;
FIG. 5 is a graph showing the variation of the loss function, precision, recall and mean precision during the training process in example 3 of the present invention;
FIG. 6 is a P-R graph in example 3 of the present invention.
Detailed Description
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate preferred embodiments of the invention and together with the description, serve to explain the principles of the invention and not to limit the scope of the invention.
Embodiment 1
the invention discloses a power distribution network engineering safety control detection method, which comprises the following steps as shown in figure 1:
s11: acquiring a power distribution network engineering field picture, and preprocessing the picture to obtain a sample set;
It should be noted that the collected power distribution network engineering site pictures are classified according to the different categories of safety, quality and process problems; image-processing techniques are used to flip and rotate the samples and add noise to them, expanding the sample set; the objects in each sample are then annotated with labelimg, the annotations are saved in txt format, and the results are placed in the sample-set folder.
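The sample-expansion step can be sketched as follows; the specific transforms (horizontal/vertical flips, 90-degree rotation) and the noise level are assumptions, since the document does not specify parameters, and in practice the bounding-box annotations would need the corresponding transforms applied as well:

```python
import numpy as np

def augment(image, rng=None):
    """Expand one sample into several augmented copies by flipping,
    rotating, and adding Gaussian noise. image: (H, W, 3) uint8 array."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = [
        np.fliplr(image),        # horizontal flip
        np.flipud(image),        # vertical flip
        np.rot90(image, k=1),    # 90-degree rotation (swaps H and W)
    ]
    # additive Gaussian noise, clipped back to the valid pixel range
    noisy = image.astype(np.float32) + rng.normal(0, 10, image.shape)
    out.append(np.clip(noisy, 0, 255).astype(np.uint8))
    return out
```

Each source image thus contributes several extra training samples before annotation.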
S12: constructing a safety control detection model based on a YOLOv5 neural network model, wherein an improved CBAM (cone beam amplitude modulation) convolution attention mechanism module is added in a main feature extraction network, and an input feature map is convolved along a horizontal channel and a vertical channel in a space attention module in the improved CBAM to obtain a CBAM output feature map; performing feature fusion on a feature map output by the main feature extraction network by adopting a Bi-FPN network in the neck network;
it should be noted that the YOLOv5 neural network model is divided into three parts, the first part is a trunk feature extraction network for extracting features of an input image, the second part is a neck network for enhancing the features extracted by the trunk feature extraction network, and the third part is a head network for predicting according to the enhanced features to obtain a detection result.
S121: and constructing an improved main feature extraction network based on a YOLOv5 neural network model.
Specifically, the main feature extraction network is based on the CSPDarknet53 network, and a C3 module is adopted to replace the original CSP module; the C3 module splits the input feature map into two branches for processing: one branch is first convolved by a first convolution block CBS and then propagates gradients through several residual units; the other branch is directly convolved by a second convolution block CBS; the two branches are then concatenated and convolved by a third convolution block CBS to obtain the C3 output feature map. The logical operations in the first, second and third convolution blocks CBS are the same, comprising in sequence: a convolution operation (Conv), batch normalization (BN), and a SiLU activation function.
Compared with the CSP module, the C3 module removes one 1×1 convolution operation as well as a BN layer and an activation function, simplifying the network structure and reducing the amount of computation and the inference time of the model while keeping the model's performance undiminished.
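A PyTorch sketch of the two-branch C3 structure described above; the layer names (cv1, cv2, cv3), the half-channel split, and the residual-unit kernel sizes are assumptions based on common YOLOv5 practice, not taken from the patent:

```python
import torch
import torch.nn as nn

class CBS(nn.Module):
    """Conv -> BatchNorm -> SiLU: the basic convolution block described above."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()
    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class Bottleneck(nn.Module):
    """Residual unit: two CBS blocks with a shortcut connection."""
    def __init__(self, c):
        super().__init__()
        self.cv1 = CBS(c, c, 1)
        self.cv2 = CBS(c, c, 3)
    def forward(self, x):
        return x + self.cv2(self.cv1(x))

class C3(nn.Module):
    """Two branches: one through n residual units, one direct;
    concatenated and fused by a third CBS block."""
    def __init__(self, c_in, c_out, n=1):
        super().__init__()
        c_h = c_out // 2
        self.cv1 = CBS(c_in, c_h, 1)   # first CBS: residual branch
        self.cv2 = CBS(c_in, c_h, 1)   # second CBS: shortcut branch
        self.m = nn.Sequential(*(Bottleneck(c_h) for _ in range(n)))
        self.cv3 = CBS(2 * c_h, c_out, 1)   # third CBS after concatenation
    def forward(self, x):
        return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))
```

The spatial resolution is preserved throughout; only the channel count changes.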
Further, an improved CBAM convolution attention mechanism module is added to the C3 module, as shown in fig. 2. The CBAM (Convolutional Block Attention Module) convolution attention mechanism module comprises a channel attention module and a spatial attention module; it computes attention weights sequentially along two different dimensions, channel and space, and then multiplies the obtained attention weights by the input feature map, thereby performing adaptive feature refinement.
The channel attention module learns the weights of different channels so as to generate channel-domain attention; it changes the weights of the original channels in the feature map and obtains a notable performance improvement at a small additional computational cost. In the channel attention module shown in fig. 2, the input feature map X has dimensions C × H × W, where C is the number of channels and H, W are the height and width of the feature map. Two C × 1 × 1 weight vectors are obtained through global max pooling and global average pooling, fed into a two-layer neural network (MLP), added, and activated by a Sigmoid function to obtain the channel weights; the obtained channel weights are multiplied channel-wise with the original feature map X to obtain the feature map X' output by the channel attention module.
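The channel attention computation can be sketched as follows in PyTorch; using 1×1 convolutions to stand in for the shared two-layer MLP and a reduction ratio of 16 are assumptions:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: global max + average pooling, a shared two-layer
    MLP, element-wise addition, Sigmoid, then channel-wise scaling of X."""
    def __init__(self, c, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(c, c // reduction, 1, bias=False),  # 1x1 convs act as the MLP
            nn.ReLU(),
            nn.Conv2d(c // reduction, c, 1, bias=False),
        )
    def forward(self, x):                                  # x: (B, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))    # global max pooling
        w = torch.sigmoid(avg + mx)                        # channel weights, (B, C, 1, 1)
        return x * w                                       # X' = X scaled per channel
```

The output X' has the same C × H × W shape as X, with each channel rescaled by its learned weight.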
The spatial attention module can highlight the information-rich portions of the feature map, complementary to channel attention. In the prior art, the YOLOv5 neural network model easily loses the feature information of small targets during convolution and downsampling, so its detection performance on small targets is not ideal.
In the improved CBAM spatial attention module, the feature map X' output by the channel attention module serves as the input feature map of the spatial attention module, and the input feature map X' is convolved twice along the horizontal and vertical channels. The first convolution reduces the dimension of the vectors, improving the extraction of effective information and reducing computation. The second convolution converts the number of channels of the intermediate feature vectors back to the same number as the input feature X'; Sigmoid activation then yields the final weights, and weighting finally produces the CBAM output feature map. Specifically, the method comprises the following steps:
① Encode each channel along the horizontal and vertical directions using average pooling kernels of sizes (H, 1) and (1, W), respectively, to obtain two first feature maps in the horizontal and vertical directions:
z_w(w) = (1/H) Σ_{0≤i<H} X'(i, w)   formula (1)
z_h(h) = (1/W) Σ_{0≤j<W} X'(h, j)   formula (2)
wherein z_w(w) is the first feature map in the horizontal direction, z_h(h) is the first feature map in the vertical direction, and X'(i, w) and X'(h, j) are pixels of the feature map X'.
② Convolve the two first feature maps respectively to obtain two intermediate feature maps in the horizontal and vertical directions, then obtain intermediate feature weights in the horizontal and vertical directions through a nonlinear activation function:
f_w = δ[F_w1(z_w(w))]   formula (3)
f_h = δ[F_h1(z_h(h))]   formula (4)
wherein δ is the nonlinear activation function, F_w1 and F_h1 are the convolutions applied to the first feature maps in the horizontal and vertical directions, respectively, and f_w and f_h are the intermediate feature weights in the horizontal and vertical directions, with dimensions C/r_w × W and C/r_h × H, where r_w and r_h are the channel dimension-reduction coefficients in the horizontal and vertical directions, respectively.
③ Convolve the two intermediate feature weights f_w and f_h again, then activate with the Sigmoid function to obtain the channel weights in the horizontal and vertical directions:
g_w = σ(F_w2(f_w))   formula (5)
g_h = σ(F_h2(f_h))   formula (6)
wherein σ is the Sigmoid function, F_w2 and F_h2 are the convolutions applied to the intermediate feature weights in the horizontal and vertical directions, respectively, and g_w and g_h are the channel weights in the horizontal and vertical directions.
④ Multiply the input feature map by the channel weights in the horizontal and vertical directions to obtain the CBAM output feature map Y:
Y(i, j) = X'(i, j) × g_h(i) × g_w(j)   formula (7)
By modifying the spatial attention module in CBAM, the improved CBAM decomposes the features along the height and width directions and convolves each to generate weights; this solves the loss of position information caused by two-dimensional global pooling in the original CBAM spatial attention module, generates features sensitive to both channel and coordinate position, and improves detection accuracy.
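The four steps above can be sketched as a PyTorch module. The choice of the nonlinear activation δ (Hardswish here), the reduction ratios, and the layer names are assumptions; the structure follows formulas (1)–(7):

```python
import torch
import torch.nn as nn

class ImprovedSpatialAttention(nn.Module):
    """Direction-wise spatial attention: average-pool along each axis,
    convolve twice per direction, then weight the input feature map."""
    def __init__(self, c, rw=8, rh=8):
        super().__init__()
        self.fw1 = nn.Conv2d(c, c // rw, 1, bias=False)  # F_w1: reduce channels
        self.fh1 = nn.Conv2d(c, c // rh, 1, bias=False)  # F_h1: reduce channels
        self.act = nn.Hardswish()                        # delta: nonlinear activation
        self.fw2 = nn.Conv2d(c // rw, c, 1)              # F_w2: restore channels
        self.fh2 = nn.Conv2d(c // rh, c, 1)              # F_h2: restore channels
    def forward(self, x):                                # x = X': (B, C, H, W)
        zw = x.mean(dim=2, keepdim=True)                 # (B, C, 1, W): pool over H, formula (1)
        zh = x.mean(dim=3, keepdim=True)                 # (B, C, H, 1): pool over W, formula (2)
        fw = self.act(self.fw1(zw))                      # formula (3)
        fh = self.act(self.fh1(zh))                      # formula (4)
        gw = torch.sigmoid(self.fw2(fw))                 # formula (5): horizontal weights
        gh = torch.sigmoid(self.fh2(fh))                 # formula (6): vertical weights
        return x * gh * gw                               # formula (7), via broadcasting
```

Broadcasting the (B, C, H, 1) and (B, C, 1, W) weight tensors over the input reproduces the per-position product Y(i, j) = X'(i, j) × g_h(i) × g_w(j).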
After the improved CBAM convolution attention mechanism module is built, it is added after the third convolution block CBS in each C3 module, forming a C3' module; the CBAM output feature map obtained by passing the C3 output feature map of a C3 module through the improved CBAM convolution attention mechanism module serves as the C3' output feature map.
The resulting improved CSPDarknet53 network is shown in fig. 3, where C3_1_n' represents an improved C3' module with n residual units and CBAM' represents the improved CBAM convolution attention mechanism module.
In fig. 3, the trunk feature extraction network sequentially processes the input image through a Focus module, a fourth convolution block CBS, a first C3' module, a fifth convolution block CBS, a second C3' module, a sixth convolution block CBS, a third C3' module, a seventh convolution block CBS and an SPP module, extracts the features of the input image, and outputs 5 feature maps of different scales, comprising:
the feature map P1 processed by the Focus module; the feature maps P2, P3 and P4 processed by the first, second and third C3' modules, respectively; and the feature map P5 processed by the SPP module.
Illustratively, the 5 different scales are: 19 × 19, 38 × 38, 76 × 76, 152 × 152, and 304 × 304.
Compared with the prior art, the embodiment improves the existing trunk extraction network CSPDarknet53 of YOLOv5, adds an improved CBAM structure with an attention mechanism, enables features to cover more parts of an object to be identified, increases the probability of finally distinguishing the object, and improves the detection capability of the network without increasing other expenses.
S122: and performing feature fusion on the feature graph output by the main feature extraction network by adopting an improved Bi-FPN network in the neck network.
Specifically, the feature maps of 5 different scales obtained in step S121 are subjected to feature fusion of three network levels by using a Bi-FPN network, so as to obtain feature maps of 3 different sizes, as shown in fig. 4.
1) In the first and second network levels of the Bi-FPN network, where k denotes the network level, k = 1, 2, P_t = P_t^{1,in}, P_t^{1,out} = P_t^{2,in}, t = 1, 2, 3, 4, 5, the following feature fusions are performed:
① Upsample P_5^{k,in} and stack it with P_4^{k,in} to obtain the intermediate feature P_4^{k,td}:
P_4^{k,td} = conv[(ω_{k4}·P_4^{k,in} + ω_{k5}·revise(P_5^{k,in})) / (ω_{k4} + ω_{k5} + ε)]   formula (8)
wherein conv() denotes the convolution operation, revise() denotes the feature-map resolution matching operation after up/down sampling, ω_{k4} is the weight with which the 4th-level input feature is fused into the 4th-level intermediate feature in the k-th network level, and ω_{k5} is the weight with which the 5th-level input feature is fused into the 4th-level intermediate feature in the k-th network level. Preferably, to avoid numerical instability, ε is taken as 0.00001.
② Upsample P_{t+1}^{k,td} and stack it with P_t^{k,in} to obtain the intermediate feature P_t^{k,td}:
P_t^{k,td} = conv[(ω_{kt}·P_t^{k,in} + ω_{k(t+1)}·revise(P_{t+1}^{k,td})) / (ω_{kt} + ω_{k(t+1)} + ε)]   formula (9)
wherein ω_{kt} is the weight with which the t-th-level input feature is fused into the t-th-level intermediate feature in the k-th network level, and ω_{k(t+1)} is the weight with which the (t+1)-th-level intermediate feature is fused into the t-th-level intermediate feature in the k-th network level.
③ Downsample P_1^{k,in} and stack it with P_2^{k,td} to obtain the output feature P_1^{k,out}:
P_1^{k,out} = conv[(ω'_{k1}·revise(P_1^{k,in}) + ω'_{k2}·P_2^{k,td}) / (ω'_{k1} + ω'_{k2} + ε)]   formula (10)
wherein ω'_{k1} is the weight with which the 1st-level input feature is fused into the 1st-level output feature in the k-th network level, and ω'_{k2} is the weight with which the 2nd-level intermediate feature is fused into the 1st-level output feature in the k-th network level.
④ Downsample P_{t-1}^{k,out} and stack it with P_t^{k,in} and P_t^{k,td} to obtain the output feature P_t^{k,out}:
P_t^{k,out} = conv[(ω'_{kt}·P_t^{k,in} + ω''_{kt}·P_t^{k,td} + ω'_{k(t-1)}·revise(P_{t-1}^{k,out})) / (ω'_{kt} + ω''_{kt} + ω'_{k(t-1)} + ε)]   formula (11)
wherein ω'_{kt} is the weight with which the t-th-level input feature is fused into the t-th-level output feature in the k-th network level, ω''_{kt} is the weight with which the t-th-level intermediate feature is fused into the t-th-level output feature in the k-th network level, and ω'_{k(t-1)} is the weight with which the (t-1)-th-level output feature is fused into the t-th-level output feature in the k-th network level.
⑤ Down-sample P_4^{k,out} and stack it with P_5^{k,in} to obtain the output feature P_5^{k,out}, with the formula:

P_5^{k,out} = conv( (ω'_{k5}·P_5^{k,in} + ω'_{k4}·resize(P_4^{k,out})) / (ω'_{k5} + ω'_{k4} + ε) )

where ω'_{k5} is the weight with which the level-5 input feature is fused into the level-5 output feature at the k-th network level, and ω'_{k4} is the weight with which the level-4 output feature is fused into the level-5 output feature at the k-th network level.
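The weighted fusion used in each of the steps above is the fast normalized fusion rule: every weight is kept non-negative and divided by the sum of all weights plus ε. A minimal NumPy sketch (the function name, toy shapes and weight values are illustrative; in the real network the weights are learned):

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-5):
    """Fuse equally-sized feature maps with non-negative weights.

    Each weight is divided by the sum of all weights plus eps, so the
    fused result is a convex combination of the inputs and the weights
    are normalized into [0, 1].
    """
    w = np.maximum(np.asarray(weights, dtype=np.float64), 0.0)  # keep weights non-negative
    norm = w / (w.sum() + eps)
    return sum(n * f for n, f in zip(norm, features))

# Example: fuse a level-4 input feature with an upsampled level-5 feature
p4_in = np.ones((8, 8))
p5_up = np.full((8, 8), 2.0)   # stands in for P5 after resize() to P4's resolution
p4_td = fast_normalized_fusion([p4_in, p5_up], weights=[1.0, 1.0])
```

With equal weights the result is simply the average of the two inputs (up to the tiny ε correction), which matches the intuition that fusion weights act as a soft attention over the incoming feature maps.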
2) In the third network level of the Bi-FPN network, with P_t^{2,out} = P_t^{3,in} for t = 1, 2, 3, 4, 5, the following feature fusion is performed:
① Upsample P_5^{3,in} and stack it with P_4^{3,in} to obtain the intermediate feature P_4^{3,td}, with the formula:

P_4^{3,td} = conv( (ω_{34}·P_4^{3,in} + ω_{35}·resize(P_5^{3,in})) / (ω_{34} + ω_{35} + ε) )

where ω_{34} is the weight with which the level-4 input feature is fused into the level-4 intermediate feature at the third network level, and ω_{35} is the weight with which the level-5 input feature is fused into the level-4 intermediate feature at the third network level.
② Upsample P_{t+1}^{3,td} and stack it with P_t^{3,in} to obtain the intermediate feature P_t^{3,td}, with the formula:

P_t^{3,td} = conv( (ω_{3t}·P_t^{3,in} + ω_{3(t+1)}·resize(P_{t+1}^{3,td})) / (ω_{3t} + ω_{3(t+1)} + ε) )

where ω_{3t} is the weight with which the level-t input feature is fused into the level-t intermediate feature at the third network level, and ω_{3(t+1)} is the weight with which the level-(t+1) intermediate feature is fused into the level-t intermediate feature at the third network level.
③ Down-sample P_1^{3,in} and stack it with P_2^{3,td} and P_2^{3,in} to obtain the output feature P_2^{out}, with the formula:

P_2^{out} = conv( (ω'_{32}·P_2^{3,in} + ω''_{32}·P_2^{3,td} + ω'_{31}·resize(P_1^{3,in})) / (ω'_{32} + ω''_{32} + ω'_{31} + ε) )

where ω'_{31} is the weight with which the level-1 input feature is fused into the level-2 output feature at the third network level, ω''_{32} is the weight with which the level-2 intermediate feature is fused into the level-2 output feature, and ω'_{32} is the weight with which the level-2 input feature is fused into the level-2 output feature.
④ Down-sample P_{t-1}^{3,out} and stack it with P_t^{3,in} and P_t^{3,td} to obtain the output feature P_t^{out}, with the formula:

P_t^{out} = conv( (ω'_{3t}·P_t^{3,in} + ω''_{3t}·P_t^{3,td} + ω'_{3(t-1)}·resize(P_{t-1}^{3,out})) / (ω'_{3t} + ω''_{3t} + ω'_{3(t-1)} + ε) )

where ω'_{3t} is the weight with which the level-t input feature is fused into the level-t output feature at the third network level, ω''_{3t} is the weight with which the level-t intermediate feature is fused into the level-t output feature, and ω'_{3(t-1)} is the weight with which the level-(t-1) output feature is fused into the level-t output feature.
Illustratively, the 3 different sizes are 38×38, 76×76 and 152×152.
Compared with the prior art, this embodiment replaces the PANet network in the YOLOv5 model with the improved Bi-FPN. The Bi-FPN uses fast normalized fusion: each weight is divided by the sum of all weights, which normalizes the weights into the range [0, 1] and improves computation speed. The improved Bi-FPN network deletes nodes that have only one input; such a node performs no feature fusion and contributes little, so removing it simplifies the network. An extra edge is added between each original input node and its output node, stacking additional features at little extra cost, strengthening the fusion of image features, and improving the accuracy of network recognition. Compared with the existing three-scale feature fusion network, the improved Bi-FPN simultaneously fuses features of five scales extracted by the backbone network, increasing the fusion of large-scale features and improving the detection of large-scale targets.
S123: inputting the feature maps of 3 different sizes output by the improved Bi-FPN feature fusion network into the prediction layer of the head network, eliminating redundant target frames through a non-maximum suppression algorithm, and obtaining the target prediction frames and target types in the original image.
Specifically, the prediction results of the prediction layer are sorted by score, and redundant target frames are eliminated with the DIoU-NMS non-maximum suppression algorithm. The algorithm considers both the intersection over union and the distance between the centre points of the prediction frame and the target frame, which improves the detection of closely spaced similar targets. The algorithm formula is:

s_i = { s_i,  if IoU − d²/c² < ε
      { 0,    if IoU − d²/c² ≥ ε

where s_i is the score of the i-th prediction frame, ε is the suppression threshold, IoU is the intersection over union of the prediction frame and the target frame, d is the distance between the centre points of the prediction frame and the target frame, and c is the diagonal length of the smallest rectangle that can cover both the prediction frame and the target frame.
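The suppression rule above can be sketched as a NumPy routine; the function name, box layout ([x1, y1, x2, y2]) and sample boxes are illustrative, not taken from the patent:

```python
import numpy as np

def diou_nms(boxes, scores, iou_thresh=0.5):
    """DIoU-NMS sketch: suppress a box only when its IoU with the kept box,
    minus the squared normalized centre distance d^2/c^2, exceeds the threshold.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,). Returns kept indices.
    """
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection over union with the remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Squared centre distance d^2 over squared enclosing diagonal c^2
        cx_i, cy_i = (boxes[i, 0] + boxes[i, 2]) / 2, (boxes[i, 1] + boxes[i, 3]) / 2
        cx_r, cy_r = (boxes[rest, 0] + boxes[rest, 2]) / 2, (boxes[rest, 1] + boxes[rest, 3]) / 2
        d2 = (cx_i - cx_r) ** 2 + (cy_i - cy_r) ** 2
        ex1 = np.minimum(boxes[i, 0], boxes[rest, 0])
        ey1 = np.minimum(boxes[i, 1], boxes[rest, 1])
        ex2 = np.maximum(boxes[i, 2], boxes[rest, 2])
        ey2 = np.maximum(boxes[i, 3], boxes[rest, 3])
        c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
        diou = iou - d2 / np.maximum(c2, 1e-9)
        order = rest[diou <= iou_thresh]
    return keep

# Example: two heavily overlapping boxes and one distant box
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
kept = diou_nms(boxes, scores)
```

Because the centre-distance term is subtracted from the IoU, two boxes whose centres are far apart are less likely to suppress each other than under plain NMS, which is why nearby similar targets survive.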
S13: based on the sample set, obtaining a trained safety control detection model;
It should be noted that when training the constructed safety control detection model, the loss function is used to calculate the error between the prediction frame and the target frame, the model parameters are updated based on this error, and training is repeated until the value of the loss function falls below a threshold, at which point the network reaches optimal performance and the trained safety control detection model is obtained.
It should be noted that with the existing CIoU loss function, the gradients of the prediction box width and height are opposite in sign, and the additional penalty term becomes 0 whenever the aspect ratio of the prediction box equals that of the target box. Therefore, the loss function of this embodiment directly takes the width and height of the prediction box as penalty terms on the basis of the CIoU loss function, which increases the convergence speed. The calculation formulas are:

L = 1 − IoU + d²/c² + α·v + (ω − ω_p)²/(ω′)² + (h − h_p)²/(h′)²

v = (4/π²)·(arctan(ω/h) − arctan(ω_p/h_p))²

α = v / ((1 − IoU) + v)

where IoU is the intersection over union of the prediction frame and the target frame, d is the distance between the centre points of the prediction frame and the target frame, c is the diagonal length of the smallest rectangle that can cover both the prediction frame and the target frame, α is a weight function, and v measures the consistency of the aspect ratio between the target frame and the prediction frame; ω is the target frame width, ω_p is the prediction frame width, ω′ is the width of the smallest rectangle that can cover both frames, h is the target frame height, h_p is the prediction frame height, and h′ is the height of the smallest rectangle that can cover both frames.
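The modified loss can be sketched directly from the symbol definitions above. This is a reconstruction under the stated assumptions (standard CIoU terms plus the direct width/height penalties normalized by the enclosing rectangle); the function name and box format are illustrative:

```python
import math

def improved_ciou_loss(pred, target):
    """Sketch of the modified CIoU loss for boxes given as (x1, y1, x2, y2).

    Combines the CIoU terms (IoU, centre distance, aspect-ratio term alpha*v)
    with direct width/height penalties (w - wp)^2 / w'^2 and (h - hp)^2 / h'^2.
    """
    px1, py1, px2, py2 = pred
    tx1, ty1, tx2, ty2 = target
    wp, hp = px2 - px1, py2 - py1          # prediction width / height
    w, h = tx2 - tx1, ty2 - ty1            # target width / height
    # IoU
    ix1, iy1 = max(px1, tx1), max(py1, ty1)
    ix2, iy2 = min(px2, tx2), min(py2, ty2)
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    iou = inter / (wp * hp + w * h - inter)
    # Centre distance d^2 and enclosing-rectangle diagonal c^2
    d2 = ((px1 + px2 - tx1 - tx2) / 2) ** 2 + ((py1 + py2 - ty1 - ty2) / 2) ** 2
    wc = max(px2, tx2) - min(px1, tx1)     # enclosing width  (omega')
    hc = max(py2, ty2) - min(py1, ty1)     # enclosing height (h')
    c2 = wc ** 2 + hc ** 2
    # Aspect-ratio consistency term v and weight alpha, as in CIoU
    v = (4 / math.pi ** 2) * (math.atan(w / h) - math.atan(wp / hp)) ** 2
    alpha = v / ((1 - iou) + v + 1e-9)
    # Direct width/height penalties added on top of the CIoU terms
    return 1 - iou + d2 / c2 + alpha * v + (w - wp) ** 2 / wc ** 2 + (h - hp) ** 2 / hc ** 2
```

For identical boxes every term vanishes and the loss is 0; any mismatch in position or size makes it strictly positive, and the width/height terms keep a gradient even when the aspect ratios already agree.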
It should be noted that, during training, completion may also be determined by reaching a maximum number of training iterations.
S14: transmitting the power distribution network engineering image acquired in real time into the trained safety control detection model to obtain the target detection result in the image.
It should be noted that the target detection result includes a target bounding box and a target category. In implementation, the bounding box with the highest confidence for a target category is selected as the target bounding box; preferably, different target categories are distinguished by bounding boxes of different colors.
Compared with the prior art, the power distribution network engineering safety control detection method provided by this embodiment improves the existing YOLOv5 backbone extraction network CSPDarknet53 by adding an improved CBAM structure with an attention mechanism, so that the extracted features cover more parts of the object to be identified, increasing the probability of finally distinguishing the object and improving the detection capability of the network without other overhead. The improved Bi-FPN replaces PANet, adding the stacking of different features during feature extraction, enhancing the fusion of image features, and improving the accuracy of network recognition. The existing CIoU loss function is improved: the problems that the width and height gradients of the prediction frame are opposite in sign and that the penalty term is 0 when the aspect ratio of the prediction frame equals that of the target frame are solved by directly taking the width and height of the prediction frame as penalty terms, increasing the convergence speed. The method meets the real-time requirements of field use and completes the detection of safety problems accurately and quickly, identifying issues such as the wearing of safety helmets, the disconnection of drop-out fuses and the plugging of equipment bottom plates. It thereby addresses the shortage of engineering management personnel and insufficient supervision in existing power distribution network field engineering management and improves the reliability of on-site safety management and control.
Embodiment 2
Specific embodiment 2 of the invention discloses a power distribution network engineering safety control detection device that implements the safety control detection method of embodiment 1. For the concrete implementation of each module, refer to the corresponding description in embodiment 1. The device comprises:
the image acquisition and preprocessing module is used for acquiring the project site pictures of the power distribution network and acquiring a sample set after preprocessing;
the system comprises a model generation module, a detection module and a detection module, wherein the model generation module is used for constructing a safety control detection model based on a YOLOv5 neural network model, a CBAM convolution attention mechanism module is added in a trunk feature extraction network, and a space attention module in the CBAM convolves an input feature map along a horizontal channel and a vertical channel to obtain a CBAM output feature map; performing feature fusion on a feature map output by the main feature extraction network by adopting a Bi-FPN network in the neck network;
the model training module is used for obtaining a trained safety control detection model based on the sample set;
and the engineering field image detection module is used for transmitting the power distribution network engineering image acquired in real time into the trained safety control detection model to obtain a target detection result in the image.
It should be noted that when the safety control detection model is constructed in the model generation module, based on the YOLOv5 neural network model, an improved CSPDarknet53 is adopted in the trunk feature extraction network to extract features from the input image, in which C3 modules replace the original CSP modules and an improved CBAM convolution attention mechanism module is added after the last convolution block CBS in each C3 module; an improved Bi-FPN network is adopted in the neck network to enhance the features extracted by the trunk feature extraction network; DIoU-NMS is used as the non-maximum suppression algorithm in the head network to eliminate redundant target boxes; and during training, the improved CIoU is used as the loss function, with the width and height of the prediction box as penalty terms, improving the convergence speed.
The improved CBAM convolution attention mechanism module comprises a channel attention module and a spatial attention module; the feature map output by the channel attention module serves as the input feature map of the spatial attention module, and the spatial attention module performs convolution operations on the input feature map along the horizontal and vertical channels to obtain the CBAM output feature map, including:
the input feature map has dimensions of C multiplied by H multiplied by W, wherein C is a channel, H is height, and W is width, and each channel is coded along the horizontal direction and the vertical direction respectively by using average pooling kernels with the sizes of (H,1) and (1, W), so that two first feature maps in the horizontal direction and the vertical direction are obtained;
respectively convolving the two first feature maps to obtain two middle feature maps in the horizontal direction and the vertical direction, and then obtaining middle feature weights in the horizontal direction and the vertical direction through a nonlinear activation function;
after the two intermediate characteristic weights are convolved again respectively, channel weights in the horizontal direction and the vertical direction are obtained through Sigmoid function activation;
and multiplying the input characteristic diagram by the channel weights in the horizontal direction and the vertical direction respectively to obtain a CBAM output characteristic diagram.
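The four steps above can be sketched in NumPy. This is a toy illustration: the random channel-mixing matrices stand in for the learned 1×1 convolutions, and the function and variable names are assumptions, not from the patent:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_spatial_attention(x, rng=None):
    """Sketch of the improved spatial attention: encode each channel along the
    horizontal and vertical directions, derive per-direction weights, and
    rescale the input feature map with them."""
    if rng is None:
        rng = np.random.default_rng(0)
    c, h, w = x.shape
    # (1, W) and (H, 1) average pooling: per-channel means along each axis
    feat_h = x.mean(axis=2)                 # C x H  (pooled over width)
    feat_w = x.mean(axis=1)                 # C x W  (pooled over height)
    w1 = rng.standard_normal((c, c)) * 0.1  # stands in for the first 1x1 conv (learned)
    w2 = rng.standard_normal((c, c)) * 0.1  # stands in for the second 1x1 conv (learned)
    mid_h = np.maximum(w1 @ feat_h, 0)      # nonlinear activation (ReLU here)
    mid_w = np.maximum(w1 @ feat_w, 0)
    attn_h = sigmoid(w2 @ mid_h)            # C x H weights, one direction
    attn_w = sigmoid(w2 @ mid_w)            # C x W weights, the other direction
    # Multiply the input by both direction weights (broadcast over H and W)
    return x * attn_h[:, :, None] * attn_w[:, None, :]

out = coordinate_spatial_attention(np.ones((4, 6, 5)))
```

Because the two attention maps are functions of row-wise and column-wise statistics, each output position is rescaled according to where it sits in the feature map, which is what lets the attention cover more of the object than a single global pooling would.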
It should be noted that since the relevant parts of the power distribution network engineering safety control detection device and the detection method of this embodiment correspond to each other, the repeated description is omitted here. Because the device embodiment follows the same principle as the method embodiment, it also achieves the corresponding technical effects.
Embodiment 3
In specific embodiment 3 of the present invention, the safety control detection apparatus of embodiment 2 is built based on the safety control detection method of embodiment 1, and the operation parameters of the training process are set as follows: the batch size is 32, the initial learning rate is 0.001, and the total number of iterations is 500.
In consideration of the applicability of the invention, this embodiment selects three engineering-site safety problems: whether safety helmets are worn correctly, whether drop-out fuses are disconnected, and whether equipment bottom plates are plugged.
In this embodiment, the Ubuntu 18.04 operating system is used, the TensorFlow framework is selected to construct the safety control detection model, and a GeForce RTX 2070 graphics card is used for computation. The specific experimental configuration is shown in Table 1.
Table 1 Experimental environment configuration

Parameter                 Configuration
CPU                       Intel(R) Core(TM) i5-10400F CPU @ 2.90 GHz
GPU                       GeForce RTX 2070
Language                  Python 3.8
Memory                    16 GB
Accelerated environment   CUDA 10.1
System environment        Ubuntu 18.04
During model training, performance is evaluated using the mean average precision (mAP), recall, the improved loss function (CIoU_loss) and precision. As can be seen from FIG. 5, recall stabilizes at about 0.93 after the model has iterated 100 times; precision exceeds 0.9 after 80 iterations; the mAP reaches 0.97 at 100 iterations; and the improved loss function stabilizes at approximately 0.1 by 500 iterations.
When different thresholds are set, multiple sets of precision-recall values are obtained, and a P-R curve can be drawn with precision and recall as coordinates, as shown in FIG. 6.
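Each point on such a P-R curve comes from the precision and recall at one confidence threshold. A toy sketch of that computation (the detection counts below are made up for illustration, not taken from the experiments):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN).

    tp/fp/fn are the true-positive, false-positive and false-negative
    detection counts at one confidence threshold.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts at a single threshold
p, r = precision_recall(tp=93, fp=7, fn=7)
```

Sweeping the threshold from high to low trades precision for recall, tracing out the curve whose area relates to the average precision reported as mAP.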
The convolutional neural network model is optimized based on the loss function, iterating continuously through gradient descent and back-propagation to update the network parameters until the network reaches optimal performance, yielding the trained safety control detection model.
Taking the detection of whether a safety helmet is worn, whether an equipment bottom plate is plugged and whether a drop-out fuse is disconnected as examples, the power distribution network engineering field image to be detected is input into the trained safety control detection model to obtain the target detection results in the image: the target bounding box, the target class, and the class confidence.
Illustratively, for helmet wearing, targets wearing a safety helmet correctly are marked with a green bounding box, targets not wearing one correctly are marked with a red bounding box, and the class confidence is displayed on the bounding box.
In order to measure the actual effect of the algorithm, the target recognition results on actual test samples are counted; the detection accuracy is shown in Table 2. As can be seen from Table 2, the algorithm recognizes engineering field images well.
Table 2 Recognition accuracy for different sample types

Engineering safety category                     Recognition accuracy
Whether the safety helmet is worn correctly     98.87%
Whether the equipment bottom plate is plugged   98.20%
Whether the drop-out fuse is disconnected       99.52%
Those skilled in the art will appreciate that all or part of the flow of the methods of the above embodiments may be implemented by a computer program stored in a computer-readable storage medium that instructs the related hardware. The computer-readable storage medium may be a magnetic disk, an optical disc, a read-only memory or a random access memory.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.

Claims (10)

1. A power distribution network engineering safety control detection method is characterized by comprising the following steps:
acquiring a power distribution network engineering site picture, and preprocessing the picture to obtain a sample set;
constructing a safety control detection model based on a YOLOv5 neural network model, wherein an improved CBAM (Convolutional Block Attention Module) convolution attention mechanism module is added in a main feature extraction network, and an input feature map is convolved along horizontal and vertical channels in a spatial attention module in the improved CBAM to obtain a CBAM output feature map; performing feature fusion on the feature map output by the trunk feature extraction network by adopting a Bi-FPN network in a neck network;
Obtaining a trained safety control detection model based on the sample set;
and transmitting the power distribution network engineering image acquired in real time into a trained safety control detection model to obtain a target detection result in the image.
2. The power distribution network engineering safety management and control detection method according to claim 1, wherein the improved CBAM convolution attention mechanism module comprises a channel attention module and a spatial attention module; and the feature map output by the channel attention module is used as an input feature map of the space attention module, and the input feature map is convolved twice along a horizontal channel and a vertical channel in the space attention module, and finally the CBAM output feature map is obtained by weighting.
3. The power distribution network engineering safety management and control detection method according to claim 2, wherein the space attention module performs convolution on the input feature map twice along two horizontal and vertical channels, and finally obtains a CBAM output feature map by weighting, and the method comprises the following steps:
encoding each channel in the horizontal direction and the vertical direction by using an average pooling kernel to the input feature map to obtain two first feature maps in the horizontal direction and the vertical direction;
Respectively convolving the two first feature maps to obtain two middle feature maps in the horizontal direction and the vertical direction, and then obtaining middle feature weights in the horizontal direction and the vertical direction through a nonlinear activation function;
after the two intermediate characteristic weights are convolved again respectively, channel weights in the horizontal direction and the vertical direction are obtained through an activation function;
and multiplying the input characteristic diagram by the channel weights in the horizontal direction and the vertical direction respectively to obtain a CBAM output characteristic diagram.
4. The power distribution network engineering safety management and control detection method according to claim 3, wherein the trunk feature extraction network is based on a CSPDarknet53 network and replaces the original CSP modules with C3 modules; the C3 module divides the input feature map into two branches for processing, wherein one branch is first convolved through a first convolution block CBS and then propagates gradients through a plurality of residual units, while the other branch is directly convolved by a second convolution block CBS; the two branches are then spliced and convolved by a third convolution block CBS to obtain a C3 output feature map;
the logical operations in the first, second and third convolution blocks CBS are the same, comprising in sequence: a convolution operation Conv, batch normalization BN and a SiLU activation function.
5. The power distribution network engineering safety management and control detection method according to claim 4, wherein the improved CBAM convolution attention mechanism module is added to the trunk feature extraction network after the third convolution block CBS in each C3 module, forming a C3' module; the CBAM output feature map obtained after the C3 output feature map of a C3 module passes through the CBAM convolution attention mechanism module serves as the C3' output feature map.
6. The power distribution network engineering safety control detection method according to claim 5, wherein the trunk feature extraction network sequentially processes an input image through a Focus module, a fourth convolution block CBS, a first C3' module, a fifth convolution block CBS, a second C3' module, a sixth convolution block CBS, a third C3' module, a seventh convolution block CBS and an SPP module, extracts the input image features, and outputs 5 feature maps of different scales, comprising:
the feature map P1 processed by the Focus module; the feature maps P2, P3 and P4 processed by the first, second and third C3' modules respectively; and the feature map P5 processed by the SPP module.
7. The method according to claim 6, wherein the feature fusion of the feature maps output by the trunk feature extraction network is performed by using a Bi-FPN network, and the feature fusion of the 5 feature maps with different scales is performed by using a Bi-FPN network to perform feature fusion of three network levels, so as to obtain 3 feature maps with different sizes.
8. The power distribution network engineering safety control detection method according to any one of claims 1 to 7, wherein the obtaining of the trained safety control detection model based on the sample set comprises:
training a safety control detection model based on the sample set, and eliminating redundant target frames through a non-maximum suppression algorithm to obtain a prediction frame of a sample image;
and calculating errors between the prediction frame and the target frame by using the loss function, updating model parameters based on the errors, and repeating training until the value of the loss function is smaller than a threshold value to obtain a trained safety control detection model.
9. The power distribution network engineering safety control detection method according to claim 8, wherein the loss function takes the width and height of the prediction frame as penalty terms on the basis of the CIoU loss function, and the calculation formulas are:

L = 1 − IoU + d²/c² + α·v + (ω − ω_p)²/(ω′)² + (h − h_p)²/(h′)²

v = (4/π²)·(arctan(ω/h) − arctan(ω_p/h_p))²

α = v / ((1 − IoU) + v)

wherein IoU is the intersection over union of the prediction frame and the target frame, d is the distance between the centre points of the prediction frame and the target frame, c is the diagonal length of the smallest rectangle that can cover both the prediction frame and the target frame, α is a weight function, and v measures the consistency of the aspect ratio between the target frame and the prediction frame; ω is the target frame width, ω_p is the prediction frame width, ω′ is the width of the smallest rectangle that can cover both frames, h is the target frame height, h_p is the prediction frame height, and h′ is the height of the smallest rectangle that can cover both frames.
10. A power distribution network engineering safety control detection device, characterized by comprising:
the image acquisition and pretreatment module is used for acquiring the power distribution network engineering site pictures and obtaining a sample set after pretreatment;
the model generation module, used for constructing a safety control detection model based on a YOLOv5 neural network model, wherein a CBAM (Convolutional Block Attention Module) convolution attention mechanism module is added to the main feature extraction network, and an input feature map is convolved along horizontal and vertical channels in the spatial attention module in the CBAM to obtain a CBAM output feature map; a Bi-FPN network is adopted in the neck network to perform feature fusion on the feature map output by the main feature extraction network;
the model training module is used for obtaining a trained safety control detection model based on the sample set;
and the engineering field image detection module is used for transmitting the power distribution network engineering image acquired in real time into the trained safety control detection model to obtain a target detection result in the image.
CN202210255317.4A 2022-03-15 2022-03-15 Power distribution network engineering safety control detection method and device Pending CN114758288A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210255317.4A CN114758288A (en) 2022-03-15 2022-03-15 Power distribution network engineering safety control detection method and device

Publications (1)

Publication Number Publication Date
CN114758288A true CN114758288A (en) 2022-07-15

Family

ID=82327312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210255317.4A Pending CN114758288A (en) 2022-03-15 2022-03-15 Power distribution network engineering safety control detection method and device

Country Status (1)

Country Link
CN (1) CN114758288A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115410012A (en) * 2022-11-02 2022-11-29 中国民航大学 Method and system for detecting infrared small target in night airport clear airspace and application
CN115909070A (en) * 2022-11-25 2023-04-04 南通大学 Improved yolov5 network-based weed detection method
CN115909070B (en) * 2022-11-25 2023-10-17 南通大学 Weed detection method based on improved yolov5 network
CN115731533A (en) * 2022-11-29 2023-03-03 淮阴工学院 Vehicle-mounted target detection method based on improved YOLOv5
CN115731533B (en) * 2022-11-29 2024-04-05 淮阴工学院 Vehicle-mounted target detection method based on improved YOLOv5
CN116402787A (en) * 2023-04-06 2023-07-07 温州大学智能锁具研究院 Non-contact PCB defect detection method
CN116402787B (en) * 2023-04-06 2024-04-09 温州大学智能锁具研究院 Non-contact PCB defect detection method
CN116405310A (en) * 2023-04-28 2023-07-07 北京宏博知微科技有限公司 Network data security monitoring method and system
CN116405310B (en) * 2023-04-28 2024-03-15 北京宏博知微科技有限公司 Network data security monitoring method and system
CN116579616A (en) * 2023-07-10 2023-08-11 武汉纺织大学 Risk identification method based on deep learning
CN116579616B (en) * 2023-07-10 2023-09-29 武汉纺织大学 Risk identification method based on deep learning
CN117036365A (en) * 2023-10-10 2023-11-10 南京邮电大学 Third molar tooth root number identification method based on deep attention network


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination