CN115205518A - Target detection method and system based on YOLO v5s network structure - Google Patents

Target detection method and system based on YOLO v5s network structure Download PDF

Info

Publication number
CN115205518A
CN115205518A (application CN202210528222.5A)
Authority
CN
China
Prior art keywords
picture
yolo
feature
network structure
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210528222.5A
Other languages
Chinese (zh)
Inventor
张佳帅
李攀
缪华桦
张涌
宁立
许宜诚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Shenzhen Technology University
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Shenzhen Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS, Shenzhen Technology University filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202210528222.5A priority Critical patent/CN115205518A/en
Publication of CN115205518A publication Critical patent/CN115205518A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Abstract

The invention relates to the field of small-target detection, in particular to a target detection method and system based on the YOLO v5s network structure. The method applies mosaic data enhancement, adaptive anchor-box calculation and picture-size processing to the input picture, slices and pools the picture and then performs feature-map fusion; alongside the original channel-dimension concatenation of YOLO v5s, the bidirectional network of a bidirectional feature pyramid network is used for fusion, which improves the detection accuracy for small targets.

Description

Target detection method and system based on YOLO v5s network structure
Technical Field
The invention relates to the field of small target detection, in particular to a target detection method and a target detection system based on a YOLO v5s network structure.
Background
With the rapid development of the computer field, target detection, an important component of computer vision, has been widely researched and explored. Target detection technology is widely applied in fields such as medicine, traffic, security and the Internet. Small targets commonly appear in large-field-of-view pictures, long-distance imaging pictures and pictures of special target classes, and images of small targets carry little information, have a low information ratio and show indistinct features. For these reasons, against the background of the wide application of target detection, it is urgent to improve the accuracy and real-time performance of small-target detection.
Most target detection methods extract features with a convolutional neural network, and most such networks rely on the topmost high-level features. Since the inherent resolution of small targets is low and the feature map keeps shrinking after repeated downsampling, the loss of small-target detail information is severe.
Disclosure of Invention
The invention mainly solves the technical problem of providing a target detection method based on the YOLO v5s network structure, which applies mosaic data enhancement, adaptive anchor-box calculation and picture-size processing to the picture, slices and pools the picture and then performs feature-map fusion, and uses the bidirectional network of a bidirectional feature pyramid network for fusion alongside the original channel-dimension concatenation of YOLO v5s, improving the detection accuracy for small targets; a target detection system based on the YOLO v5s network structure is also provided.
In order to solve the technical problems, the invention adopts a technical scheme that: a target detection method based on a YOLO v5s network structure is provided, wherein the method comprises the following steps:
s1, collecting an input picture, performing mosaic data enhancement on the picture, performing adaptive anchor-box calculation to obtain optimal anchor-box values, and performing picture-size processing to scale the picture to a fixed size;
s2, slicing the picture, applying a convolution with 32 convolution kernels, pooling the convolutional feature maps of the picture, and splicing convolutional feature maps of different sizes together;
s3, fusing the features of different resolutions in the picture by adopting a bidirectional feature pyramid network to form multi-scale feature fusion, and merging the multi-scale feature fusion into a YOLO v5S network structure;
and S4, detecting the image with the fused features and then outputting the image.
As an improvement of the present invention, in step S3, the information of the high-resolution P2 feature is introduced into the multi-scale feature fusion.
As a further improvement of the invention, in step S4, CIOU_Loss is used as the loss function to detect and distinguish the relative position of the prediction box within each target box.
As a further improvement of the present invention, in step S1, a batch is taken from the data set, 4 pictures are randomly drawn from the batch, cropped at random positions and spliced to form a new picture, and this cycle is repeated to complete the mosaic data enhancement.
As a further improvement of the invention, in step S1, anchor frames with different initial lengths and widths are set in different data sets, a prediction frame is obtained on the basis of the initial anchor frame during data training, the prediction frame is compared with a real frame, the difference between the prediction frame and the real frame is calculated, network structure parameters are updated in a reverse updating and iterative manner, and an optimal anchor frame value is obtained by performing adaptive anchor frame calculation.
As a further improvement of the present invention, in step S1, the picture size is scaled while training in the YOLO v5S network structure.
As a further improvement of the present invention, in step S2, the 608x608x3 image is sliced to a size of 304x304x12, and a convolution with 32 kernels is then applied to obtain a 304x304x32 feature map.
As a further improvement of the present invention, in step S2, the convolutional feature maps are max-pooled, and convolutional feature maps of different sizes are spliced together.
A system for detecting a target based on a YOLO v5s network architecture, comprising:
the input module is used for acquiring an input picture, performing mosaic data enhancement processing on the picture, performing adaptive anchor frame calculation to obtain an optimal anchor frame value, and performing picture size processing and scaling to a fixed size;
the backbone network module is used for carrying out slicing operation processing on the pictures, carrying out convolution operation of 32 convolution kernels, carrying out pooling processing on the feature maps of the convolution layers of the pictures and splicing the feature maps of the convolution layers with different sizes;
the connection module is used for fusing the features of different resolutions in the picture by adopting a bidirectional feature pyramid network;
and the output module is used for outputting the picture after the features are fused.
The invention has the beneficial effects that: compared with the prior art, the method first performs mosaic data enhancement, adaptive anchor-box calculation and picture-size processing, slices and pools the picture and then performs feature-map fusion; while keeping the original channel-dimension concatenation of YOLO v5s, it also fuses features through the bidirectional network of a bidirectional feature pyramid network, improving the detection accuracy for small targets.
Drawings
FIG. 1 is a block diagram of the steps of the target detection method based on the YOLO v5s network structure of the present invention;
FIG. 2 is a schematic flow chart of step S1 according to the present invention;
FIG. 3 is a schematic diagram of a process flow of enhancing mosaic data according to the present invention;
FIG. 4 is a diagram of the focus structure of the present invention;
FIG. 5 is a diagram of the BiFPN network architecture of the present invention;
FIG. 6 is a schematic diagram of a feature fusion network according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1 to 6, a target detection method based on the YOLO v5s network structure of the present invention includes the following steps:
s1, collecting an input picture, performing mosaic data enhancement on the picture, performing adaptive anchor-box calculation to obtain optimal anchor-box values, and performing picture-size processing to scale the picture to a fixed size;
s2, slicing the picture, applying a convolution with 32 convolution kernels, pooling the convolutional feature maps of the picture, and splicing convolutional feature maps of different sizes together;
s3, fusing the features with different resolutions in the picture by adopting a bidirectional feature pyramid network to form multi-scale feature fusion, and merging the multi-scale feature fusion into a YOLO v5S network structure;
and S4, detecting the image with the fused features, and then outputting.
The method applies mosaic data enhancement, adaptive anchor-box calculation and picture-size processing to the picture, slices and pools the picture before fusing feature maps, and uses the bidirectional network of a bidirectional feature pyramid network for fusion alongside the original channel-dimension concatenation of YOLO v5s, thereby improving the detection accuracy for small targets.
The invention also provides a target detection system based on the YOLO v5s network structure, which comprises the following components:
the input module is used for acquiring an input picture, performing mosaic data enhancement processing on the picture, performing adaptive anchor frame calculation to obtain an optimal anchor frame value, and performing picture size processing and scaling to a fixed size;
the backbone network module is used for carrying out slicing operation processing on the pictures, carrying out convolution operation of 32 convolution kernels, carrying out pooling processing on the feature maps of the convolution layers of the pictures and splicing the feature maps of the convolution layers with different sizes;
the connection module is used for fusing the features with different resolutions in the picture by adopting a bidirectional feature pyramid network;
and the detection module is used for detecting and then outputting the image fused with the features.
In step S1, a batch is taken from the data set, 4 pictures are randomly drawn from it, cropped at random positions and spliced into a new picture, and this cycle is repeated to complete the mosaic data enhancement. Specifically, mosaic data enhancement splices 4 pictures by random scaling, random cropping and random arrangement, which greatly enriches the detection data set and, in particular, adds many small targets, making the network structure more robust. In the YOLOv5 family of algorithms, anchor boxes with different initial widths and heights are set for different data sets; during training, prediction boxes are produced on the basis of the initial anchor boxes and compared with the ground-truth boxes, the difference between them is computed, and the network structure parameters are updated iteratively by back-propagation, so that the optimal anchor-box values are obtained through adaptive anchor-box calculation. The picture size is scaled while training within the YOLO v5s network structure.
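The mosaic step described above can be sketched as follows in NumPy. The center-split layout, output size and corner-cropping are illustrative assumptions, not the patent's exact implementation, and label rescaling (which real YOLOv5 also performs) is omitted:

```python
import numpy as np

def mosaic(images, out_size=608, seed=None):
    """Splice 4 images into one mosaic around a random center point.

    images: list of 4 HxWx3 uint8 arrays (randomly drawn from a batch).
    Each image fills one quadrant of the canvas; here the top-left
    corner of each image is simply cropped to fit its quadrant.
    """
    rng = np.random.default_rng(seed)
    canvas = np.zeros((out_size, out_size, 3), dtype=np.uint8)
    # random center of the 4-way split, kept away from the borders
    cx = rng.integers(out_size // 4, 3 * out_size // 4)
    cy = rng.integers(out_size // 4, 3 * out_size // 4)
    regions = [(0, cy, 0, cx), (0, cy, cx, out_size),
               (cy, out_size, 0, cx), (cy, out_size, cx, out_size)]
    for img, (y0, y1, x0, x1) in zip(images, regions):
        h, w = y1 - y0, x1 - x0
        canvas[y0:y1, x0:x1] = img[:h, :w]  # crop to the quadrant size
    return canvas
```

Because every mosaic shrinks its four source pictures, objects that were medium-sized become small, which is one reason this augmentation helps small-target detection.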
In target detection algorithms, the pictures to be trained differ in size, so the original pictures must be uniformly scaled to a fixed size before being fed into the network for training. The YOLO v5s model used in the invention takes input pictures of size 608x608x3. Before entering the backbone network, the picture passes through the Focus structure specific to the YOLO v5 series, which reduces information loss during downsampling, places low demands on hardware and keeps the computation cost low.
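The uniform scaling described above is commonly implemented as a "letterbox" resize that preserves aspect ratio and pads the remainder. A rough sketch, using nearest-neighbour resampling in place of a library resize and the grey padding value 114 from the YOLOv5 reference code (both are assumptions here, not stated in the patent):

```python
import numpy as np

def letterbox(img, new_size=608, pad_value=114):
    """Scale img to fit in new_size x new_size, keeping its aspect
    ratio, and pad the short side so the network input is square."""
    h, w = img.shape[:2]
    r = new_size / max(h, w)                       # scale ratio
    nh, nw = int(round(h * r)), int(round(w * r))  # unpadded size
    # nearest-neighbour resize via index maps (stand-in for cv2.resize)
    ys = (np.arange(nh) / r).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / r).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    out = np.full((new_size, new_size, 3), pad_value, dtype=img.dtype)
    top, left = (new_size - nh) // 2, (new_size - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out
```

Padding instead of stretching means object aspect ratios are unchanged, so the adaptive anchor boxes computed in step S1 remain valid.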
In step S2, the 608x608x3 image is sliced to a size of 304x304x12, and a convolution with 32 kernels is then applied to obtain a 304x304x32 feature map; the convolutional feature maps are max-pooled, and feature maps of different sizes are spliced together. Specifically, the backbone network module comprises Focus, CSP (cross-stage partial network) and SPP (spatial pyramid pooling) structures. Focus does not exist in the other YOLO network structures and is mainly used for the slicing operation: for example, a 608x608x3 image becomes 304x304x12 after the slicing operation in the Focus structure, and a convolution with 32 kernels then produces a 304x304x32 feature map, which reduces the amount of computation and improves the real-time performance of the model. CSPNet enhances the learning capability of the CNN and solves the problem of duplicated gradient information during network optimization in other large convolutional-neural-network backbones; by integrating the gradient changes into the feature map from beginning to end, it maintains accuracy while reducing weight, relieves the computation bottleneck and lowers the memory cost. SPP (spatial pyramid pooling) is adopted in the Backbone: the feature maps are max-pooled and feature maps of different scales are spliced together. The Backbone improved by CSPNet thus raises detection performance, enhances the learning capability of the CNN, reduces computation and speeds up inference.
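The Focus slicing step (608×608×3 → 304×304×12) simply interleaves every second pixel into the channel dimension, which is why no information is lost before downsampling. A sketch in NumPy (the subsequent 32-kernel convolution is only indicated in a comment, since it needs learned weights):

```python
import numpy as np

def focus_slice(x):
    """Slice an H x W x C image into (H/2) x (W/2) x 4C by sampling
    every second pixel, as in the YOLOv5 Focus structure. All pixels
    survive, rearranged into channels, so the operation is lossless."""
    return np.concatenate(
        [x[0::2, 0::2], x[1::2, 0::2], x[0::2, 1::2], x[1::2, 1::2]],
        axis=-1,
    )

x = np.zeros((608, 608, 3), dtype=np.uint8)
y = focus_slice(x)  # shape (304, 304, 12); a 32-kernel conv then maps 12 -> 32 channels
```

Compared with an ordinary strided convolution on the full-resolution image, operating on the 304×304×12 tensor halves the spatial extent of every subsequent layer, which is the source of the computation saving mentioned above.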
In step S3, a bidirectional feature pyramid network is adopted to fuse features of different resolutions in the picture into a multi-scale feature fusion, which is merged into the YOLO v5s network structure, and the information of the high-resolution P2 feature is introduced into the multi-scale feature fusion. Specifically, in the connection module, the Neck sits between the Backbone and the Head and serves to connect and extract fused features. The bidirectional feature pyramid network (BiFPN) performs efficient bidirectional cross-scale connection and weighted feature fusion: when fusing features of different resolutions, an extra weight is added for each input, and the network learns the importance of each input feature. Compared with earlier target detection algorithms that treat features of different scales equally, this has a clear advantage. For the weighting, the fast-normalization-based fusion method matches the Softmax-based method in learning ability and accuracy, but runs about 30% faster on the GPU; BiFPN finally integrates the bidirectional cross-scale connections with the fast-normalized fusion method.
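The fast-normalized weighted fusion that BiFPN uses instead of Softmax can be sketched as follows. The ReLU-clamped weights and the small ε come from the BiFPN formulation; treating the weights as plain arrays rather than learned parameters, and assuming the inputs have already been resampled to the same resolution, are simplifications made here:

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """Fuse same-resolution feature maps as
    out = sum(w_i * f_i) / (sum(w_i) + eps).
    Weights are clamped non-negative (ReLU), so the output is a
    normalized combination without the cost of a Softmax."""
    w = np.maximum(np.asarray(weights, dtype=np.float32), 0.0)  # ReLU clamp
    num = sum(wi * f for wi, f in zip(w, features))
    return num / (w.sum() + eps)
```

During training the weights are parameters updated by gradient descent, so the network learns how important each input scale is, which is the behaviour described in the paragraph above.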
The method is improved on the basis of YOLO v5s, the BiFPN multi-scale feature fusion and the YOLO v5s target detection framework are combined into a whole, meanwhile, the receptive field of a model is expanded by fully utilizing high-resolution low-level features, namely, a bidirectional network is utilized while the original channel dimension splicing method of the YOLO v5s is adopted on the problem of model feature fusion; on the basis, in order to improve the effect of the precision of the detection of the small target, the feature C2 of a lower hierarchy than C3 is utilized, and the information of the P2 feature of high resolution is introduced into feature fusion.
In step S4, CIOU_Loss is adopted as the loss function to detect and distinguish the relative position of the prediction box within each target box. Specifically, the detection module comprises the bounding-box loss function and NMS (non-maximum suppression). The invention uses CIOU_Loss instead of GIOU_Loss as the loss function, which solves the problem that GIOU_Loss cannot distinguish the relative position of the prediction box inside a target box; at the same time, NMS is replaced by DIOU_NMS, which selects a better final prediction box when several candidate prediction boxes remain. The penalty term of CIoU adds an influence factor $\alpha v$ to the penalty term of DIoU; this factor takes the aspect ratios of the prediction box and the target box into account:

$$R_{CIoU} = \frac{\rho^2(b, b^{gt})}{c^2} + \alpha v$$

where $\alpha$ is a trade-off parameter:

$$\alpha = \frac{v}{(1 - IoU) + v}$$

and $v$ is a parameter measuring the consistency of the aspect ratios, defined as:

$$v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2$$

The CIoU loss function is then defined as:

$$L_{CIoU} = 1 - IoU + \frac{\rho^2(b, b^{gt})}{c^2} + \alpha v$$

where the middle term is the DIoU penalty $R_{DIoU}$, the normalized distance between the center points $b$ and $b^{gt}$ of the two boxes ($\rho$ is the Euclidean distance and $c$ the diagonal length of the smallest box enclosing both):

$$R_{DIoU} = \frac{\rho^2(b, b^{gt})}{c^2}$$
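The CIoU definitions above can be checked numerically. A sketch for axis-aligned boxes in (x1, y1, x2, y2) format, following the standard CIoU formulation rather than any code from the patent itself:

```python
import math

def ciou_loss(box_p, box_g):
    """CIoU loss = 1 - IoU + rho^2/c^2 + alpha*v for two (x1,y1,x2,y2) boxes."""
    px1, py1, px2, py2 = box_p
    gx1, gy1, gx2, gy2 = box_g
    # intersection and union
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    area_p = (px2 - px1) * (py2 - py1)
    area_g = (gx2 - gx1) * (gy2 - gy1)
    iou = inter / (area_p + area_g - inter)
    # squared center distance rho^2 and enclosing-box diagonal c^2
    rho2 = ((px1 + px2) - (gx1 + gx2)) ** 2 / 4 + ((py1 + py2) - (gy1 + gy2)) ** 2 / 4
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency v and trade-off factor alpha
    v = (4 / math.pi ** 2) * (math.atan((gx2 - gx1) / (gy2 - gy1))
                              - math.atan((px2 - px1) / (py2 - py1))) ** 2
    alpha = v / ((1 - iou) + v) if v > 0 else 0.0
    return 1 - iou + rho2 / c2 + alpha * v
```

Unlike GIoU, the center-distance term stays non-zero when the prediction box lies entirely inside the target box, which is exactly the relative-position ambiguity that the switch to CIoU_Loss resolves.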
The classification loss in the training stage uses the Focal loss function (a GFL (generalized focal loss) function can also be used; training then proceeds on gold-labeled sample data, and once training of the model is stable, the model is saved every fixed number of iterations), which accelerates model convergence and improves real-time performance.
In target detection for computer vision, the numbers of positive and negative samples are extremely unbalanced, so weights can be assigned by difficulty: hard samples receive larger weights, turning the class-balance problem of positives and negatives into one of hard positives and hard negatives. Hard positives and hard negatives are further balanced through the cross-entropy combination parameters. The Focal loss helps speed up model convergence, giving the model better real-time performance and an effective balance between recognition speed and accuracy.
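The down-weighting of easy samples described above can be sketched for the binary case as follows (γ = 2 and α = 0.25 are the usual defaults for focal loss; the patent does not state its values):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one prediction p in (0, 1) and label y in {0, 1}.
    The (1 - p_t)^gamma factor shrinks the loss of well-classified
    (easy) examples, so gradients concentrate on hard positives and
    hard negatives despite the class imbalance."""
    p_t = p if y == 1 else 1 - p            # probability of the true class
    alpha_t = alpha if y == 1 else 1 - alpha
    return -alpha_t * (1 - p_t) ** gamma * math.log(p_t)
```

With γ = 0 and α = 0.5 this reduces to (half of) the ordinary cross-entropy, which makes the role of the two parameters easy to see.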
In the invention, the ability to detect multi-scale targets is an important criterion for measuring the performance of a target detection algorithm. For model feature fusion, a bidirectional network is used alongside the original channel-dimension concatenation of YOLO v5s; on this basis, to improve small-target detection accuracy, the feature C2, one level lower than C3, is utilized and the information of the high-resolution P2 feature is introduced into the feature fusion, so that detection accuracy is improved while the inference speed is essentially maintained, with a particularly marked improvement for small and medium targets.
Through adaptive feature fusion and receptive-field enhancement, the method largely preserves channel information during feature propagation, adaptively learns different receptive fields in each feature map, strengthens the representation of the feature pyramid, and effectively improves the accuracy of multi-scale target recognition. Small-target detection based on YOLO v5s thus achieves high accuracy, high speed, strong real-time performance and high overall performance.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (9)

1. A target detection method based on a YOLO v5s network structure is characterized by comprising the following steps:
s1, collecting an input picture, performing mosaic data enhancement processing on the picture, performing adaptive anchor frame calculation to obtain an optimal anchor frame value, and performing picture size processing to scale the picture to a fixed size;
s2, carrying out slicing operation processing on the picture, carrying out convolution operation of 32 convolution kernels, carrying out pooling processing on the feature maps of the convolution layers of the picture, and splicing the feature maps of the convolution layers with different sizes together;
s3, fusing the features of different resolutions in the picture by adopting a bidirectional feature pyramid network to form multi-scale feature fusion, and merging the multi-scale feature fusion into a YOLO v5S network structure;
and S4, detecting the image with the fused features, and then outputting.
2. The method for detecting the target based on the YOLO v5S network structure as claimed in claim 1, wherein in step S3, the information of the P2 feature with high resolution is introduced into the multi-scale feature fusion.
3. The method as claimed in claim 2, wherein in step S4, CIOU_Loss is used as the loss function to detect and distinguish the relative position of the prediction box in each target box.
4. The method for detecting the target based on the YOLO v5S network structure of claim 3, wherein in step S1, the batch is taken out from the data set in the picture, 4 pictures are taken out from the batch at random, the random positions are cut and spliced to form a new picture, and the mosaic data enhancement processing is performed after the cycle.
5. The method for detecting the target based on the YOLO v5S network structure as claimed in claim 4, wherein in step S1, anchor frames with different initial lengths and widths are set in different data sets, during data training, a prediction frame is obtained on the basis of the initial anchor frame, the prediction frame is compared with a real frame, the difference between the prediction frame and the real frame is calculated, network structure parameters are updated iteratively in a reverse direction, and an optimal anchor frame value is obtained by performing adaptive anchor frame calculation.
6. The method of claim 5, wherein in step S1, the picture size is scaled during training in the YOLO v5S network structure.
7. The method of claim 6, wherein in step S2, the size of the 608x608x3 image is changed to 304x304x12 after the slicing operation, and then the convolution operation with 32 convolution kernels is performed to obtain the 304x304x32 feature map.
8. The method of claim 7, wherein in step S2, the convolutional layer feature maps are maximally pooled, and convolutional layer feature maps of different sizes are concatenated together.
9. A target detection system based on a YOLO v5s network structure, characterized by comprising:
the input module is used for acquiring an input picture, performing mosaic data enhancement processing on the picture, performing self-adaptive anchor frame calculation to obtain an optimal anchor frame value, and performing picture size processing and scaling to a fixed size;
the backbone network module is used for carrying out slicing operation processing on the pictures, carrying out convolution operation of 32 convolution kernels, carrying out pooling processing on the feature maps of the convolution layers of the pictures and splicing the feature maps of the convolution layers with different sizes;
the connection module is used for fusing the features with different resolutions in the picture by adopting a bidirectional feature pyramid network;
and the output module is used for outputting the picture after the features are fused.
CN202210528222.5A 2022-05-16 2022-05-16 Target detection method and system based on YOLO v5s network structure Pending CN115205518A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210528222.5A CN115205518A (en) 2022-05-16 2022-05-16 Target detection method and system based on YOLO v5s network structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210528222.5A CN115205518A (en) 2022-05-16 2022-05-16 Target detection method and system based on YOLO v5s network structure

Publications (1)

Publication Number Publication Date
CN115205518A true CN115205518A (en) 2022-10-18

Family

ID=83574653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210528222.5A Pending CN115205518A (en) 2022-05-16 2022-05-16 Target detection method and system based on YOLO v5s network structure

Country Status (1)

Country Link
CN (1) CN115205518A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115410140A (en) * 2022-11-02 2022-11-29 中国船舶集团有限公司第七〇七研究所 Image detection method, device, equipment and medium based on marine target


Similar Documents

Publication Publication Date Title
CN111126472B (en) SSD (solid State disk) -based improved target detection method
CN108647585B (en) Traffic identifier detection method based on multi-scale circulation attention network
CN112884064B (en) Target detection and identification method based on neural network
CN111950453A (en) Optional-shape text recognition method based on selective attention mechanism
CN113505792B (en) Multi-scale semantic segmentation method and model for unbalanced remote sensing image
CN113128558B (en) Target detection method based on shallow space feature fusion and adaptive channel screening
CN113591795A (en) Lightweight face detection method and system based on mixed attention feature pyramid structure
CN111860683B (en) Target detection method based on feature fusion
CN113052834A (en) Pipeline defect detection method based on convolution neural network multi-scale features
CN113780132A (en) Lane line detection method based on convolutional neural network
CN111832453A (en) Unmanned scene real-time semantic segmentation method based on double-path deep neural network
CN115908772A (en) Target detection method and system based on Transformer and fusion attention mechanism
CN112070040A (en) Text line detection method for video subtitles
CN115527096A (en) Small target detection method based on improved YOLOv5
CN116977674A (en) Image matching method, related device, storage medium and program product
CN115205518A (en) Target detection method and system based on YOLO v5s network structure
CN111612803B (en) Vehicle image semantic segmentation method based on image definition
CN116503709A (en) Vehicle detection method based on improved YOLOv5 in haze weather
CN116342536A (en) Aluminum strip surface defect detection method, system and equipment based on lightweight model
CN115861595A (en) Multi-scale domain self-adaptive heterogeneous image matching method based on deep learning
CN114743148A (en) Multi-scale feature fusion tampering video detection method, system, medium, and device
CN112800952B (en) Marine organism identification method and system based on improved SSD algorithm
CN115375966A (en) Image countermeasure sample generation method and system based on joint loss function
CN116758363A (en) Weight self-adaption and task decoupling rotary target detector
CN115035408A (en) Unmanned aerial vehicle image tree species classification method based on transfer learning and attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination