CN116030255A - System and method for three-dimensional point cloud semantic segmentation


Info

Publication number
CN116030255A
CN116030255A (application CN202310080218.1A)
Authority
CN
China
Prior art keywords
point cloud
point
feature
module
scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310080218.1A
Other languages
Chinese (zh)
Inventor
李�浩
刘晓龙
邱晨阳
王稳
陈亦敏
李信衍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunnan University YNU
Original Assignee
Yunnan University YNU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunnan University YNU filed Critical Yunnan University YNU
Priority to CN202310080218.1A
Publication of CN116030255A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a system and method for three-dimensional point cloud semantic segmentation, comprising an independent feature extraction module, a preprocessing module, a grouping module, a neighborhood feature extraction module and a feature fusion module. The independent feature extraction module extracts global features of the point cloud; the preprocessing module obtains center points of the point cloud by a farthest point sampling method and then selects the point cloud within a fixed radius around each center point; the grouping module rebuilds the preprocessed point cloud sets according to a dilation coefficient; the neighborhood feature extraction module extracts local features of the point cloud; and the feature fusion module fuses the features extracted by each channel with the global features. By extracting global features of the point cloud with a lightweight network and constructing a grouping module, the invention greatly reduces the number of nodes in the point cloud graph structure and shortens the training period; at the same time, the grouping module enriches the point cloud information and expands the receptive field of the point cloud, which greatly helps feature extraction on sparse point clouds.

Description

System and method for three-dimensional point cloud semantic segmentation
Technical Field
The invention relates to the technical field of three-dimensional point cloud segmentation, in particular to a system and a method for three-dimensional point cloud semantic segmentation.
Background
A point cloud is unstructured data, and graph structures are well suited to processing such irregular data; in recent years, graph-convolution-based methods have developed rapidly in point cloud classification and segmentation. Most current graph-convolution methods first construct the original point cloud into a graph structure and then learn features of points and edges in the spatial or spectral domain, thereby describing the local geometric characteristics of the three-dimensional point cloud. However, building all points into one graph structure leads to a long training period, and graph-convolution methods cannot accurately divide object boundaries in a three-dimensional point cloud semantic segmentation task, so the finally obtained object categories are inaccurate.
The existing three-dimensional point cloud segmentation technology has the following defects: 1. The training period is too long: most current graph-convolution methods build a graph structure from the entire original point cloud and then obtain point cloud features through graph convolution. For a large-scene point cloud, such a graph structure is too bulky, the training period is particularly long, and the demands on device performance are high. 2. The segmentation effect on sparse point clouds is poor: during acquisition, points close to the collecting device are dense while points far from it are sparse. Most current point cloud segmentation techniques do not consider this near-dense, far-sparse property, so they segment sparse point clouds poorly. 3. Point cloud categories at object junctions cannot be accurately distinguished: the features learned by most current graph-convolution methods are susceptible to adjacent objects, making it difficult to divide object boundaries.
Therefore, a new three-dimensional point cloud segmentation technique is needed to solve the above three problems.
Disclosure of Invention
The invention aims to address the defects of the prior art by providing a three-dimensional point cloud segmentation system and method based on graph attention convolution, solving the problems that, in three-dimensional point cloud semantic segmentation, the training period is too long, sparse point cloud features cannot be captured, and object boundaries cannot be accurately divided.
The technical scheme of the invention is as follows:
The invention discloses a three-dimensional point cloud semantic segmentation system, which comprises an independent feature extraction module, a preprocessing module, a grouping module, a neighborhood feature extraction module and a feature fusion module; the independent feature extraction module extracts global features of the point cloud; the preprocessing module obtains center points of the point cloud by a farthest point sampling method and then selects the point cloud within a fixed radius around each center point; the grouping module rebuilds the preprocessed point cloud sets according to a dilation coefficient; the neighborhood feature extraction module extracts features of the point cloud; and the feature fusion module fuses the features extracted by each channel with the global features.
Further, the neighborhood feature extraction module comprises 8 network layers: layers 1 to 6 consist of a graph attention convolution layer and a graph pooling layer, and layers 7 to 8 consist of feature interpolation and skip connection layers; all convolution layers are processed with batch normalization and activated by an ELU function.
The invention also discloses a three-dimensional point cloud semantic segmentation method, which specifically comprises the following steps:
step 1, inputting an original point cloud into an independent feature extraction module, and performing global feature extraction operation on the original point cloud;
step 2, grouping the original point cloud by a farthest point sampling method and a ball query to obtain local areas of the point cloud;
step 3, grouping the point clouds obtained through the preprocessing in steps 1 and 2, and dividing them into three channels according to the K-nearest-neighbor value;
step 4, inputting the point cloud of each channel into the constructed neighborhood feature extraction module to extract local features of the point cloud.
Further, the specific operation method in step 1 is as follows: inputting the original point cloud into the independent feature extraction module, aligning all points, then mapping the aligned point features into a high-dimensional space with a multi-layer perceptron, and finally retaining the global features of the point cloud by a max pooling operation.
Further, the farthest point sampling method in step 2 is: randomly setting a point as the first center point, selecting the point farthest from the existing center points and adding it as a new center point, and iterating until a predetermined number of center points is obtained; after all center points are obtained, a point cloud ball of radius r is taken with each center point as the sphere center.
Further, in step 3, the input point cloud P = (p_1, p_2, ..., p_n) ∈ R^3 is constructed into a point cloud graph structure G = (V, E) according to neighborhood information, where V = {1, 2, ..., n} represents the vertices of the graph, n represents the number of points, and E represents the edges between points. Define M_(k,d)(v) as the dilated point cloud set of vertex v, where k represents the number of neighboring points and d represents the dilation ratio. A dilated KNN algorithm is defined for the point cloud graph structure, which returns k nearest points from the k×d-neighborhood by skipping every d neighboring points: assuming {u_1, u_2, ..., u_(k×d)} are the k×d nearest neighbors sorted by distance, the dilated point cloud set of vertex v with dilation ratio d obtained by the dilated KNN algorithm is defined by the formula:

M_(k,d)(v) = {u_1, u_(1+d), u_(1+2d), ..., u_(1+(k-1)d)}.
Further, in step 4, the neighborhood feature extraction module uses a graph attention convolution method: in the encoding process, a graph convolutional neural network extracts features from point cloud graph structures at different scales, an attention mechanism introduced into the convolution assigns a reasonable weight to each neighborhood point, and graph pooling then downsamples to reduce the point cloud resolution in each feature channel; in the decoding process, the extracted local features of the point cloud are refined by feature interpolation and skip connections, in a manner similar to the PointNet++ feature extraction.
Further, the specific method of graph pooling is as follows: with h'_l as the point cloud graph structure output feature on the l-th scale of the graph pyramid, the pooling formula on the (l+1)-th scale is defined as follows:

h_v = pooling{h'_j : j ∈ N_l(v)}, h_v ∈ H_(l+1)

where N_l(v) represents the neighborhood of vertex v on the l-th scale, h represents a feature, and H_(l+1) represents the feature set on the (l+1)-th scale.
Further, the pooling of the graph is achieved by using maximum pooling.
Further, the specific method of feature interpolation is as follows: feature interpolation is performed on the graph features learned at different scales, with the interpolation scale corresponding to the scale in the learning process. Let H'_l be the feature set learned on the l-th scale of the point cloud graph structure, and let p_l and p_(l-1) be the spatial coordinate sets of the l-th and (l-1)-th scales, respectively. Then, for each point in the p_(l-1) set, its n nearest neighbor vertices are searched in the p_l set, and a feature weighted sum is calculated according to the spatial distance of each neighbor, yielding the features of the (l-1)-th scale.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention solves the problems that, in three-dimensional point cloud semantic segmentation, the training period is too long, sparse point cloud features cannot be captured, and object boundaries cannot be accurately divided. By extracting global features of the point cloud with a lightweight network and constructing a grouping module, the number of nodes in the point cloud graph structure is greatly reduced and the training period is shortened; at the same time, the grouping module enriches the point cloud information and expands the receptive field of the point cloud, greatly helping feature extraction on sparse point clouds.
2. The attention mechanism in the graph attention convolution assigns a reasonable weight to each neighborhood point, so the learned features are less affected by adjacent objects, and point cloud categories at object junctions can be divided more accurately.
3. Through the grouping module, the invention rebuilds the preprocessed point cloud sets according to the dilation coefficient, which greatly reduces the number of nodes in the point cloud graph structure.
Drawings
FIG. 1 is a flow chart of the three-dimensional point cloud segmentation method according to the present invention;
FIG. 2 is a flow chart of a three-dimensional point cloud semantic segmentation method based on graph attention convolution;
FIG. 3 is a schematic diagram of a grouping process of point clouds according to the present invention;
FIG. 4 is a block diagram illustrating the neighborhood feature extraction module according to the present invention.
Detailed Description
It is noted that relational terms such as "first" and "second", and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The technical scheme of the invention is further described in detail below with reference to the examples.
As shown in FIG. 1, the three-dimensional point cloud semantic segmentation system of the invention mainly comprises five modules: an independent feature extraction module, a preprocessing module, a grouping module, a neighborhood feature extraction module and a feature fusion module. The independent feature extraction module extracts global features of the point cloud. The preprocessing module obtains the center points of the point cloud set by the farthest point sampling method, and then selects the point cloud within a fixed radius around each center point. The grouping module rebuilds the preprocessed point cloud sets according to the dilation coefficient, which greatly reduces the number of nodes in the point cloud graph structure. The neighborhood feature extraction module extracts features of the point cloud. The feature fusion module fuses the features extracted by each channel with the global features.
As shown in fig. 2, the invention discloses a three-dimensional point cloud semantic segmentation method, which specifically comprises the following steps:
1. The original point cloud is input into the independent feature extraction module, which performs an input transform on the original point cloud, i.e. aligns all points, maps the aligned point features into a high-dimensional space with a multi-layer perceptron, and retains the global features of the point cloud by a max pooling operation.
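The global feature extraction above can be sketched, in simplified form, as one shared linear layer plus a symmetric max pool (the actual module uses an alignment transform and a multi-layer perceptron; the weights `W` and `b` below are hypothetical placeholders, not values disclosed by the patent):

```python
import numpy as np

def global_feature(points, W, b):
    """Lift each point into a higher-dimensional space with a shared
    linear layer + ReLU, then max-pool over all points so the result
    is invariant to point ordering."""
    per_point = np.maximum(points @ W + b, 0.0)  # (n, d_out) per-point features
    return per_point.max(axis=0)                 # (d_out,) global descriptor

# toy usage: 2 points in R^3 lifted to R^4
pts = np.array([[1.0, 0.0, 0.0],
                [0.0, 2.0, 0.0]])
W = np.eye(3, 4)   # hypothetical weights
b = np.zeros(4)
g = global_feature(pts, W, b)
```

Because max pooling is symmetric, the descriptor `g` does not change if the input points are permuted, which is why it serves as a global feature of the unordered point cloud.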
2. The original point cloud is grouped by the farthest point sampling method and a ball query to obtain local areas of the point cloud. In the sampling process, a point is randomly chosen as the first center point, then the point farthest from the existing center points is added as a new center point; this is iterated until the predetermined number of center points is reached. After all center points are obtained, a point cloud ball of radius r is taken with each center point as the sphere center.
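A minimal sketch of the farthest point sampling and ball query described above (brute-force NumPy version for illustration; a production implementation would use spatial indexing for large clouds):

```python
import numpy as np

def farthest_point_sampling(points, n_centers, seed=0):
    """Iteratively pick the point farthest from the already-chosen centers."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    centers = [int(rng.integers(n))]      # random initial center
    min_dist = np.full(n, np.inf)         # distance to nearest chosen center
    for _ in range(n_centers - 1):
        d = np.linalg.norm(points - points[centers[-1]], axis=1)
        min_dist = np.minimum(min_dist, d)
        centers.append(int(np.argmax(min_dist)))  # farthest remaining point
    return np.array(centers)

def ball_query(points, center, radius):
    """Return indices of all points inside the sphere of radius r around center."""
    d = np.linalg.norm(points - center, axis=1)
    return np.where(d <= radius)[0]
```

Each returned ball of points forms one local area that the grouping module later rebuilds according to the dilation coefficient.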
3. The point clouds obtained through the preprocessing in the two steps above are grouped and divided into three channels according to the K-nearest-neighbor value. The input point cloud P = (p_1, p_2, ..., p_n) ∈ R^3 is constructed into a point cloud graph structure G = (V, E) according to neighborhood information, where V = {1, 2, ..., n} represents the vertices of the graph, n represents the number of points, and E represents the edges between points. Define M_(k,d)(v) as the dilated point cloud set of vertex v, where k represents the number of neighboring points and d represents the dilation ratio. A dilated KNN algorithm is defined for the point cloud graph structure, which returns k nearest points from the k×d-neighborhood by skipping every d neighboring points. Assuming {u_1, u_2, ..., u_(k×d)} are the k×d nearest neighbors sorted by distance, the dilated point cloud set of vertex v with dilation ratio d obtained by the dilated KNN algorithm is:

M_(k,d)(v) = {u_1, u_(1+d), u_(1+2d), ..., u_(1+(k-1)d)}
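Under this definition, the dilated KNN can be sketched as follows (brute-force NumPy version for illustration):

```python
import numpy as np

def dilated_knn(points, v, k, d):
    """Return M_(k,d)(v): sort the k*d nearest neighbors of vertex v by
    distance (excluding v itself) and keep every d-th one, i.e. the
    neighbors ranked 1, 1+d, 1+2d, ..., 1+(k-1)d."""
    dist = np.linalg.norm(points - points[v], axis=1)
    ranked = np.argsort(dist)[1:k * d + 1]  # the k*d nearest, v excluded
    return ranked[::d][:k]                  # skip every d neighbors
```

With dilation ratio d = 1 this reduces to ordinary KNN; larger d widens the receptive field without increasing k, matching the grouping illustrated in FIG. 3.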
4. The point cloud of each channel is input into the constructed neighborhood feature extraction module to extract local features of the point cloud. The core of the module is the graph attention convolution method; neighborhood feature extraction is an encoding-decoding process. In the encoding process, a graph convolutional neural network extracts features from point cloud graph structures at different scales; an attention mechanism introduced into the convolution assigns a reasonable weight to each neighborhood point, and graph pooling then downsamples to reduce the point cloud resolution in each feature channel. In the decoding process, the extracted local features of the point cloud are refined by feature interpolation and skip connections, in a manner similar to the PointNet++ feature extraction.
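The patent does not disclose the exact attention formula; the sketch below uses the common graph-attention scheme (score each neighbor by a learned vector `a` applied to the concatenated center and neighbor features, then softmax the scores into weights) purely to illustrate how weights could be assigned to neighborhood points:

```python
import numpy as np

def graph_attention_aggregate(h_center, h_neighbors, a):
    """Weight each neighbor feature by a softmax over attention scores,
    then sum; `a` plays the role of a learned attention vector."""
    scores = np.array([a @ np.concatenate([h_center, h_j]) for h_j in h_neighbors])
    scores = scores - scores.max()                 # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return (weights[:, None] * h_neighbors).sum(axis=0)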
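```

With a zero attention vector the weights are uniform and the aggregation degenerates to a plain mean over the neighborhood, which makes the effect of learned weights easy to see.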
As shown in FIG. 3, the grouping process of the point cloud is illustrated: first, the points around a center point are sorted by distance. When the dilation ratio is 1, the features of the center point are computed from the neighbors ranked 1, 2, 3, 4 by distance, as shown in FIG. 3(a); when the dilation ratio is 2, from the neighbors ranked 1, 3, 5, 7, as shown in FIG. 3(b); and when the dilation ratio is 4, from the neighbors ranked 1, 5, 9, 13, as shown in FIG. 3(c).
As shown in FIG. 4, the neighborhood feature extraction module comprises 8 network layers: layers 1 to 6 consist of a graph attention convolution layer and a graph pooling layer, and layers 7 to 8 consist of feature interpolation and skip connection layers; all convolution layers are processed with batch normalization and activated by an ELU function.
Pooling is an operation that follows convolution in convolutional neural networks; its essence is to reduce the number of parameters, preserve the main features, and remove redundant information. Similarly, graph pooling takes the output of the previous graph attention convolution layer as its input, and its pooled result serves as the input of the next graph attention convolution layer. The purpose of graph pooling is to preserve the main features of the upper-layer neighborhood points while reducing the feature resolution of the point cloud and accelerating feature extraction. Denoting the graph attention convolution output feature of the l-th layer as h'_l, the pooling formula at layer l+1 is defined as follows:
h_v = pooling{h'_j : j ∈ N_l(v)}, h_v ∈ H_(l+1)

where N_l(v) represents the neighborhood of vertex v on the l-th scale, h represents a feature, and H_(l+1) represents the feature set on the (l+1)-th scale; this pooling is implemented with maximum pooling.
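With max as the pooling operator, the formula above can be sketched as (the neighborhood index lists are assumed to come from the graph construction):

```python
import numpy as np

def graph_max_pool(features, neighborhoods):
    """h_v = max{h'_j : j in N_l(v)}: for every center vertex v on the
    next scale, take the feature-wise maximum over its neighborhood,
    producing the feature set H_(l+1)."""
    return np.stack([features[idx].max(axis=0) for idx in neighborhoods])
```

Each row of the result is one pooled vertex feature on the coarser scale, so the point cloud resolution drops while the strongest responses in each neighborhood survive.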
To obtain features at the same scale as the original input, feature interpolation must be performed on the graph features learned at the various scales, with the interpolation scale corresponding to the scale in the learning process. Suppose H'_l is the feature set learned on the l-th scale of the point cloud graph structure, and p_l and p_(l-1) are the spatial coordinate sets of the l-th and (l-1)-th scales, respectively. Then, for each point in the p_(l-1) set, its n nearest neighbor vertices are searched in the p_l set, and a feature weighted sum is calculated according to their spatial distances, yielding the features of the (l-1)-th scale and realizing the feature interpolation operation.
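A sketch of this interpolation step, using inverse-distance weighting over the n nearest coarse-scale vertices (the patent only says "a feature weighted sum according to the spatial distance"; inverse-distance weights, as in PointNet++, are assumed here):

```python
import numpy as np

def interpolate_features(p_l, h_l, p_prev, n=3, eps=1e-8):
    """For each point of the finer scale l-1 (p_prev), find its n nearest
    vertices in the coarser scale l (p_l) and compute an
    inverse-distance-weighted sum of their features h_l."""
    out = np.zeros((p_prev.shape[0], h_l.shape[1]))
    for i, q in enumerate(p_prev):
        dist = np.linalg.norm(p_l - q, axis=1)
        idx = np.argsort(dist)[:n]       # n nearest coarse vertices
        w = 1.0 / (dist[idx] + eps)      # closer vertices weigh more
        w /= w.sum()
        out[i] = w @ h_l[idx]
    return out
```

A fine point midway between two coarse vertices receives the average of their features, while a fine point coinciding with a coarse vertex essentially copies that vertex's feature.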
The detailed description of the present application is specific and detailed, but is not intended to limit the scope of the application in any way. It should be noted that, for those skilled in the art, several variations and modifications can be made without departing from the technical solution of the present application, which fall within the protection scope of the present application.

Claims (10)

1. A three-dimensional point cloud semantic segmentation system, characterized by comprising an independent feature extraction module, a preprocessing module, a grouping module, a neighborhood feature extraction module and a feature fusion module; the independent feature extraction module extracts global features of the point cloud; the preprocessing module obtains center points of the point cloud by a farthest point sampling method and then selects the point cloud within a fixed radius around each center point; the grouping module rebuilds the preprocessed point cloud sets according to a dilation coefficient; the neighborhood feature extraction module extracts features of the point cloud; and the feature fusion module fuses the features extracted by each channel with the global features.
2. The system for three-dimensional point cloud semantic segmentation according to claim 1, wherein the neighborhood feature extraction module comprises 8 network layers: layers 1 to 6 consist of a graph attention convolution layer and a graph pooling layer, and layers 7 to 8 consist of feature interpolation and skip connection layers; all convolution layers are processed with batch normalization and activated by an ELU function.
3. The three-dimensional point cloud semantic segmentation method is characterized by comprising the following steps of:
step 1, performing global feature extraction operation on an original point cloud;
step 2, grouping the original point cloud by adopting a furthest point sampling method and a ball query algorithm to obtain a local area of the point cloud;
step 3, grouping the point clouds obtained by preprocessing, and dividing the point clouds into three channels according to K neighbor values;
and 4, extracting the local characteristics of the point cloud of each channel.
4. The method for semantic segmentation of three-dimensional point cloud according to claim 3, wherein the specific operation method in step 1 is as follows: inputting the original point cloud into an independent feature extraction module, aligning all points, then mapping the aligned point features into a high-dimensional space with a multi-layer perceptron, and finally retaining the global features of the point cloud by a max pooling operation.
5. The method of three-dimensional point cloud semantic segmentation according to claim 3, wherein the farthest point sampling method in step 2 is: randomly setting a point as the first center point, selecting the point farthest from the existing center points and adding it as a new center point, and iterating until a predetermined number of center points is obtained; after all center points are obtained, taking each center point as a sphere center to obtain a point cloud ball of radius r.
6. The method according to claim 3, wherein in step 3, the input point cloud P = (p_1, p_2, ..., p_n) ∈ R^3 is constructed into a point cloud graph structure G = (V, E) according to neighborhood information, where V = {1, 2, ..., n} represents the vertices of the graph, n represents the number of points, and E represents the edges between points; M_(k,d)(v) is defined as the dilated point cloud set of vertex v, where k represents the number of neighboring points and d represents the dilation ratio; a dilated KNN algorithm is defined for the point cloud graph structure, which returns k nearest points from the k×d-neighborhood by skipping every d neighboring points; assuming {u_1, u_2, ..., u_(k×d)} are the k×d nearest neighbors sorted by distance, the dilated point cloud set of vertex v with dilation ratio d obtained by the dilated KNN algorithm is defined by the formula:

M_(k,d)(v) = {u_1, u_(1+d), u_(1+2d), ..., u_(1+(k-1)d)}.
7. The method for three-dimensional point cloud semantic segmentation according to claim 3, wherein in step 4 the neighborhood feature extraction module uses a graph attention convolution method: in the encoding process, a graph convolutional neural network extracts features from point cloud graph structures at different scales, an attention mechanism introduced into the convolution assigns a reasonable weight to each neighborhood point, and graph pooling then downsamples to reduce the point cloud resolution in each feature channel; in the decoding process, the extracted local features of the point cloud are refined by feature interpolation and skip connections, in a manner similar to the PointNet++ feature extraction.
8. The method for three-dimensional point cloud semantic segmentation according to claim 7, wherein the specific method of graph pooling is as follows: with h'_l as the point cloud graph structure output feature on the l-th scale of the graph pyramid, the pooling formula on the (l+1)-th scale is defined as follows:

h_v = pooling{h'_j : j ∈ N_l(v)}, h_v ∈ H_(l+1)

where N_l(v) represents the neighborhood of vertex v on the l-th scale, h represents a feature, and H_(l+1) represents the feature set on the (l+1)-th scale.
9. The method of claim 8, wherein the pooling is performed using maximum pooling.
10. The method for semantic segmentation of a three-dimensional point cloud according to claim 7, wherein the specific method of feature interpolation is as follows: feature interpolation is performed on the graph features learned at different scales, with the interpolation scale corresponding to the scale in the learning process; let H'_l be the feature set learned on the l-th scale of the point cloud graph structure, and let p_l and p_(l-1) be the spatial coordinate sets of the l-th and (l-1)-th scales, respectively; then, for each point in the p_(l-1) set, its n nearest neighbor vertices are searched in the p_l set, and a feature weighted sum is calculated according to the spatial distance of each neighbor, obtaining the features of the (l-1)-th scale.
CN202310080218.1A, filed 2023-01-17 (priority 2023-01-17): System and method for three-dimensional point cloud semantic segmentation. Status: Pending. Published as CN116030255A.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310080218.1A CN116030255A (en) 2023-01-17 2023-01-17 System and method for three-dimensional point cloud semantic segmentation


Publications (1)

Publication Number Publication Date
CN116030255A 2023-04-28

Family

ID=86081060




Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117553807A (en) * 2024-01-12 2024-02-13 湘潭大学 Automatic driving navigation method and system based on laser radar
CN117553807B (en) * 2024-01-12 2024-03-22 湘潭大学 Automatic driving navigation method and system based on laser radar


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination