CN116309775A - Method for fusing a multi-basis depth completion network with SLAM - Google Patents

Method for fusing a multi-basis depth completion network with SLAM

Info

Publication number
CN116309775A
CN116309775A (application CN202310184794.0A)
Authority
CN
China
Prior art keywords
depth
slam
frame
base
dense
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310184794.0A
Other languages
Chinese (zh)
Inventor
谢卫健 (Xie Weijian)
钱权浩 (Qian Quanhao)
褚冠宜 (Chu Guanyi)
章国锋 (Zhang Guofeng)
鲍虎军 (Bao Hujun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202310184794.0A priority Critical patent/CN116309775A/en
Publication of CN116309775A publication Critical patent/CN116309775A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for fusing a multi-basis depth completion network with SLAM. The dense depth predicted by multi-basis depth completion is tightly coupled with SLAM: by associating the multi-basis dense depths of the key frames in the SLAM optimization window, global depth consistency is used as a constraint, which improves the depth consistency of the map and effectively reduces the trajectory error of SLAM.

Description

Method for fusing a multi-basis depth completion network with SLAM
Technical Field
The invention relates to the fields of computer vision and computer graphics, in particular to a method for fusing a multi-basis depth completion network with SLAM.
Background
Depth information is important three-dimensional scene information and is widely used in application scenarios such as robotics, autonomous driving, AR, and VR. With depth information, a robot can conveniently accomplish tasks such as obstacle avoidance and grasping; in an AR/VR scene, dense depth enables occlusion, collision detection, and the like, producing more realistic AR/VR effects. In addition, fusing depth information with visual information can solve problems that many purely visual methods cannot. For example, visual SLAM is prone to scale drift, and adding depth information can improve the accuracy of the SLAM system; most SLAM systems can only generate sparse point clouds or semi-dense maps, whereas combining dense depth enables real-time dense reconstruction.
Conventionally, depth information is obtained through sensors such as Kinect and RealSense, but using these depth sensors generally increases hardware cost, and different sensors are also limited by their application scenarios. With the rise of deep learning, much work has been devoted to using neural networks to predict dense depth from images, and to fusing depth prediction networks with SLAM systems.
According to the input data, depth prediction work can be divided into depth prediction from single-frame images, depth completion from sparse depth, and depth prediction from multi-frame images. Depth completion refers to generating a dense depth map from a sparse depth map. Compared with predicting depth directly from an RGB image, depth completion benefits from the prior sparse depth, so it can obtain depth information with an accurate and stable scale and higher precision. Because of accumulated error in the SLAM system, even when the depths of SLAM's three-dimensional points are used, the single-frame depth completion result still cannot guarantee globally consistent depth accuracy. Only by adding the predicted depth information into the joint optimization of SLAM can the global consistency of depth and trajectory be better guaranteed. One conventional approach assumes that the predicted depth is reliable up to scale errors and depth drift; however, this model is very inaccurate and easily destabilizes the optimization. Another conventional approach uses a variational autoencoder and adds the network's latent code into the optimization of SLAM to optimize the global depth; however, every gradient evaluation of this method requires a network inference, which hurts system efficiency, and this fusion scheme has difficulty improving SLAM accuracy. In the present invention, the multiple depth bases predicted by a multi-basis depth completion network are added into the optimization flow of SLAM, and the final depth is optimized by optimizing the weights of the depth bases.
Disclosure of Invention
In order to solve the existing problems in the art, the invention provides a method for fusing a multi-basis depth completion network with SLAM. The method comprises two processes, front-end tracking and back-end mapping: the system receives a sequence of RGB images as input, the front end is responsible for real-time tracking, and a subset of frames is selected and sent to the back end as key frames. Multi-basis dense depth completion is performed on the images sent to the back-end mapping process; then, by aligning the predicted multi-basis dense depths among the back-end key frames, the camera poses and the weights of the bases of the multi-basis dense depths are progressively optimized, and the map is updated.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
the invention provides a method for fusing a multi-base-based depth complement network with SLAM, which is characterized by comprising the following steps:
step 1: the SLAM system running in real time receives RGB images as input; the SLAM front end is responsible for real-time tracking, and a subset of frames is selected and sent to the SLAM back end as key frames;
step 2: the SLAM system running in real time sends the latest back-end key-frame image and the sparse depth corresponding to that key frame into a depth completion neural network based on multi-basis fitting for depth completion; if depth completion succeeds, the resulting dense depth is associated with the key frame; if depth completion fails, the current frame is ignored;
step 3: when the back end has accumulated enough key frames with dense depth information, an optimization window is established, and the weight coefficients of the dense depth bases and the current pose of SLAM are optimized through a multi-frame relative depth consistency algorithm.
In step 1, a subset of frames is selected and sent to the back end as key frames, specifically:
according to the front-end visual tracking state, if the number of feature points that can be tracked between the current frame and the last key frame falls below a certain threshold, the camera is considered to have reached a new scene, and the current frame is added to the back end as a key frame;
alternatively, if a certain amount of time has elapsed since the previous key frame was added, the current frame is added to the back end as a key frame.
As a preferred scheme of the invention, the depth completion neural network based on multi-basis fitting in step 2 outputs several depth bases, and the final dense depth is obtained by a weighted sum of these bases; the network receives an RGB image and the corresponding sparse depth as input, predicts several depth bases as output, and solves the weight of each depth basis by constructing an optimization problem from the sparse depth.
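Expressed as an optimization problem, a plausible least-squares form of this per-frame weight solve (the exact solver is not specified here, so the formulation below is an editorial assumption) is:

$$\mathbf{w}_i^{*} = \arg\min_{\mathbf{w}} \sum_{s \in S_i} \Big( \sum_{k=1}^{n} w^{k} B_i^{k}(p_s) - d_s \Big)^{2}$$

where $S_i$ is the set of sparse depth observations $(p_s, d_s)$ available for frame $i$, and $B_i^k$ are the depth bases defined in step 3 below.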
Preferably, step 3 specifically comprises:
the depth-complement dense depth result is obtained by weighting and summing a plurality of dense depth bases, and the depth prediction of the ith frame image is defined to obtain n depth bases which are recorded as
Figure BDA0004103350200000031
The weight coefficient corresponding to each depth base is +.>
Figure BDA0004103350200000032
Final depth D of the i-th frame i The method can be expressed as follows:
Figure BDA0004103350200000033
Define the camera pose of the i-th frame as $T_i$. For a 2D point $m$ on the i-th frame image, let its coordinates in the image coordinate system be $p_m^i$. Given the depth and the camera pose, back-projection yields the coordinates of point $m$ in the world coordinate system:

$$P_m = T_i \, \pi^{-1}\!\left(p_m^i,\; D_i(p_m^i)\right)$$

where $\pi$ is the projection function, $\pi^{-1}$ is the back-projection function, and $D_i(p_m^i)$ denotes the depth value of $D_i$ at location $p_m^i$. Subsequently, projecting the three-dimensional point $P_m$ into the j-th frame yields the image coordinates $p_m^j$ of the point on the j-th frame image and the corresponding projected depth $d_m^j$:

$$\left(p_m^j,\; d_m^j\right) = \pi\!\left(T_j^{-1} P_m\right)$$
Comparing the depth $d_m^j$ obtained by projection with the depth $D_j(p_m^j)$ predicted at the j-th frame yields the residual term $r_m$ of the relative depth constraint:

$$r_m = d_m^j - D_j\!\left(p_m^j\right)$$
In addition to the relative depth constraint, a depth-basis initial value constraint is added to stabilize the optimization:

$$r_i^k = w_i^k - \hat{w}_i^k$$

where $\hat{w}_i^k$ is the initial weight of the k-th basis of the i-th frame. The relative depth constraint and the depth-basis initial value constraint are added into the BA (bundle adjustment) optimization of SLAM, realizing joint optimization of the depth-basis weights, the SLAM key-frame poses, and the map points.
Compared with the prior art, the invention has the following advantages:
1) The invention adopts a multi-basis depth completion network for depth prediction and represents single-frame depth information by multiple basis weights; by adding the weights of the depth bases into the optimization of the SLAM system, joint optimization of depth, SLAM pose, and point cloud is realized. In the previous method that fuses a depth completion network with a SLAM system via a variational autoencoder, a decoder network must be run at every optimization step to generate a new depth map and its derivatives. The present method needs only a single inference to obtain the depth bases, and the subsequent optimization does not rely on network inference, so the optimization is more efficient.
2) The method adds the weights of the depth bases as variables to be optimized; each image frame introduces only a small number of variables, the number of parameters to be optimized is smaller than in the previous variational-autoencoder-based method, and the solving efficiency is higher.
3) The invention adds relative depth consistency errors between key frames, so that the depths and poses of the key frames can be continuously and simultaneously optimized through the BA optimization of SLAM; compared with previous methods, higher depth accuracy can be obtained.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
FIG. 2 is a diagram showing the reconstruction effect of the present invention; the left side is the point cloud of the SLAM real-time track, and the right side is the reconstruction result.
FIG. 3 is a graph comparing the reconstruction effects before and after the optimization of the present invention, wherein the left side is the reconstruction result without using the method of the present invention, and the right side is the reconstruction result using the method of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings. The technical features of the embodiments of the invention can be combined with one another provided that they do not conflict.
Referring to fig. 1, the method of the present invention comprises two processes, front-end tracking and back-end mapping: the system receives a sequence of RGB images as input, runs SLAM in real time, and selects a subset of frames from the SLAM sliding window to send to the back end as key frames. Multi-basis dense depth completion is performed on the key-frame images sent to the back-end mapping process; then, by aligning the predicted multi-basis dense depths among the back-end key frames, the camera poses and the weights of the bases of the multi-basis dense depths are progressively optimized, and the map is updated.
As shown in fig. 1, in one embodiment the method of the present invention comprises the following steps:
step 1: the SLAM in real-time operation receives RGB images as input, the front end of the SLAM is responsible for real-time tracking, and then a part of frames are selected and sent to the rear end of the SLAM as key frames. In order to record the passed scene and map information, some historical frames are needed to be stored in the map as key frames, and in order to avoid excessive memory occupation caused by serious map redundancy, the key frames are needed to be selected according to a certain strategy. The common strategy is that according to the front-end visual tracking state, if the number of feature points on the current frame and the last key frame which can be tracked is less than a certain threshold, a new scene is considered to be reached, and the current frame is added into the rear end as the key frame;
alternatively, key frames can also be added based on a temporal criterion: for example, if a certain amount of time has elapsed since the last key frame was added, the current frame is added to the back end as a key frame. In practice, the key-frame selection policy may combine several such conditions, as in the sketch below.
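A minimal sketch of such a combined key-frame policy follows; the threshold values and names are illustrative assumptions, not values given in this text:

```python
# Editorial sketch of the key-frame selection policy described above.
# MIN_TRACKED and MAX_GAP_S are assumed values, not specified in the patent.

MIN_TRACKED = 50   # fewest feature tracks shared with the last key frame
MAX_GAP_S = 1.0    # longest allowed time without a new key frame (seconds)

def is_keyframe(n_tracked: int, t_now: float, t_last_kf: float) -> bool:
    """Return True if the current frame should be sent to the back end."""
    if n_tracked < MIN_TRACKED:        # few shared tracks: new scene reached
        return True
    if t_now - t_last_kf > MAX_GAP_S:  # temporal criterion
        return True
    return False
```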
Step 2: the SLAM system running in real time sends the latest back-end key-frame image and the sparse depth corresponding to that key frame into a depth completion neural network based on multi-basis fitting (Qu C., Nguyen T., Taylor C. Depth completion via deep basis fitting. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2020: 71-80) for depth completion; if depth completion succeeds, the resulting dense depth is associated with the key frame; if depth completion fails, the current frame is ignored;
the depth complement neural network based on the multi-base fitting is different from a network which directly predicts dense depth as final output, the depth complement neural network based on the multi-base fitting outputs a plurality of depth bases, and the final dense depth is obtained by weighting and summing the depth bases; the depth complement neural network based on multi-base fitting receives an RGB image and corresponding sparse depth as input, predicts a plurality of depth bases as output, and can solve the optimization problem according to the sparse depth structure, and can continuously optimize the final depth by continuously adjusting the weights of the depth bases.
Step 3: when the back end has accumulated enough key frames with dense depth information, an optimization window is established, and the weight coefficients of the dense depth bases and the current pose of SLAM are optimized through a multi-frame relative depth consistency algorithm. In a preferred embodiment, step 3 of the invention specifically comprises:
the depth-complement dense depth result is obtained by weighting and summing a plurality of dense depth bases, and the depth prediction of the ith frame image is defined to obtain n depth bases which are recorded as
Figure BDA0004103350200000051
The weight coefficient corresponding to each depth base is +.>
Figure BDA0004103350200000052
Final depth D of the i-th frame i The method can be expressed as follows:
Figure BDA0004103350200000053
Define the camera pose of the i-th frame as $T_i$. For a 2D point $m$ on the i-th frame image, let its coordinates in the image coordinate system be $p_m^i$. Given the depth and the camera pose, back-projection yields the coordinates of point $m$ in the world coordinate system:

$$P_m = T_i \, \pi^{-1}\!\left(p_m^i,\; D_i(p_m^i)\right)$$

where $\pi$ is the projection function, $\pi^{-1}$ is the back-projection function, and $D_i(p_m^i)$ denotes the depth value of $D_i$ at location $p_m^i$. Subsequently, projecting the three-dimensional point $P_m$ into the j-th frame yields the image coordinates $p_m^j$ of the point on the j-th frame image and the corresponding projected depth $d_m^j$:

$$\left(p_m^j,\; d_m^j\right) = \pi\!\left(T_j^{-1} P_m\right)$$
Comparing the depth $d_m^j$ obtained by projection with the depth $D_j(p_m^j)$ predicted at the j-th frame yields the residual term $r_m$ of the relative depth constraint:

$$r_m = d_m^j - D_j\!\left(p_m^j\right)$$
In addition to the relative depth constraint, a depth-basis initial value constraint is added to stabilize the optimization:

$$r_i^k = w_i^k - \hat{w}_i^k$$

where $\hat{w}_i^k$ is the initial weight of the k-th basis of the i-th frame. The relative depth constraint and the depth-basis initial value constraint are added into the BA optimization of SLAM, realizing joint optimization of the depth-basis weights, the SLAM key-frame poses, and the map points.
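For concreteness, a sketch of evaluating the relative depth residual $r_m$ for one pixel follows; the pinhole model with intrinsics $K$, camera-to-world pose matrices, and nearest-pixel depth lookup are editorial assumptions:

```python
# Editorial sketch of the relative depth consistency residual between
# key frames i and j. T_i, T_j are 4x4 camera-to-world pose matrices.

import numpy as np

def relative_depth_residual(p_i, K, T_i, T_j, D_i, D_j) -> float:
    """p_i: (u, v) pixel in frame i; D_i, D_j: dense depth maps (H, W)."""
    u, v = int(p_i[0]), int(p_i[1])
    d = D_i[v, u]
    # back-projection: P_m = T_i * pi^{-1}(p_m^i, D_i(p_m^i))
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    P_w = T_i @ np.append(d * ray, 1.0)              # homogeneous world point
    # projection into frame j: (p_m^j, d_m^j) = pi(T_j^{-1} P_m)
    P_c = (np.linalg.inv(T_j) @ P_w)[:3]
    d_proj = P_c[2]
    p_j = (K @ (P_c / d_proj))[:2]
    uj, vj = int(round(p_j[0])), int(round(p_j[1]))
    # residual: projected depth minus the depth predicted at frame j
    return d_proj - D_j[vj, uj]
```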
Description of the preferred embodiments
FIG. 1 is a schematic flow chart of the method of the present invention. The sparse depth points and images generated by the key frames in the sliding window are sent into the multi-basis depth completion network; the network predicts the depth bases corresponding to each image, and the bases are combined with the sparse depth points to optimize the depth of the key frames. After a key frame slides out of the window, the key frames added to the map enter the back-end BA optimization, which jointly optimizes key-frame poses and dense depths, yielding poses and dense depths of higher accuracy. Finally, a TSDF dense reconstruction algorithm can generate a dense model from the key-frame poses and dense depths, as shown in fig. 2. FIG. 2 shows the reconstruction effect of the present invention: the left side is the point cloud of the SLAM real-time trajectory, and the right side is the reconstruction result; through the method, the sparse point cloud is completed into a dense model.
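A minimal sketch of that final TSDF integration step is given below, using the open-source Open3D library; the library choice, voxel parameters, and data layout are editorial assumptions, as the text only states that a TSDF dense reconstruction algorithm is used:

```python
# Editorial sketch: fusing optimized key-frame poses and dense depths
# into a TSDF volume with Open3D. Parameter values are assumed.

import numpy as np
import open3d as o3d

def tsdf_reconstruct(keyframes, intrinsic: o3d.camera.PinholeCameraIntrinsic):
    """keyframes: iterable of (color, depth, pose) where color and depth are
    o3d.geometry.Image and pose is a 4x4 camera-to-world matrix."""
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=0.01,   # 1 cm voxels (assumed)
        sdf_trunc=0.04,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)
    for color, depth, pose in keyframes:
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            color, depth, depth_trunc=5.0, convert_rgb_to_intensity=False)
        # Open3D expects the extrinsic as world-to-camera
        volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))
    return volume.extract_triangle_mesh()
```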
The invention was compared experimentally with previous work on the EuRoC dataset; both the depth accuracy obtained by the method and the accuracy of camera pose estimation are significantly improved over previous methods.
Fig. 3 compares the reconstruction results before and after the optimization of the present invention: the left side is the reconstruction result obtained from the depth without using the method of the present invention, and the right side is the result using the method. Our approach yields a model with higher consistency (the walls of the left model exhibit visible layering).
The invention can be applied to various SLAM/SFM systems: the sparse point cloud generated by the SLAM/SFM algorithm is completed into dense depth by the multi-basis depth completion neural network, and the resulting dense depth can be used to generate more sophisticated AR and VR effects such as occlusion and collision. It can also be used for dense three-dimensional reconstruction; compared with a point cloud model, a dense three-dimensional model has higher utility, for example in three-dimensional object tracking. The foregoing is only a list of specific embodiments of the invention. Obviously, the invention is not limited to the above embodiments, and many variations are possible. All modifications that a person skilled in the art can directly derive or infer from the present disclosure should be considered within the scope of the present invention.

Claims (4)

1. A method for fusing a multi-basis depth completion network with SLAM, comprising the following steps:
step 1: the SLAM system running in real time receives RGB images as input; the SLAM front end is responsible for real-time tracking, and a subset of frames is selected and sent to the SLAM back end as key frames;
step 2: the SLAM system running in real time sends the latest back-end key-frame image and the sparse depth corresponding to that key frame into a depth completion neural network based on multi-basis fitting for depth completion; if depth completion succeeds, the resulting dense depth is associated with the key frame; if depth completion fails, the current frame is ignored;
step 3: when the back end has accumulated enough key frames with dense depth information, an optimization window is established, and the weight coefficients of the dense depth bases and the current pose of SLAM are optimized through a multi-frame relative depth consistency algorithm.
2. The method for fusing a multi-basis depth completion network with SLAM according to claim 1, wherein in step 1 a subset of frames is selected and sent to the back end as key frames, specifically:
according to the front-end visual tracking state, if the number of feature points that can be tracked between the current frame and the last key frame falls below a certain threshold, the camera is considered to have reached a new scene, and the current frame is added to the back end as a key frame;
alternatively, if a certain amount of time has elapsed since the previous key frame was added, the current frame is added to the back end as a key frame.
3. The method for fusing a multi-basis depth completion network with SLAM according to claim 1, wherein the depth completion neural network based on multi-basis fitting in step 2 outputs several depth bases, from which the final dense depth is obtained by weighted summation; the network receives an RGB image and the corresponding sparse depth as input, predicts several depth bases as output, and solves the weight of each depth basis by constructing an optimization problem from the sparse depth.
4. The method for fusing a multi-basis depth completion network with SLAM according to claim 1, wherein step 3 specifically comprises:
the dense depth result of depth completion is obtained by a weighted sum of several dense depth bases; depth prediction on the i-th frame image yields n depth bases, denoted $\{B_i^1, B_i^2, \ldots, B_i^n\}$, with corresponding weight coefficients $\{w_i^1, w_i^2, \ldots, w_i^n\}$; the final depth $D_i$ of the i-th frame can be expressed as:

$$D_i = \sum_{k=1}^{n} w_i^k \, B_i^k$$
defining the camera pose of the i-th frame as $T_i$, for a 2D point $m$ on the i-th frame image with coordinates $p_m^i$ in the image coordinate system, given the depth and the camera pose, back-projection yields the coordinates of point $m$ in the world coordinate system:

$$P_m = T_i \, \pi^{-1}\!\left(p_m^i,\; D_i(p_m^i)\right)$$

wherein $\pi$ is the projection function, $\pi^{-1}$ is the back-projection function, and $D_i(p_m^i)$ denotes the depth value of $D_i$ at location $p_m^i$; subsequently, projecting the three-dimensional point $P_m$ into the j-th frame yields the image coordinates $p_m^j$ of the point on the j-th frame image and the corresponding projected depth $d_m^j$:

$$\left(p_m^j,\; d_m^j\right) = \pi\!\left(T_j^{-1} P_m\right)$$
comparing the depth $d_m^j$ obtained by projection with the depth $D_j(p_m^j)$ predicted at the j-th frame yields the residual term $r_m$ of the relative depth constraint:

$$r_m = d_m^j - D_j\!\left(p_m^j\right)$$
in addition to the relative depth constraint, a depth-basis initial value constraint is added to stabilize the optimization:

$$r_i^k = w_i^k - \hat{w}_i^k$$

wherein $\hat{w}_i^k$ is the initial weight of the k-th basis of the i-th frame; the relative depth constraint and the depth-basis initial value constraint are added into the BA optimization of SLAM, realizing joint optimization of the depth-basis weights, the SLAM key-frame poses, and the map points.
CN202310184794.0A 2023-03-01 2023-03-01 Method for fusing a multi-basis depth completion network with SLAM Pending CN116309775A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310184794.0A CN116309775A (en) 2023-03-01 2023-03-01 Method for fusing a multi-basis depth completion network with SLAM


Publications (1)

Publication Number Publication Date
CN116309775A true CN116309775A (en) 2023-06-23

Family

ID=86817944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310184794.0A Pending CN116309775A (en) 2023-03-01 2023-03-01 Method for fusing depth completion network based on multiple bases with SLAM

Country Status (1)

Country Link
CN (1) CN116309775A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117456124A (en) * 2023-12-26 2024-01-26 Zhejiang University Dense SLAM method based on back-to-back binocular fisheye camera
CN117456124B (en) * 2023-12-26 2024-03-26 Zhejiang University Dense SLAM method based on back-to-back binocular fisheye camera


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination