CN105096341A - Mobile robot pose estimation method based on trifocal tensor and key frame strategy - Google Patents
- Publication number
- CN105096341A CN105096341A CN201510445644.6A CN201510445644A CN105096341A CN 105096341 A CN105096341 A CN 105096341A CN 201510445644 A CN201510445644 A CN 201510445644A CN 105096341 A CN105096341 A CN 105096341A
- Authority
- CN
- China
- Prior art keywords
- pose
- view
- mobile robot
- key frame
- trifocal tensor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Abstract
The invention provides a mobile robot pose estimation method based on the trifocal tensor and a key frame strategy for the visual servo tracking control problem. For a general scene, the trifocal tensor is used to describe the geometric relationship among the initial view, the current view, and the final view, and on this basis the pose of the robot under the final view is obtained up to an unknown scale factor. For large-range visual servo tasks, the robot's pose is estimated with a key frame method, which does not require feature points matched simultaneously across the current, initial, and final views and yields continuous pose measurements in a global coordinate frame, greatly expanding the robot's visual servo workspace. The obtained pose information can be widely used in the design of visual servo controllers.
Description
Technical Field
The invention belongs to the intersection of robotics and computer vision, relates to vision-based pose estimation for mobile robots, and particularly relates to a mobile robot pose estimation method based on the trifocal tensor and a key frame strategy.
Background
With the rapid development of robot technology, robots play an increasingly important role in practice, and the tasks they undertake are ever more complicated and diversified. For mobile robots in particular, conventional control methods usually rely on position-measuring equipment such as GPS and odometers. Their control accuracy is limited by the accuracy of the sensing equipment, task flexibility is poor, and a reference trajectory of the robot in physical space must be given, which increases implementation difficulty and cost. Visual servo control uses visual information as feedback, measures the environment without contact, exploits a larger quantity of information, and improves the flexibility and accuracy of the robot system; it plays an irreplaceable role in robot control.
Classical visual servo control methods can be divided into position-based, image-based, and geometric-constraint-based methods, which differ mainly in how the error system is constructed. According to a recent survey (Jia Bingxi et al. Research progress of robot visual servoing: vision systems and control strategies. Acta Automatica Sinica, 2015, 41(5): 861-873), methods based on image and geometric constraints are more robust to image noise and camera calibration errors, need less prior knowledge, and have a wider application range. However, most visual servo systems require that the target feature points remain at least partially within the field of view during control in order to construct effective visual feedback. Since the camera's field of view is limited, and existing image feature extraction methods lose repeatability and accuracy under large rotations/translations, the accuracy of the visual servo system suffers and its workspace is restricted. In addition, most mobile robots are subject to nonholonomic constraints, i.e. they have three spatial degrees of freedom but only two control degrees of freedom. Salaris et al. (Salaris P, Fontanelli D, Pallottino L, et al. Shortest paths for a robot with nonholonomic and field-of-view constraints. IEEE Transactions on Robotics, 2010, 26(2): 269-281) proposed a trajectory planning strategy that keeps the target visible for a nonholonomic mobile robot, but it requires certain prior knowledge, and the planned path is often tortuous, which actually reduces the robot's working efficiency.
The multi-view geometric constraints commonly used in visual servoing mainly include the homography, epipolar geometry, and the trifocal tensor, and visual servoing based on multi-view geometry generally involves estimating the pose of the robot. Homography-based visual servoing usually combines three-dimensional spatial information and image information to construct the error system, but decomposing the homography matrix yields multiple solutions (four groups), prior knowledge of the target plane is often needed, and for convenience of solution the feature points are often required to be coplanar. The epipolar-geometry-based method estimates the relative pose of the robot from the epipoles between two views, but it degenerates for short baselines and becomes singular when the epipoles are regulated to zero; this only guarantees that the two views have the same orientation and are collinear, so the robot must switch to another control strategy to reach the target pose. The trifocal-tensor-based method estimates the robot's pose from correspondences across three views; it is independent of the scene structure, depends only on the relative poses between the views, and therefore has better generality. However, compared with the two-view geometries of homography and epipolar geometry, the trifocal tensor requires feature points matched across all three views and thus imposes stronger field-of-view constraints. Moreover, feature extraction and matching lose repeatability and precision under large rotation and translation, which affects the accuracy of the visual servo system and limits the robot's visual servo workspace.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides, for the visual servo tracking control problem, a mobile robot pose estimation method based on the trifocal tensor and a key frame strategy, together with a visual servo tracking system and a mobile robot that employ the method.
A mobile robot pose estimation method based on the trifocal tensor and a key frame strategy is used for the visual trajectory tracking task of a mobile robot. Environment images shot by the mobile robot during a teaching process serve as the expected trajectory; key frames are selected from the expected trajectory, and a geometric model of three views is then constructed based on the trifocal tensor so as to extract pose information. The three views are the current view C and any two other views C0, C*.
The geometric model is constructed by computing, for the current view C, the trifocal tensor between C and the two other views C0, C*, thereby obtaining the pose information of the current view relative to C*, which comprises angle information and position information containing an unknown scale factor.
An image sequence containing the initial view and the final view is selected from the expected trajectory as key frames, with adjacent key frames having a matching degree close to a preset threshold. In the pose estimation process, the trifocal tensor between the current view and the two best-matching key frames is calculated to obtain the pose relative to the best-matching key frame, which is then iteratively transformed to the final view to obtain continuous pose measurements in the global coordinate frame.
A visual servo tracking system employing the method described above.
A mobile robot employing the method described above.
The invention has the beneficial effects that:
For large-range visual servo tasks, no feature points matched simultaneously among the current, initial, and final views are needed, which greatly expands the robot's visual servo workspace. Because adjacent key frames have a matching rate close to the preset threshold, the trifocal tensor in the key-frame-based pose estimation algorithm is computed more accurately, improving the accuracy of the system.
Drawings
FIG. 1 is a schematic diagram of a visual servo tracking task;
FIG. 2 is a schematic diagram of the trifocal tensor;
FIG. 3 is a schematic diagram of keyframe based pose estimation;
FIG. 4 is a schematic diagram of pose transformation between key frames.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
For a general scene, the invention describes the geometric relationship among the initial view, the current view, and the final view with the trifocal tensor, and on this basis obtains the robot's pose information up to an unknown scale factor. For the large-range visual servo tracking problem, the fact that the trifocal tensor is independent of the environment structure is fully exploited: a key-frame-based pose estimation method is proposed that computes the pose of the current view relative to the two most similar key frames and then iteratively transforms it to the final view, yielding continuous pose measurements in the global coordinate frame. The mobile robot pose estimation method based on the trifocal tensor is mainly used for the vision-based trajectory tracking problem, which is generally divided into a teaching phase and a tracking phase. As shown in fig. 1, in the teaching phase the robot takes images of the environment during its motion as the expected trajectory; in the tracking phase the robot tracks the expected images based on visual feedback so as to follow the expected trajectory, a process that mainly comprises two parts, pose estimation and control. The invention provides a pose estimation method containing an unknown scale factor; with existing techniques a corresponding adaptive controller can then easily be designed to complete the trajectory tracking task, e.g. (Chen J, Dixon W E, Dawson D M, et al. Homography-based visual servo tracking control of a wheeled mobile robot. IEEE Transactions on Robotics, 2006, 22(2): 406-415). The pose estimation method for the mobile robot mainly comprises two parts: geometric model construction, and key-frame-based pose estimation.
1. Geometric model
As shown in fig. 2, the trifocal tensor {Ti}, i = 1, 2, 3, describes the interrelationship among three views and is independent of the environment structure. For corresponding feature points x0, x, x* of three images I0, I, I*, the following relationship holds:

x0^i x^j x*^k ε_jqs ε_krt Ti^qr = 0_st

where x0^i, x^j, x*^k denote the i-th, j-th, k-th elements of x0, x, x* respectively, Ti^qr denotes the element in the q-th row and r-th column of Ti, and ε_jqs, ε_krt are permutation (Levi-Civita) symbols, defined as +1 for an even permutation of (1,2,3), -1 for an odd permutation, and 0 otherwise.
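The trilinear point relation above can be checked numerically. The sketch below builds a trifocal tensor from canonical camera matrices using the standard slice construction Ti = a_i b4^T - a4 b_i^T (this construction and all variable names are assumptions for illustration, not taken from the patent), projects a random 3-D point into three views, and verifies that the constraint [x']x (Σ_i x0^i Ti) [x*]x vanishes:

```python
import numpy as np

def skew(v):
    # cross-product matrix [v]x such that skew(v) @ w == np.cross(v, w)
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

rng = np.random.default_rng(0)
# canonical cameras: P0 = [I|0], P = [A|a4], P* = [B|b4]
A, a4 = rng.normal(size=(3, 3)), rng.normal(size=3)
B, b4 = rng.normal(size=(3, 3)), rng.normal(size=3)

# trifocal tensor slices T_i = a_i b4^T - a4 b_i^T (standard construction)
T = np.stack([np.outer(A[:, i], b4) - np.outer(a4, B[:, i]) for i in range(3)])

# project one random 3-D point into the three views
X = rng.normal(size=3)
x0, x, xs = X.copy(), A @ X + a4, B @ X + b4

# trilinear constraint: [x]x (sum_i x0[i] * T_i) [x*]x = 0 (3x3 zero matrix)
M = np.tensordot(x0, T, axes=1)
residual = skew(x) @ M @ skew(xs)
print(np.abs(residual).max())  # ~0, up to floating-point round-off
```

The residual is identically zero in exact arithmetic for every corresponding point triple, which is what makes the tensor estimable from image correspondences alone.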
in particular, for a planar mobile robot, consider the initial view C0Current view C, and final view C*The relationship (2) of (c). At C*Establishing a global coordinate system, then view C0Can be expressed as (x)0,z0,θ0) The coordinates of the current view may be expressed as (x, z, θ). Definition of R0,t0Is C0To C*Rotation and translation of (1), definition R, t is C to C*Rotation and translation of (1), three views C, C0,C*The pose transformation relationship between the two is as follows:
r, R0Is represented by R ═ R1r2r3],R0=[r01r02r03]Let t, t0Is expressed as t ═ tx0tz]T,t0=[t0x0t0z]TThen, the trifocal tensor and the relative pose information have the following relationship:
substituting the pose transformation relation to obtain an expression of the relative pose of the trifocal tensor with respect to the three views, wherein the nonzero elements are as follows, TijkRepresents TiRow j and column k:
T111 = t0x cosθ - tx cosθ0,    T113 = t0z cosθ + tx sinθ0,
T131 = -t0x sinθ - tz cosθ0,   T133 = -t0z sinθ + tz sinθ0,
T212 = -tx,  T221 = -t0x,  T223 = t0z,  T232 = -tz,
T311 = t0x sinθ - tx sinθ0,    T313 = t0z sinθ - tx cosθ0,
T331 = t0x cosθ - tz sinθ0,    T333 = t0z cosθ - tz cosθ0.
Based on the images taken under the three views, the trifocal tensor can be estimated up to an unknown scale factor. Define d = ||t0|| = sqrt(t0x^2 + t0z^2) as the distance from C0 to C*. In general C0 does not coincide with the coordinate origin of C*, i.e. d ≠ 0, and since sqrt(T221^2 + T223^2) = d, the trifocal tensor is normalized as follows:

T'ijk = α Tijk / sqrt(T221^2 + T223^2)
where α is a sign variable, which can be determined from sign(T221)·sign(t0x) or sign(T223)·sign(t0z). The signs of t0x and t0z do not change during control and can be obtained from prior knowledge in the teaching process.
From the above derivation, define txm = tx/d and tzm = tz/d as the position information up to the unknown scale factor; they can be computed from the normalized trifocal tensor as

txm = -T'212,  tzm = -T'232.
the rotation angle θ of the robot can be determined by the following equation:
Define xm = x/d and zm = z/d as the coordinate information up to the unknown scale; inverting the camera-frame translation t = -R^T [x 0 z]^T yields

xm = -(txm cosθ + tzm sinθ),  zm = txm sinθ - tzm cosθ.
in the control system, (t) may be usedxm,tzmθ) or (x)m,zmAnd theta) expresses the current pose information of the robot. In addition, given a sequence of images, the desired trajectory can be calculated using the same method.
2. Keyframe based pose estimation
As shown in fig. 3, the main idea of key-frame-based pose estimation is to select a series of key frames along the expected trajectory so that adjacent key frames share a sufficient number of matching points. The pose of the current frame is computed relative to the two most similar key frames and then iteratively transformed to obtain the pose under the final view. The key frame strategy comprises two parts: key frame selection and pose estimation.
2.1 Key frame selection
The key frame selection process is: given the image sequence {Cd} on the expected trajectory, select a series of key frames {Ck} such that the first key frame is the initial view, the last key frame is the final view, and the matching rate between two adjacent key frames is close to the threshold τ. Here the matching rate between a frame Ct and a key frame Cp is defined as the ratio of the number of feature points matched between the two views to the number of feature points extracted in Cp. The selection proceeds as follows:
(1) add C0 to {Ck}, set i = 1;
(2) consider the next frame Ct in {Cd};
(3) if the matching rate between Ct and Ck(i) is less than τ, add Ct to {Ck} and set i = i + 1;
(4) if Ct is not C*, return to step (2);
(5) if C* is not in {Ck}, add C* to {Ck}.
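Steps (1)–(5) above amount to a single greedy pass over the trajectory. A minimal sketch, in which `frames` and the `match_rate` callable are hypothetical stand-ins for the image sequence and the feature-matching front end:

```python
def select_keyframes(frames, match_rate, tau):
    """Greedy key-frame selection along a desired trajectory.

    frames:     ordered sequence {Cd}, first element C0, last element C*
    match_rate: match_rate(ct, cp) -> fraction of cp's features matched in ct
    tau:        matching-rate threshold
    """
    keys = [frames[0]]                      # (1) C0 is the first key frame
    for ct in frames[1:]:                   # (2) scan the remaining frames
        if match_rate(ct, keys[-1]) < tau:  # (3) rate dropped below tau
            keys.append(ct)
    if keys[-1] != frames[-1]:              # (5) ensure C* is a key frame
        keys.append(frames[-1])
    return keys

# toy simulation: frames are integers, overlap decays with index distance
keys = select_keyframes(list(range(10)),
                        lambda a, b: max(0.0, 1 - 0.2 * abs(a - b)),
                        tau=0.65)
print(keys)  # [0, 2, 4, 6, 8, 9]
```

Because a frame is promoted exactly when its rate against the last key frame first drops below τ, consecutive key frames end up with a matching rate just under the threshold, which is the "close to τ" property the selection aims for.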
2.2 Key-frame-based pose estimation
Given the current frame C, the idea of pose estimation is to compute the pose relative to the two most similar key frames and then iteratively transform it to the final view C*. The specific steps are:
(1) based on the matching rate, select the key frames most similar to the current frame, Ck(i-1) and Ck(i);
(2) compute the trifocal tensor T among Ck(i-1), C, Ck(i);
(3) obtain the pose information Xm = (xm,i, zm,i, θi) relative to Ck(i);
(4) for the key frames after Ck(i) in {Ck}, iteratively transform the pose information until C*.
As shown in fig. 4, the pose transformation between two adjacent key frames proceeds as follows:
(1) compute the trifocal tensor T among Ck(j-1), Ck(j), Ck(j+1);
(2) from T, compute the pose of Ck(j) relative to Ck(j+1) using the geometric model of Section 1;
(3) compose the current pose, expressed relative to Ck(j), with the pose of Ck(j) relative to Ck(j+1), yielding the pose relative to Ck(j+1);
(4) repeat for the next key frame pair until the pose is expressed relative to the final view C*.
In the pose transformation, the trifocal tensor information among the key frames can be computed offline in advance, improving time efficiency. Moreover, because adjacent key frames share more matching points, the trifocal tensor is computed more accurately, and the closer the robot is to the target, the smaller the accumulated error of the pose transformation. In addition, the unknown scale factor of the global pose obtained with the key frame strategy is the distance between the last two key frames, which remains unchanged during the robot's motion.
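The iterative transformation through the key frame chain is, in effect, a repeated composition of planar rigid-body poses. The patent's transformation formulas are not reproduced on this page, so the composition convention below (pose = (x, z, θ), rotate-then-translate) and all numeric values are assumptions for illustration:

```python
import math

def compose(pose_ab, pose_bc):
    """Pose of A in frame B composed with pose of B in frame C -> pose of A in C.

    Each pose is (x, z, theta): planar position plus heading.
    """
    xa, za, tha = pose_ab
    xb, zb, thb = pose_bc
    return (xb + xa * math.cos(thb) - za * math.sin(thb),
            zb + xa * math.sin(thb) + za * math.cos(thb),
            tha + thb)

# pose of the current frame relative to Ck(i), then the chain of
# key-frame-to-key-frame links Ck(j) in Ck(j+1), ending at C* (toy values)
pose = (1.0, 0.0, 0.0)
links = [(0.0, 2.0, math.pi / 2), (3.0, 0.0, 0.0)]
for link in links:
    pose = compose(pose, link)
print(pose)  # approximately (3.0, 3.0, pi/2)
```

Since each link is a fixed transform between consecutive key frames, the whole chain can be precomputed offline during teaching, matching the time-efficiency remark above.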
Claims (5)
1. A mobile robot pose estimation method based on the trifocal tensor and a key frame strategy, characterized in that it is used for the visual trajectory tracking task of a mobile robot; environment images shot by the mobile robot during a teaching process are used as the expected trajectory, key frames are selected from the expected trajectory, and a geometric model of three views is constructed based on the trifocal tensor so as to extract pose information; the three views are the current view C and any two other views C0, C*.
2. The method of claim 1, characterized in that the geometric model is constructed by computing the trifocal tensor between the current view C and the two other views C0, C*, thereby obtaining the pose information of the current view relative to C*, which comprises angle information and position information containing an unknown scale factor.
3. The method according to claim 2, characterized in that an image sequence containing the initial view and the final view is selected from the expected trajectory as key frames, adjacent key frames having a matching degree close to a preset threshold; in the pose estimation process, the trifocal tensor between the current view and the two best-matching key frames is calculated to obtain the pose relative to the best-matching key frame, which is then iteratively transformed to the final view to obtain continuous pose measurement in the global coordinate frame.
4. A visual servo tracking system employing the method according to any one of claims 1-3.
5. A mobile robot employing the method according to any one of claims 1-3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510445644.6A CN105096341B (en) | 2015-07-27 | 2015-07-27 | Mobile robot position and orientation estimation method based on trifocal tensor and key frame strategy |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105096341A true CN105096341A (en) | 2015-11-25 |
CN105096341B CN105096341B (en) | 2018-04-17 |
Family
ID=54576679
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510445644.6A Active CN105096341B (en) | 2015-07-27 | 2015-07-27 | Mobile robot position and orientation estimation method based on trifocal tensor and key frame strategy |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105096341B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105867428A (en) * | 2016-05-18 | 2016-08-17 | 南京航空航天大学 | Formation changing method for multi-robot system on basis of multiple movement models and multi-view geometry |
CN105955271A (en) * | 2016-05-18 | 2016-09-21 | 南京航空航天大学 | Multi-robot coordinated motion method based on multi-view geometry |
CN106708048A (en) * | 2016-12-22 | 2017-05-24 | 清华大学 | Ceiling image positioning method of robot and ceiling image positioning system thereof |
CN107300100A (en) * | 2017-05-22 | 2017-10-27 | 浙江大学 | A kind of tandem type mechanical arm vision guide approach method of Online CA D model-drivens |
CN107871327A (en) * | 2017-10-23 | 2018-04-03 | 武汉大学 | The monocular camera pose estimation of feature based dotted line and optimization method and system |
CN109211277A (en) * | 2018-10-31 | 2019-01-15 | 北京旷视科技有限公司 | The state of vision inertia odometer determines method, apparatus and electronic equipment |
CN109816687A (en) * | 2017-11-20 | 2019-05-28 | 天津工业大学 | The concurrent depth identification of wheeled mobile robot visual servo track following |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120063638A1 (en) * | 2010-09-10 | 2012-03-15 | Honda Motor Co., Ltd. | Egomotion using assorted features |
CN102682448A (en) * | 2012-03-14 | 2012-09-19 | 浙江大学 | Stereo vision rapid navigation and positioning method based on double trifocal tensors |
Non-Patent Citations (4)
Title |
---|
DEON SABATTA ET AL.: "Vision-based Path Following using the 1D Trifocal Tensor", 2013 IEEE International Conference on Robotics and Automation |
H. M. BECERRA ET AL.: "Omnidirectional visual control of mobile robots based on the 1D trifocal tensor", Robotics and Autonomous Systems |
LIU Yong: "Image-based self-localization and attitude measurement of a mobile platform", China Doctoral Dissertations Full-text Database (Basic Sciences) |
JIA Bingxi et al.: "Research progress of robot visual servoing: vision systems and control strategies", Acta Automatica Sinica |
Also Published As
Publication number | Publication date |
---|---|
CN105096341B (en) | 2018-04-17 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||