CN117631547B - Landing control method for quadruped robot under irregular weak gravitational field of small celestial body - Google Patents

Landing control method for quadruped robot under irregular weak gravitational field of small celestial body

Info

Publication number
CN117631547B
CN117631547B (application CN202410112248.0A)
Authority
CN
China
Prior art keywords
robot
training
reinforcement learning
model
celestial body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410112248.0A
Other languages
Chinese (zh)
Other versions
CN117631547A (en)
Inventor
齐骥
苏桓立
冯文煜
高海波
霍明英
于海涛
韩亮亮
邓宗全
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology
Priority to CN202410112248.0A
Publication of CN117631547A
Application granted
Publication of CN117631547B
Legal status: Active

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Landscapes

  • Feedback Control In General (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a landing control method for a quadruped robot under the irregular weak gravitational field of a small celestial body, belonging to the technical field of robots and comprising the following steps: step 1, establishing a gravitational acceleration model in a dynamics simulation engine according to the gravitational field information of the landing target celestial body as environment information; step 2, importing a robot model description file into the dynamics simulation engine; step 3, establishing a reinforcement learning environment based on Gym, and designing a controller neural network structure based on the proximal policy optimization (PPO) reinforcement learning algorithm; step 4, setting the training hyperparameters, training the controller with the established reinforcement learning environment and the designed controller neural network structure, and finally using the trained controller to control the robot to complete in-air attitude adjustment and landing. The invention provides a landing control method for a quadruped robot under the irregular weak gravitational field of a small celestial body that effectively avoids the difficulty of accurately establishing a robot dynamics model under the irregular gravitational field of a small celestial body.

Description

Landing control method for quadruped robot under irregular weak gravitational field of small celestial body
Technical Field
The invention belongs to the technical field of robots and particularly relates to a landing control method for a quadruped robot under the irregular weak gravitational field of a small celestial body.
Background
Following lunar and Mars exploration, the exploration of asteroids, comets and other small celestial bodies is a research hotspot in the current international deep-space exploration field, and is also an important development direction of subsequent deep-space exploration in China.
In order to obtain more accurate information on an asteroid's shape, size, mineral distribution, soil mechanics, surface topography, dust dynamics and so on, multi-point, long-term exploration of the asteroid surface is required. Because the gravitational acceleration at the surface of a small celestial body is very low, traditional schemes such as wheeled rovers are not suitable for such bodies. A hopping locomotion scheme, by contrast, has clear advantages in a weak gravitational field: during a jump the explorer can easily clear obstacles tens of times its own size, so hopping locomotion is well suited to the complex terrain of asteroid surfaces.
Among existing legged robots, hexapod and quadruped robots are the most mature. A hexapod robot has high load capacity, good stability and a simple control method, but its large self-weight makes it unsuitable for deep-space exploration. A quadruped robot has moderate load capacity and stability, and its nature-inspired jumping and attitude-adjustment behaviors are better suited to the weak gravitational field environment of a small celestial body.
During initial deployment and during the landing phase of each jump, every joint of the robot must be precisely controlled to achieve a soft landing without a secondary bounce. Traditional quadruped robot control is model-based, and an accurate dynamics model is difficult to build under the irregular gravitational field of a small celestial body; a model-free control algorithm based on reinforcement learning therefore offers better adaptability.
Disclosure of Invention
The invention aims to provide a landing control method for a quadruped robot under the irregular weak gravitational field of a small celestial body, solving the problems of high computational demand, dependence on an accurate model and poor generality of control methods in the prior art.
To achieve the above purpose, the invention provides a landing control method for a quadruped robot under the irregular weak gravitational field of a small celestial body, comprising the following steps:
Step 1, establishing a gravitational acceleration model in a dynamics simulation engine according to the gravitational field information of the landing target celestial body as environment information;
Step 2, importing a robot model description file into the dynamics simulation engine, the file containing the constraint information of the robot's multi-rigid-body model, its mass and inertia properties, and the motion limits and maximum motion speed of each joint;
Step 3, establishing a reinforcement learning environment based on Gym, and designing a controller neural network structure based on the proximal policy optimization (PPO) reinforcement learning algorithm; the reinforcement learning environment comprises an action space A, an observation space S, a reward function R, and training initialization settings;
Step 4, setting the training hyperparameters, training the controller with the established reinforcement learning environment and the designed controller neural network structure, and finally using the trained controller to control the robot to complete in-air attitude adjustment and landing.
Preferably, the action space A represents the instantaneous control quantity of each joint of the quadruped robot; the action space A consists of the position of each joint and the maximum output torque of each joint.
Preferably, the observation space S comprises the environment information and the robot state information in the current state; the environment information in the current state is the gravitational field information of the small celestial body, and the robot state information comprises the position of each joint, the angular velocity of each joint, the body attitude angle, the body velocity, the body angular velocity and the body height.
Preferably, the reward function R is formed by the sum of the products of the value mappings of the k task targets and their reward weights, i.e. R = Σ_{i=1}^{k} w_i·r_i; the task targets comprise the main-line task targets the controller is expected to achieve and branch-line task targets, the reward weight of a main-line task target is higher than that of a branch-line task target, and the magnitude of each target's value mapping, the difficulty of achieving the task and the expected time order of the tasks are considered when the reward weights are designed.
Preferably, the training initialization settings comprise the gravitational field parameters, the initial pose of the robot and the initial velocity.
Preferably, the controller neural network structure designed on the basis of the proximal policy optimization reinforcement learning algorithm adopts an Actor-Critic architecture, in which the action (Actor) network and the evaluation (Critic) network use the same neural network model: each is a multilayer perceptron with at least two hidden layers, each layer containing 128 neurons, with the hyperbolic tangent (Tanh) activation function.
Preferably, training is performed with the PyTorch reinforcement learning framework according to the set training hyperparameters, and the training process is recorded with the visualization tool TensorBoard.
Preferably, after training is finished, the trained control model is tested in randomly generated initialization environments; if the control performance does not meet expectations, the Q-value and loss curves recorded by TensorBoard during training are examined, and the reward function and hyperparameters are adjusted and training is repeated.
Therefore, the landing control method for a quadruped robot under the irregular weak gravitational field of a small celestial body has the following beneficial effects:
(1) A conventional probe requires the installation of a reaction flywheel as an attitude-adjustment device. The quadruped robot instead changes the position of its centre of mass by swinging its legs to generate an attitude-adjustment torque, so no additional attitude-adjustment device is needed and launch mass in deep-space exploration is saved;
(2) The model-free control algorithm based on reinforcement learning effectively avoids the difficulty of accurately establishing a robot dynamics model under the irregular gravitational field of a small celestial body. When the trained model is deployed, its computational demand on the controller is far lower than that of traditional model-based control algorithms such as MPC, and the control model can reliably complete tasks such as in-air attitude adjustment and soft landing of the quadruped robot.
The technical solution of the invention is described in further detail below with reference to the drawings and embodiments.
Drawings
FIG. 1 is a flowchart of a training iteration of the reinforcement learning model based on proximal policy optimization;
FIG. 2 is a schematic view of the quadruped robot of the invention adjusting its body attitude by swinging its legs;
FIG. 3 is a graph showing the body-attitude test performance of the quadruped robot under the control model trained by the invention.
Detailed Description
The following detailed description of embodiments of the invention, given with reference to the accompanying drawings, is not intended to limit the scope of the claimed invention but is merely representative of selected embodiments. All other embodiments obtained by those skilled in the art on the basis of these embodiments without inventive effort fall within the scope of the invention.
Referring to FIGS. 1-3, a landing control method for a quadruped robot under the irregular weak gravitational field of a small celestial body comprises the following steps:
Step 1, establishing a gravitational acceleration model in a dynamics simulation engine according to the gravitational field information of the landing target celestial body as environment information;
Step 2, importing a robot model description file into the dynamics simulation engine, the file containing the constraint information of the robot's multi-rigid-body model, its mass and inertia properties, and the motion limits and maximum motion speed of each joint;
Step 3, establishing a reinforcement learning environment based on the Gym reinforcement learning framework proposed by OpenAI, the environment comprising an action space, an observation space, a reward function and training initialization settings, as follows:
3-1: defining the action space A of the quadruped robot to represent the instantaneous control quantity of each joint of the robot; the action space A consists of the position of each joint and the maximum output torque of each joint (see the sketch after this list);
3-2: defining the observation space S of the quadruped robot to comprise the environment information and the robot state information in the current state; in the invention, the environment information is the gravitational field information of the small celestial body, and the robot state information comprises the position of each joint, the angular velocity of each joint, the body attitude angle, the body velocity, the body angular velocity and the body height;
3-3: designing the objective reward function R according to the task content; the reward function is the sum of the products of the value mappings of the k task targets and their reward weights, i.e. R = Σ_{i=1}^{k} w_i·r_i; the task targets comprise the main-line task targets the controller is expected to achieve and auxiliary branch-line task targets that assist training; in general the reward weight of a main-line target is higher than that of a branch-line target, and the magnitude of each target's value mapping, the difficulty of achieving the task and the expected time order of the tasks are considered when the reward weights are designed;
3-4: the training initialization settings comprise the gravitational field parameters, the initial pose of the robot and the initial velocity;
Step 4, designing the controller neural network structure based on the proximal policy optimization reinforcement learning algorithm. The invention adopts an Actor-Critic architecture, in which the action (Actor) network and the evaluation (Critic) network use the same neural network model: each is a multilayer perceptron with at least two hidden layers, each layer containing 128 neurons, with the hyperbolic tangent (Tanh) activation function;
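A minimal PyTorch sketch of such an Actor-Critic network is given below. Only the two 128-neuron Tanh hidden layers come from the text; the input/output dimensions and the state-independent Gaussian log-standard-deviation for the PPO policy are assumptions for illustration.

```python
import torch
import torch.nn as nn


def mlp(in_dim, out_dim, hidden=128):
    # Multilayer perceptron with two 128-neuron hidden layers and Tanh activations.
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.Tanh(),
        nn.Linear(hidden, hidden), nn.Tanh(),
        nn.Linear(hidden, out_dim),
    )


class ActorCritic(nn.Module):
    def __init__(self, obs_dim=37, act_dim=24):
        super().__init__()
        self.actor = mlp(obs_dim, act_dim)   # action (Actor) network
        self.critic = mlp(obs_dim, 1)        # evaluation (Critic) network
        # Assumed: learnable, state-independent log std for the Gaussian policy.
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs):
        mean = self.actor(obs)
        value = self.critic(obs)
        dist = torch.distributions.Normal(mean, self.log_std.exp())
        return dist, value
```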
Step 5, setting the training hyperparameters as shown in Table 1; the specific parameter values must be chosen according to the specific working conditions;
Table 1. Proximal policy optimization reinforcement learning algorithm training hyperparameters (table values not reproduced in this text)
Step 6, training with the PyTorch reinforcement learning framework according to the training hyperparameters set in Step 5, and recording the training process with TensorBoard.
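The patent does not name the specific PyTorch-based reinforcement learning library; assuming, for illustration only, that Stable-Baselines3 is used, PPO training with TensorBoard logging could look like the sketch below. The hyperparameter values are placeholders to be replaced with the Table 1 values chosen for the working condition.

```python
import torch
from stable_baselines3 import PPO

env = QuadrupedLandingEnv()                    # the environment sketched in Step 3

model = PPO(
    "MlpPolicy",
    env,
    learning_rate=3e-4,                        # placeholder values; see Table 1
    n_steps=2048,
    batch_size=64,
    gamma=0.99,
    clip_range=0.2,
    policy_kwargs=dict(activation_fn=torch.nn.Tanh,
                       net_arch=[128, 128]),   # two 128-neuron Tanh hidden layers
    tensorboard_log="./ppo_quadruped_tb/",     # curves viewable in TensorBoard
    verbose=1,
)
model.learn(total_timesteps=2_000_000)         # placeholder training budget
model.save("quadruped_landing_ppo")
```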
Step 7, after training, testing the trained control model in randomly generated initialization environments. If the control performance does not meet expectations, the Q-value and loss curves recorded in Step 6 are examined, the reward function and hyperparameters are adjusted, and training is repeated.
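A short evaluation sketch consistent with Step 7, reusing the assumed environment and saved model from the previous sketches (the number of test episodes and the retraining note are illustrative):

```python
import numpy as np
from stable_baselines3 import PPO

env = QuadrupedLandingEnv()                    # each reset() re-randomizes the start
model = PPO.load("quadruped_landing_ppo")

returns = []
for episode in range(20):                      # illustrative number of test episodes
    obs, done, ep_return = env.reset(), False, 0.0
    while not done:
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, done, _ = env.step(action)
        ep_return += reward
    returns.append(ep_return)

print(f"mean test return: {np.mean(returns):.3f}")
# If performance falls short, inspect the value/loss curves in TensorBoard
# (tensorboard --logdir ./ppo_quadruped_tb/), adjust the reward weights and
# hyperparameters, and retrain.
```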
Examples
A simulation working condition and a quadruped robot control model are built in the dynamics simulation engine, and the controller neural network model is trained through reinforcement learning; the iterative flow of reinforcement learning model training is shown in FIG. 1. The neural network model takes the robot's observation of the environment and its own state as input and outputs the current optimal control quantity to achieve the target effect. The invention mainly addresses attitude adjustment and landing control during robot deployment and jumping, specifically as follows:
In the in-air attitude-adjustment part, according to the law of conservation of angular momentum, the quadruped robot can change the position of its centre of mass by swinging its legs, generating a rotational moment that adjusts the body attitude angle, as shown in FIG. 2. In this way the quadruped robot can complete attitude control relying only on its leg-joint motors, without an additional reaction flywheel.
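The underlying relation can be sketched as follows, treating the body and the four legs as lumped rigid bodies; the symbols are introduced here purely for illustration and do not appear in the original text. During flight the external torque about the centre of mass is essentially zero, so the total angular momentum is conserved and swinging the legs forces a compensating rotation of the body:

```latex
% I_b      : body inertia,        \omega_b     : body angular velocity
% I_{l,i}  : inertia of leg i,    \omega_{l,i} : angular velocity of leg i
\[
  I_b\,\boldsymbol{\omega}_b + \sum_{i=1}^{4} I_{l,i}\,\boldsymbol{\omega}_{l,i}
  = \mathbf{L}_0 \approx \text{const}
  \quad\Longrightarrow\quad
  \boldsymbol{\omega}_b
  = I_b^{-1}\Bigl(\mathbf{L}_0 - \sum_{i=1}^{4} I_{l,i}\,\boldsymbol{\omega}_{l,i}\Bigr).
\]
```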
In the landing-control part, the kinetic energy of the robot at touchdown is absorbed by accurately controlling the position and output torque of each joint of the quadruped robot, so that the joint motors act as landing buffers.
Attitude adjustment and landing control are trained together as a whole, which makes the entire landing process smoother and more natural. In short, the robot does not adjust itself to a human-specified ideal landing attitude; instead, it automatically adjusts to the attitude most suitable for landing according to its own state and the environment information, and then lands. FIG. 3 shows the body-attitude test performance of the quadruped robot under the trained control model.
With the landing control method for a quadruped robot under the irregular weak gravitational field of a small celestial body, the quadruped robot can therefore complete attitude control using only its leg-joint motors, without an additional reaction flywheel; the kinetic energy of the robot at touchdown is absorbed by accurately controlling the position and output torque of each joint, so that the joint motors act as landing buffers. Complete attitude adjustment and landing are realized with a model-free control algorithm based on proximal policy optimization reinforcement learning. The invention effectively avoids the difficulty of accurately establishing a robot dynamics model under the irregular gravitational field of a small celestial body and reduces the computational demand on the controller.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solution of the invention. Although the invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that the technical solution of the invention may be modified or equivalently replaced without departing from the spirit and scope of the technical solution of the invention.

Claims (1)

1. A landing control method for a quadruped robot under the irregular weak gravitational field of a small celestial body, characterized by comprising the following steps:
Step 1, establishing a gravitational acceleration model in a dynamics simulation engine according to the gravitational field information of the landing target celestial body as environment information;
Step 2, importing a robot model description file into the dynamics simulation engine, the file containing the constraint information of the robot's multi-rigid-body model, its mass and inertia properties, and the motion limits and maximum motion speed of each joint;
Step 3, establishing a reinforcement learning environment based on Gym, and designing a controller neural network structure based on the proximal policy optimization reinforcement learning algorithm; the reinforcement learning environment comprises an action space A, an observation space S, a reward function R, and training initialization settings;
Step 4, setting the training hyperparameters, training the controller with the established reinforcement learning environment and the designed controller neural network structure, and finally using the trained controller to control the robot to complete in-air attitude adjustment and landing;
the action space A represents the instantaneous control quantity of each joint of the quadruped robot; the action space A consists of the position of each joint and the maximum output torque of each joint;
the observation space S comprises the environment information and the robot state information in the current state; the environment information in the current state is the gravitational field information of the small celestial body, and the robot state information comprises the position of each joint, the angular velocity of each joint, the body attitude angle, the body velocity, the body angular velocity and the body height;
the reward function R is formed by the sum of the products of the value mappings of the k task targets and their reward weights, i.e. R = Σ_{i=1}^{k} w_i·r_i; the task targets comprise main-line task targets and auxiliary-training branch-line task targets, the reward weight of a main-line task target is higher than that of a branch-line task target, and the magnitude of each target's value mapping, the difficulty of achieving the task and the expected time order of the tasks are considered when the reward weights are designed;
the training initialization settings comprise the gravitational field parameters, the initial pose of the robot and the initial velocity;
the controller neural network structure designed on the basis of the proximal policy optimization reinforcement learning algorithm adopts an Actor-Critic architecture, in which the action (Actor) network and the evaluation (Critic) network use the same neural network model: each is a multilayer perceptron with at least two hidden layers, each layer containing 128 neurons, with the hyperbolic tangent (Tanh) activation function;
training is performed with the PyTorch reinforcement learning framework according to the set training hyperparameters, and the training process is recorded with the visualization tool TensorBoard;
after training, the trained control model is tested in randomly generated initialization environments; if the control performance does not meet expectations, the Q-value and loss curves recorded by TensorBoard during training are examined, the Q value being the expected reward of taking an action in the current state, and the reward function and hyperparameters are adjusted and training is repeated.
CN202410112248.0A 2024-01-26 2024-01-26 Landing control method for quadruped robot under irregular weak gravitational field of small celestial body Active CN117631547B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410112248.0A CN117631547B (en) 2024-01-26 2024-01-26 Landing control method for quadruped robot under irregular weak gravitational field of small celestial body

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410112248.0A CN117631547B (en) 2024-01-26 2024-01-26 Landing control method for quadruped robot under irregular weak gravitational field of small celestial body

Publications (2)

Publication Number Publication Date
CN117631547A CN117631547A (en) 2024-03-01
CN117631547B true CN117631547B (en) 2024-04-26

Family

ID=90036049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410112248.0A Active CN117631547B (en) 2024-01-26 2024-01-26 Landing control method for quadruped robot under irregular weak gravitational field of small celestial body

Country Status (1)

Country Link
CN (1) CN117631547B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2384863A2 (en) * 2010-01-21 2011-11-09 Institutul de Mecanica Solidelor al Academiei Romane Method and device for dynamic control of a walking robot
CN102968124A (en) * 2012-11-29 2013-03-13 北京理工大学 Model uncertain boundary-based planet landing trajectory tracking robust control method
CN107065571A (en) * 2017-06-06 2017-08-18 上海航天控制技术研究所 A kind of objects outside Earth soft landing Guidance and control method based on machine learning algorithm
CN108537404A (en) * 2018-03-06 2018-09-14 中国人民解放军63920部队 A kind of objects outside Earth detection sample region workability appraisal procedure, medium and equipment
AU2018101292A4 (en) * 2018-09-05 2018-10-11 He, Zhenguang Mr A segmented head-body hexapod robot
CN111762339A (en) * 2020-06-30 2020-10-13 哈尔滨工业大学 Online machine learning control method for vehicle wheels of star probe vehicle
WO2022241808A1 (en) * 2021-05-19 2022-11-24 广州中国科学院先进技术研究所 Multi-robot trajectory planning method
CN113821057A (en) * 2021-10-14 2021-12-21 哈尔滨工业大学 Planetary soft landing control method and system based on reinforcement learning and storage medium
CN114859911A (en) * 2022-04-28 2022-08-05 云南红岭云科技股份有限公司 Four-legged robot path planning method based on DRL
CN116125815A (en) * 2023-02-23 2023-05-16 北京理工大学 Intelligent cooperative control method for small celestial body flexible lander
CN116400589A (en) * 2023-03-06 2023-07-07 北京理工大学 Intelligent control method for asteroid flexible detector of deep reinforcement learning SAC algorithm
CN116627041A (en) * 2023-07-19 2023-08-22 江西机电职业技术学院 Control method for motion of four-foot robot based on deep learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Integrated attitude and landing control for quadruped robots in asteroid landing mission scenarios using reinforcement learning; Qi Ji; Elsevier; 2022-11-30; pp. 599-610 *
Reinforcement learning-based stable jump control method for asteroid-exploration quadruped robots; Qi Ji; Elsevier; 2023-10-31; full text *
Research progress on small-celestial-body surface mobility technology; Yu Zhengshi; Zhu Shengying; Cui Pingyuan; Liu Yanjie; Journal of Deep Space Exploration; 2017-08-15 (04); full text *
A review of orbital dynamics in the vicinity of small celestial bodies; Yu Yang; Baoyin Hexi; Journal of Deep Space Exploration; 2014-06-15 (02); full text *

Also Published As

Publication number Publication date
CN117631547A (en) 2024-03-01

Similar Documents

Publication Publication Date Title
Sayyad et al. Single-legged hopping robotics research—A review
CN107598897A (en) A kind of method of humanoid robot gait's planning based on human body teaching
CN113821045B (en) Reinforced learning action generating system of leg-foot robot
Chen et al. A trot and flying trot control method for quadruped robot based on optimal foot force distribution
CN110244714A (en) Robot list leg swing phase double-closed-loop control method based on sliding formwork control
Liu et al. Improved RBF network torque control in flexible manipulator actuated by PMAs
Zhang et al. Physics-driven locomotion planning method for a planar closed-loop terrain-adaptive robot
Shao et al. Recent advances on gait control strategies for hydraulic quadruped robot
CN117631547B (en) Landing control method for quadruped robot under irregular weak gravitational field of small celestial body
Luo et al. Prismatic Quasi-Direct-Drives for dynamic quadruped locomotion with high payload capacity
Dadashzadeh et al. Slip-based control of bipedal walking based on two-level control strategy
Dong et al. On-line gait adjustment for humanoid robot robust walking based on divergence component of motion
Masuda et al. Sim-to-real transfer of compliant bipedal locomotion on torque sensor-less gear-driven humanoid
CN115202378A (en) Dynamic walking control method of humanoid robot
Ji et al. Reinforcement learning for collaborative quadrupedal manipulation of a payload over challenging terrain
Nguyen et al. Gait-behavior optimization considering arm swing and toe mechanism for biped walking on rough road
Wang et al. Normalized neural network for energy efficient bipedal walking using nonlinear inverted pendulum model
Xie et al. Online whole-stage gait planning method for biped robots based on improved Variable Spring-Loaded Inverted Pendulum with Finite-sized Foot (VSLIP-FF) model
CN114700955A (en) Whole body motion planning and control method for two-wheeled leg-arm robot
Wang et al. Nao humanoid robot gait planning based on the linear inverted pendulum
Vatavuk et al. Precise jump planning using centroidal dynamics based bilevel optimization
Chen et al. Realization of complex terrain and disturbance adaptation for hydraulic quadruped robot under flying trot gait
Kouchaki et al. Balance control of a humanoid robot using deep reinforcement learning
Kobayashi et al. Optimal use of arm-swing for bipedal walking control
Yu et al. Research on Disturbance of Upright Balance of Biped Humanoid Robot Based on AWPSO-LQR

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant