CN117631547A - Landing control method for quadruped robot under irregular weak gravitational field of small celestial body - Google Patents
- Publication number
- CN117631547A (application CN202410112248.0A)
- Authority
- CN
- China
- Prior art keywords
- robot
- training
- reinforcement learning
- celestial body
- landing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/04—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
- G05B13/042—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T90/00—Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation
Abstract
The invention discloses a landing control method for a quadruped robot under the irregular weak gravitational field of a small celestial body, belonging to the technical field of robots and comprising the following steps: step 1, establishing a gravitational acceleration model in a dynamics simulation engine, using the gravitational field information of the landing target celestial body as environment information; step 2, importing a robot model description file into the dynamics simulation engine; step 3, establishing a reinforcement learning environment based on gym and designing a controller neural network structure based on the proximal policy optimization (PPO) reinforcement learning algorithm; and step 4, setting training hyperparameters, training the controller in the established reinforcement learning environment with the designed network structure, and finally using the trained controller to control the robot through in-air attitude adjustment and landing. The method effectively avoids the difficulty of accurately establishing a robot dynamics model under the irregular gravitational field of a small celestial body.
Description
Technical Field
The invention belongs to the technical field of robots, and particularly relates to a landing control method for a quadruped robot under the irregular weak gravitational field of a small celestial body.
Background
Following lunar and Mars exploration, the exploration of asteroids, comets, and other small celestial bodies is a research hotspot in international deep-space exploration and an important direction for China's subsequent deep-space missions.
To understand an asteroid's shape, size, mineral distribution, soil mechanics, surface topography, dust dynamics, and other properties more accurately, multi-point, long-term exploration of its surface is required. Because the gravitational acceleration at the surface of a small celestial body is very small, traditional schemes such as wheeled rovers are unsuitable there. A hopping locomotion scheme, by contrast, has clear advantages in a weak gravity field: during a jump the probe can easily clear obstacles tens of times its own size, making it well suited to the complex terrain of an asteroid surface.
Among existing legged robots, hexapod and quadruped robots are relatively mature. The hexapod robot offers high load capacity, good stability, and a simple control method, but its large self-weight makes it unsuitable for deep-space exploration. The quadruped robot has moderate load capacity and stability, and its nature-inspired jumping and attitude-adjustment behaviors are better suited to the weak gravitational field of a small celestial body.
During initial deployment and the landing phase of each jump, every joint of the robot must be precisely controlled to achieve a soft landing without secondary bouncing. Traditional quadruped robot control is model-based, and an accurate dynamic model is difficult to build in the irregular gravitational field of a small celestial body; a model-free control algorithm based on reinforcement learning adapts better to this environment.
Disclosure of Invention
The invention aims to provide a landing control method for a quadruped robot under the irregular weak gravitational field of a small celestial body, solving the prior art's problems of high computing-power demand, dependence on an accurate model, and poor generality of the control method.
To this end, the invention provides a landing control method for a quadruped robot under the irregular weak gravitational field of a small celestial body, comprising the following steps:
step 1, establishing a gravitational acceleration model in a dynamics simulation engine, using the gravitational field information of the landing target celestial body as environment information;
step 2, importing a robot model description file into the dynamics simulation engine, the file containing the constraint information of the robot's multi-rigid-body model, its mass and inertia properties, and the motion limits and maximum motion speed of each joint;
step 3, establishing a reinforcement learning environment based on gym and designing a controller neural network structure based on the proximal policy optimization (PPO) reinforcement learning algorithm; the reinforcement learning environment comprises an action space A, an observation space S, a reward function R, and training initialization settings;
and step 4, setting training hyperparameters, training the controller in the established reinforcement learning environment with the designed controller neural network structure, and finally using the trained controller to control the robot through in-air attitude adjustment and landing.
Preferably, the action space A represents the instantaneous control quantities of each joint of the quadruped robot; the action space A consists of the target position of each joint and the maximum output torque of each joint.
Preferably, the observation space S comprises the environment information and the robot state information in the current state; the environment information in the current state is the gravitational field information of the small celestial body, and the robot state information comprises the joint positions, the joint angular velocities, the body attitude angles, the body velocity, the body angular velocity, and the body height.
Preferably, the reward function R is formed as the sum of the products of the numerical mappings r_i of the k task objectives and their reward weights w_i, i.e. R = sum_{i=1}^{k} w_i * r_i. The task objectives comprise the main-line objectives the controller is expected to achieve and auxiliary branch-line objectives; the reward weight of a main-line objective is higher than that of a branch-line objective, and the weight design takes into account the magnitude of each objective's numerical mapping, the difficulty of achieving it, and the expected temporal order of the tasks.
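The weighted-sum structure of the reward can be sketched as follows; the task objectives, their numerical mappings, and the weights below are purely illustrative assumptions, not the actual reward terms of the method.

```python
# Illustrative weighted-sum reward R = sum_i w_i * r_i.  The specific
# objectives and weights are hypothetical examples only.
def compute_reward(state):
    # Each entry: (numerical mapping r_i of a task objective, reward weight w_i).
    # Main-line objectives (level attitude, soft touchdown) get higher
    # weights than branch-line objectives (energy economy) that assist training.
    terms = [
        (-abs(state["body_pitch"]), 2.0),    # main line: level body attitude
        (-abs(state["vertical_vel"]), 2.0),  # main line: soft touchdown
        (-state["joint_torque_sq"], 0.1),    # branch line: energy economy
    ]
    return sum(w * r for r, w in terms)
```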
Preferably, the training initialization settings comprise the gravitational field parameters and the robot's initial pose and initial velocity.
Preferably, the controller neural network structure designed for the proximal policy optimization reinforcement learning algorithm adopts an Actor-Critic architecture, in which the action (actor) network and the evaluation (critic) network use the same neural network model: a multilayer perceptron with at least two hidden layers of 128 neurons each, using the hyperbolic tangent (Tanh) activation function.
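A minimal PyTorch sketch of the described network follows; the observation and action dimensions are illustrative assumptions, not values fixed by the method.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Multilayer perceptron shared in form by the actor and the critic:
    two hidden layers of 128 neurons with Tanh activations."""
    def __init__(self, obs_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.Tanh(),
            nn.Linear(128, 128), nn.Tanh(),
            nn.Linear(128, out_dim),
        )

    def forward(self, x):
        return self.net(x)

# The actor outputs one control per joint quantity (e.g. 12 joint targets);
# the critic outputs a scalar state value.  obs_dim = 37 is illustrative.
actor = MLP(obs_dim=37, out_dim=12)
critic = MLP(obs_dim=37, out_dim=1)
```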
Preferably, training is performed with the PyTorch library according to the set training hyperparameters, and the training process is recorded with the visualization tool TensorBoard.
Preferably, after training, the trained control model is tested in randomly generated initialization environments; if the control performance falls short of expectations, the Q-value and loss curves recorded by TensorBoard during training are examined, the reward function and hyperparameters are adjusted, and the model is retrained.
The landing control method for a quadruped robot under the irregular weak gravitational field of a small celestial body therefore has the following beneficial effects:
(1) A conventional probe requires a reaction flywheel as an attitude-adjustment device. Here the quadruped robot shifts its center of mass by swinging its legs to generate attitude-adjustment torque, so no additional attitude-adjustment device is needed, saving launch mass in deep-space missions;
(2) The model-free control algorithm based on reinforcement learning avoids the difficulty of accurately establishing a robot dynamics model under the irregular gravitational field of a small celestial body. When the trained model is deployed, its demand on controller computing power is far lower than that of traditional model-based algorithms such as MPC, and the control model can reliably complete the quadruped robot's in-air attitude adjustment, soft landing, and related tasks.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
FIG. 1 is a flowchart of a training iteration of the reinforcement learning model based on proximal policy optimization;
FIG. 2 is a schematic view of the quadruped robot of the invention adjusting its body attitude by swinging its legs;
FIG. 3 shows the body-attitude test performance of the quadruped robot regulated by the control model trained according to the invention.
Detailed Description
The following detailed description of embodiments of the invention, with reference to the accompanying drawings, is not intended to limit the claimed scope but is merely representative of selected embodiments. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
Referring to FIGS. 1-3, a landing control method for a quadruped robot under the irregular weak gravitational field of a small celestial body includes the following steps:
step 1, establishing a gravitational acceleration model in a dynamics simulation engine, using the gravitational field information of the landing target celestial body as environment information;
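The construction of the gravitational acceleration model is not prescribed in detail here; a minimal sketch, assuming a mascon (multiple point-mass) approximation of the irregular field, is:

```python
import numpy as np

# Minimal mascon approximation of an irregular weak gravity field: the
# small celestial body is represented by a few point masses ("mascons");
# the acceleration at a point is the vector sum of their inverse-square
# attractions.  The masses and positions below are illustrative only and
# are not data for any real body.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_accel(pos, mascons):
    """pos: (3,) position [m]; mascons: list of (mass [kg], (3,) position [m])."""
    a = np.zeros(3)
    for m, p in mascons:
        r = np.asarray(p) - pos          # vector from pos toward the mascon
        d = np.linalg.norm(r)
        a += G * m * r / d**3            # inverse-square attraction
    return a

mascons = [(4e10, [0.0, 0.0, 0.0]), (2e10, [150.0, 0.0, 0.0])]
g = gravity_accel(np.array([0.0, 0.0, 300.0]), mascons)
```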
step 2, importing a robot model description file into the dynamics simulation engine, the file containing the constraint information of the robot's multi-rigid-body model, its mass and inertia properties, and the motion limits and maximum motion speed of each joint;
step 3, establishing a reinforcement learning environment based on gym, the reinforcement learning framework proposed by OpenAI; the environment comprises an action space, an observation space, a reward function, and training initialization settings, as follows:
3-1: defining a motion space of a quadruped robotTo represent the instantaneous control quantity of each joint of the robot. Action spaceFrom the positions of the joints->Maximum output moment of each joint->Composition;
3-2: definition of the four-legged robot observation spaceThe robot control method comprises the steps of including environment information and robot state information in the current state. In the invention, the environment information is the information of the gravitational field of the celestial body, and the machineThe human state information comprises joint positions, joint rotation angular velocities, body attitude angles, body velocities, body rotation angular velocities and body heights;
3-3: designing objective rewarding function according to task contentThe rewarding function is composed of the sum of the products of the numerical mapping of k task targets and the rewarding weights; the task targets comprise main line task targets and auxiliary training branch line task targets which are expected to be achieved by the controller, and the reward weight of the main line task targets is higher than that of the branch line task targets in general, and the size of numerical mapping of the task targets, the difficulty of achieving the tasks and the expected time sequence of the tasks are considered when the reward weight is designed;
3-4: the training initialization setting comprises gravitational field parameters, initial pose of the robot and initial speed;
step 4, design the controller neural network structure based on the proximal policy optimization reinforcement learning algorithm. The invention adopts an Actor-Critic architecture in which the action network and the evaluation network use the same neural network model: a multilayer perceptron with at least two hidden layers of 128 neurons each, using the hyperbolic tangent (Tanh) activation function;
step 5, set the training hyperparameters as shown in Table 1; the specific values must be chosen according to the specific working conditions;
Table 1. Training hyperparameters of the proximal policy optimization reinforcement learning algorithm
step 6, train with PyTorch according to the training hyperparameters set in step 5, recording the training process with TensorBoard.
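The core of the proximal policy optimization update used during training is the clipped surrogate objective; a minimal PyTorch sketch of that loss (not the actual training code of the method) is:

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective at the heart of proximal policy
    optimization: the update is 'proximal' because the probability ratio
    between the new and old policies is clipped to [1 - eps, 1 + eps],
    preventing destructively large policy steps."""
    ratio = torch.exp(logp_new - logp_old)       # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    # Negated because optimizers minimize; the scalar returned here is what
    # would be logged to TensorBoard as the policy loss each iteration.
    return -torch.min(unclipped, clipped).mean()
```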
step 7, after training, test the trained control model in randomly generated initialization environments. If the control performance falls short of expectations, examine the Q-value and loss curves recorded by TensorBoard in step 6, adjust the reward function and hyperparameters, and retrain.
Examples
A simulation scenario and the quadruped robot control model are built in the dynamics simulation engine, and the controller's neural network model is trained by reinforcement learning; the iterative training flow is shown in FIG. 1. The trained network takes the robot's observation of the environment and its own state as input and outputs the currently optimal control quantities to achieve the target behavior. The invention mainly addresses attitude adjustment and landing control during robot deployment and jumping, as follows:
In the in-air attitude-adjustment part, by conservation of angular momentum, the quadruped robot can shift the position of its center of mass by swinging its legs, generating a rotational moment that adjusts the body attitude angle, as shown in FIG. 2. The quadruped robot thus completes attitude control relying only on its leg-joint motors, with no additional reaction flywheel.
In the landing-control part, the positions and output torques of each joint of the quadruped robot are precisely controlled to absorb the robot's kinetic energy at touchdown, so that the joint motors act as a landing buffer.
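The joint-level control law is not spelled out here; one common sketch, assuming a PD position loop saturated at the policy-chosen torque limit, shows how a (target position, maximum torque) action pair can yield a compliant, energy-absorbing joint. The gains are illustrative.

```python
# Hypothetical mapping from an action pair (q_target, tau_max) to a joint
# motor torque: a PD tracking loop whose output is saturated at the
# policy-chosen torque limit, so the joint yields under impact and
# dissipates landing kinetic energy.  kp and kd are illustrative gains.
def joint_torque(q, dq, q_target, tau_max, kp=40.0, kd=2.0):
    tau = kp * (q_target - q) - kd * dq      # PD tracking torque
    return max(-tau_max, min(tau_max, tau))  # saturate at the action's limit
```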
Attitude adjustment and landing control are trained jointly as a whole, making the entire landing process smoother and more natural. In short, the robot does not adjust to a human-prescribed ideal landing attitude; instead, it automatically adjusts to the attitude best suited for landing according to its own state and the environment information, and then lands. FIG. 3 shows the body-attitude test performance of the quadruped robot regulated by the trained control model.
With this landing control method for a quadruped robot under the irregular weak gravitational field of a small celestial body, the robot completes attitude control relying only on its own leg-joint motors, with no additional reaction flywheel; by precisely controlling the positions and output torques of each joint, the robot's kinetic energy at touchdown is absorbed and the joint motors act as a landing buffer. Complete attitude adjustment and landing are achieved with a model-free control algorithm based on proximal policy optimization reinforcement learning. The invention avoids the difficulty of accurately establishing a robot dynamics model under the irregular gravitational field of a small celestial body and reduces the computing-power demand on the controller.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solution of the invention. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art will understand that the technical solution may be modified or equivalently substituted without departing from its spirit and scope.
Claims (3)
1. A landing control method for a quadruped robot under an irregular weak gravitational field of a small celestial body, characterized by comprising the following steps:
step 1, establishing a gravitational acceleration model in a dynamics simulation engine, using the gravitational field information of the landing target celestial body as environment information;
step 2, importing a robot model description file into the dynamics simulation engine, the file containing the constraint information of the robot's multi-rigid-body model, its mass and inertia properties, and the motion limits and maximum motion speed of each joint;
step 3, establishing a reinforcement learning environment based on gym and designing a controller neural network structure based on the proximal policy optimization (PPO) reinforcement learning algorithm; the reinforcement learning environment comprises an action space A, an observation space S, a reward function R, and training initialization settings;
step 4, setting training hyperparameters, training the controller in the established reinforcement learning environment with the designed controller neural network structure, and finally using the trained controller to control the robot through in-air attitude adjustment and landing;
wherein the action space A represents the instantaneous control quantities of each joint of the quadruped robot; the action space A consists of the target position of each joint and the maximum output torque of each joint;
the observation space S comprises the environment information and the robot state information in the current state; the environment information in the current state is the gravitational field information of the small celestial body, and the robot state information comprises the joint positions, the joint angular velocities, the body attitude angles, the body velocity, the body angular velocity, and the body height;
the reward function R is formed as the sum of the products of the numerical mappings of the k task objectives and their reward weights; the task objectives comprise main-line objectives and branch-line objectives that assist training, the reward weight of a main-line objective is higher than that of a branch-line objective, and the weight design takes into account the magnitude of each objective's numerical mapping, the difficulty of achieving it, and the expected temporal order of the tasks;
the training initialization settings comprise the gravitational field parameters and the robot's initial pose and initial velocity;
the controller neural network structure designed for the proximal policy optimization reinforcement learning algorithm adopts an Actor-Critic architecture, in which the action network and the evaluation network use the same neural network model: a multilayer perceptron with at least two hidden layers of 128 neurons each, using the hyperbolic tangent (Tanh) activation function.
2. The landing control method for a quadruped robot under an irregular weak gravitational field of a small celestial body according to claim 1, characterized in that: training is performed with the PyTorch library according to the set training hyperparameters, and the training process is recorded with the visualization tool TensorBoard.
3. The landing control method for a quadruped robot under an irregular weak gravitational field of a small celestial body according to claim 2, characterized in that: after training, the trained control model is tested in randomly generated initialization environments; if the control performance falls short of expectations, the Q-value and loss curves recorded by TensorBoard during training are examined (the Q-value being the expected reward of taking an action in the current state), and the reward function and hyperparameters are adjusted before retraining.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410112248.0A CN117631547B (en) | 2024-01-26 | 2024-01-26 | Landing control method for quadruped robot under irregular weak gravitational field of small celestial body |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410112248.0A CN117631547B (en) | 2024-01-26 | 2024-01-26 | Landing control method for quadruped robot under irregular weak gravitational field of small celestial body |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117631547A true CN117631547A (en) | 2024-03-01 |
CN117631547B CN117631547B (en) | 2024-04-26 |
Family
ID=90036049
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410112248.0A Active CN117631547B (en) | 2024-01-26 | 2024-01-26 | Landing control method for quadruped robot under irregular weak gravitational field of small celestial body |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117631547B (en) |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RO125970B1 (en) * | 2010-01-21 | 2019-03-29 | Institutul De Mecanica Solidelor Al Academiei Române | Method and device for the dynamic control of a walking robot |
CN102968124B (en) * | 2012-11-29 | 2015-04-15 | 北京理工大学 | Model uncertain boundary-based planet landing trajectory tracking robust control method |
CN107065571A (en) * | 2017-06-06 | 2017-08-18 | 上海航天控制技术研究所 | A kind of objects outside Earth soft landing Guidance and control method based on machine learning algorithm |
CN108537404B (en) * | 2018-03-06 | 2021-10-22 | 中国人民解放军63920部队 | Extraterrestrial celestial body detection sampling area collectability assessment method, medium and equipment |
AU2018101292A4 (en) * | 2018-09-05 | 2018-10-11 | He, Zhenguang Mr | A segmented head-body hexapod robot |
CN111762339B (en) * | 2020-06-30 | 2022-01-11 | 哈尔滨工业大学 | Online machine learning control method for vehicle wheels of star probe vehicle |
CN113326872A (en) * | 2021-05-19 | 2021-08-31 | 广州中国科学院先进技术研究所 | Multi-robot trajectory planning method |
CN113821057B (en) * | 2021-10-14 | 2023-05-30 | 哈尔滨工业大学 | Planetary soft landing control method and system based on reinforcement learning and storage medium |
CN114859911A (en) * | 2022-04-28 | 2022-08-05 | 云南红岭云科技股份有限公司 | Four-legged robot path planning method based on DRL |
CN116125815A (en) * | 2023-02-23 | 2023-05-16 | 北京理工大学 | Intelligent cooperative control method for small celestial body flexible lander |
CN116400589A (en) * | 2023-03-06 | 2023-07-07 | 北京理工大学 | Intelligent control method for asteroid flexible detector of deep reinforcement learning SAC algorithm |
CN116627041B (en) * | 2023-07-19 | 2023-09-29 | 江西机电职业技术学院 | Control method for motion of four-foot robot based on deep learning |
-
2024
- 2024-01-26 CN CN202410112248.0A patent/CN117631547B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN117631547B (en) | 2024-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113821045B (en) | Reinforced learning action generating system of leg-foot robot | |
CN107598897A (en) | A kind of method of humanoid robot gait's planning based on human body teaching | |
Chen et al. | A trot and flying trot control method for quadruped robot based on optimal foot force distribution | |
CN108009680A (en) | Humanoid robot gait's planing method based on multi-objective particle swarm algorithm | |
CN113190029A (en) | Adaptive gait autonomous generation method of quadruped robot based on deep reinforcement learning | |
CN115202378A (en) | Dynamic walking control method of humanoid robot | |
Shao et al. | Recent advances on gait control strategies for hydraulic quadruped robot | |
Zhang et al. | Physics-driven locomotion planning method for a planar closed-loop terrain-adaptive robot | |
CN113568422B (en) | Four-foot robot control method based on model predictive control optimization reinforcement learning | |
Luo et al. | Prismatic Quasi-Direct-Drives for dynamic quadruped locomotion with high payload capacity | |
Sun et al. | Dynamically stable walk control of biped humanoid on uneven and inclined terrain | |
CN117631547B (en) | Landing control method for quadruped robot under irregular weak gravitational field of small celestial body | |
Dong et al. | On-line gait adjustment for humanoid robot robust walking based on divergence component of motion | |
Wu et al. | Highly robust running of articulated bipeds in unobserved terrain | |
Masuda et al. | Sim-to-real transfer of compliant bipedal locomotion on torque sensor-less gear-driven humanoid | |
Wang et al. | Normalized neural network for energy efficient bipedal walking using nonlinear inverted pendulum model | |
Xie et al. | Online whole-stage gait planning method for biped robots based on improved Variable Spring-Loaded Inverted Pendulum with Finite-sized Foot (VSLIP-FF) model | |
Nguyen et al. | Gait-behavior optimization considering arm swing and toe mechanism for biped walking on rough road | |
Vatavuk et al. | Precise jump planning using centroidal dynamics based bilevel optimization | |
CN114700955A (en) | Whole body motion planning and control method for two-wheeled leg-arm robot | |
Che et al. | Kinematics analysis of leg configuration of an ostrich bionic biped robot | |
Wang et al. | Nao humanoid robot gait planning based on the linear inverted pendulum | |
Zhang et al. | Rigid-flexible coupling dynamic modeling and performance analysis of a bioinspired jumping robot with a six-bar leg mechanism | |
Cao et al. | Mechanism design and dynamic switching modal control of the wheel-legged separation quadruped robot | |
Yu et al. | Research on Disturbance of Upright Balance of Biped Humanoid Robot Based on AWPSO-LQR |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||