CN112405568A - Humanoid robot falling prediction method - Google Patents

Humanoid robot falling prediction method

Info

Publication number
CN112405568A
Authority
CN
China
Prior art keywords
humanoid robot
falling
data
robot
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011122471.1A
Other languages
Chinese (zh)
Inventor
陈启军
黄振港
刘成菊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University
Priority to CN202011122471.1A
Publication of CN112405568A
Legal status: Pending

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 Controls for manipulators
    • B25J13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/087 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices for sensing other physical parameters, e.g. electrical or chemical properties
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/0075 Means for protecting the manipulator from its environment or vice versa

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a humanoid robot falling prediction method, which comprises the following steps: within a set prediction interval time, sequentially acquiring, according to the number of samples, the real falling result, the inertial sensor data and the centroid position data corresponding to the humanoid robot, and calculating the zero moment point (ZMP) of the humanoid robot in the x and y directions; then performing data preprocessing to obtain a robot motion characteristic matrix containing 8-dimensional data; repeating the above steps to obtain the real falling results and robot motion characteristic matrices within a plurality of prediction intervals; sequentially taking the robot motion characteristic matrices as the input of a neural network model and training the neural network model together with the corresponding real falling results to obtain a humanoid robot falling prediction model; and using the falling prediction model for fall prediction. Compared with the prior art, the method can accurately predict whether the robot will fall and the falling direction, and can be well adapted to fall prediction for different robots.

Description

Humanoid robot falling prediction method
Technical Field
The invention relates to the technical field of robot fall detection, in particular to a humanoid robot fall prediction method.
Background
The humanoid robot moves flexibly and can be applied to a wide variety of terrains and scenarios; its shape and structure are similar to those of human beings, so it readily creates a sense of familiarity when interacting with humans, adapts easily to existing human environments, and can replace humans in many mechanical or dangerous tasks, giving it huge application potential and research value. The basis for a humanoid robot to complete various tasks is that its bipedal walking must be kept stable so that the robot does not fall; therefore, falls of the humanoid robot need to be predicted so that corresponding decisions and actions can be made according to the prediction results, such as adopting an emergency gait or a slowing-down action, effectively preventing the robot from falling.
Currently, the commonly used humanoid robot fall prediction methods are divided into two categories:
(1) Modeling based on robot dynamics: according to the body structure of the robot, dynamics analysis and modeling are carried out; the current motion state of the robot, such as the centroid position, velocity and angular velocity, is then obtained from the sensor data, the motion of the robot at subsequent moments is inferred, and whether the robot will fall is predicted. However, the humanoid robot is a highly complex, time-delayed and nonlinear system, and simplification and linearization are inevitably required when the model is established, so the errors introduced by the linearly simplified model inevitably lead to prediction errors. In addition, different robots have different body structures, characteristics and gait generation methods, so the prediction performance of the same model differs greatly between robots.
(2) Machine learning with sensor data: sensor data, such as from pressure sensors and inertial sensors, are input into a machine learning model, such as a regression or clustering model, for training and testing, and the model generally predicts only whether the robot falls or does not fall. However, this method is not robust to distortion and errors in the sensor data and is prone to prediction errors; in addition, it can generally only predict whether the robot falls, and the error rate in predicting the falling direction is high.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a humanoid robot falling prediction method that accurately predicts whether the humanoid robot will fall and in which direction it will fall.
The purpose of the invention can be realized by the following technical scheme: a humanoid robot fall prediction method comprises the following steps:
s1, setting the prediction interval time and the sampling times in the single prediction interval time;
s2, sequentially and respectively acquiring a real falling result, inertial sensor acquisition data and centroid position data corresponding to the humanoid robot according to the sampling times in the prediction interval time, and calculating to obtain a zero moment point ZMP of the humanoid robot in the x and y directions;
s3, preprocessing the data acquired in the step S2 and the data obtained through calculation to obtain a robot motion characteristic matrix containing 8-dimensional data;
s4, repeating the steps S2 and S3 to obtain real falling results corresponding to the humanoid robot in a plurality of prediction interval time and a robot motion characteristic matrix;
s5, sequentially taking the motion characteristic matrixes of the robot in the step S4 as the input of a neural network model, and training the neural network model by combining the corresponding real falling result to obtain a humanoid robot falling prediction model;
s6, acquiring data collected by an inertial sensor and centroid position data corresponding to the humanoid robot within the current prediction interval time, calculating to obtain a zero moment point ZMP of the humanoid robot in the x and y directions, and combining the data preprocessing mode in the step S3 to obtain a robot motion characteristic matrix corresponding to the humanoid robot within the current prediction interval time;
and S7, inputting the robot motion characteristic matrix corresponding to the humanoid robot in the current prediction interval time into the humanoid robot falling prediction model, and outputting to obtain a current falling prediction result.
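
For orientation, the sketch below strings steps S1 to S4 together in Python. It is a minimal illustration only: the interval and sample count are example values, and the data-access helpers (read_imu, read_centroid, true_fall_label) are hypothetical placeholders standing in for the robot's actual interfaces, not part of the invention.

```python
# High-level sketch of steps S1-S4; all data-access functions are stubs.
import numpy as np

T, N_SAMPLES = 0.2, 20          # S1: prediction interval and samples per interval (example values)

def read_imu():            return np.random.rand(6)          # stub: acc_x, acc_y, w_x, w_y, theta_x, theta_y
def read_centroid():       return np.array([0.0, 0.0, 0.45]) # stub: centroid (c_x, c_y, c_z)
def true_fall_label():     return 0                          # stub: observed outcome, 0 = non-falling

def collect_window():
    """S2: sample IMU and centroid data n times and derive the ZMP in x and y."""
    rows = []
    for _ in range(N_SAMPLES):
        acc_x, acc_y, w_x, w_y, th_x, th_y = read_imu()
        c_x, c_y, c_z = read_centroid()
        p_x, p_y = c_x - c_z * acc_x / 9.81, c_y - c_z * acc_y / 9.81
        rows.append([acc_x, acc_y, w_x, w_y, th_x, th_y, p_x, p_y])
    return np.array(rows)                                     # (n, 8) raw window

def preprocess(window):
    """S3: per-column min-max normalization over the window."""
    lo, hi = window.min(axis=0), window.max(axis=0)
    return (window - lo) / np.where(hi > lo, hi - lo, 1.0)

# S4: collect many (feature matrix, true label) pairs for training; S5-S7 then
# train a neural network on this dataset and run it on fresh windows.
dataset = [(preprocess(collect_window()), true_fall_label()) for _ in range(100)]
```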
Further, the prediction interval time is set between 0.1s and 0.3 s.
Further, the number of samples in the single prediction interval is not less than 15.
Further, the step S2 specifically includes the following steps:
s21, respectively acquiring a real falling result, inertial sensor acquisition data and centroid position data corresponding to the humanoid robot according to the sampling times within the prediction interval time;
and S22, calculating to obtain a zero moment point ZMP of the humanoid robot in the x and y directions according to the data acquired by the inertial sensor and the mass center position data.
Further, the data collected by the inertial sensor comprise the accelerations of the humanoid robot in the x and y directions, the angular velocities about the x and y directions, and the trunk attitude angles about the x and y directions.
Further, the data of the centroid position of the humanoid robot is specifically obtained by calculating the centroid position of each connecting rod of the humanoid robot:
c = (c_x, c_y, c_z)

c_x = \frac{1}{M} \sum_{i=1}^{N} m_i c_x^i

c_y = \frac{1}{M} \sum_{i=1}^{N} m_i c_y^i

c_z = \frac{1}{M} \sum_{i=1}^{N} m_i c_z^i

M = \sum_{i=1}^{N} m_i

where c is the centroid position of the humanoid robot, c_x, c_y and c_z are respectively the centroid positions of the humanoid robot in the x, y and z directions, c_x^i, c_y^i and c_z^i are respectively the centroid positions of the i-th connecting rod of the humanoid robot in the x, y and z directions, N is the total number of connecting rods of the humanoid robot, m_i is the mass of the i-th connecting rod, and M is the total mass of the humanoid robot.
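
For illustration, the mass-weighted average above can be computed as in the following sketch; the link masses and per-link centroid positions are assumed to be already available from the robot model, and the function name is a placeholder rather than part of the invention.

```python
# Minimal sketch of the whole-body centroid computation described above.
# Inputs assumed available: masses m_i and per-link centroid positions c^i in
# the robot coordinate system; all names are illustrative only.
import numpy as np

def whole_body_centroid(link_masses, link_centroids):
    """link_masses: (N,) array of m_i; link_centroids: (N, 3) array of (c_x^i, c_y^i, c_z^i)."""
    m = np.asarray(link_masses, dtype=float)
    c = np.asarray(link_centroids, dtype=float)
    M = m.sum()                                   # total mass M = sum_i m_i
    return (m[:, None] * c).sum(axis=0) / M       # c = (1/M) * sum_i m_i * c^i

# Example with three hypothetical links.
print(whole_body_centroid([1.0, 2.0, 0.5], [[0, 0, 0.3], [0.01, 0, 0.5], [0, 0.02, 0.8]]))
```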
Further, the ZMP of the humanoid robot in the x direction is as follows:
p_x = c_x - \frac{c_z \ddot{c}_x}{g}

where \ddot{c}_x is the acceleration of the humanoid robot in the x direction and g is the gravitational acceleration;

the ZMP of the humanoid robot in the y direction is:

p_y = c_y - \frac{c_z \ddot{c}_y}{g}

where \ddot{c}_y is the acceleration of the humanoid robot in the y direction.
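
As a concrete reading of these formulas, the following sketch computes the ZMP from the centroid position and the measured horizontal accelerations; the use of a fixed gravitational constant and the variable names are assumptions for illustration, consistent with the formulas above.

```python
# Sketch of the ZMP computation p_x = c_x - c_z*a_x/g, p_y = c_y - c_z*a_y/g;
# names are illustrative only.
G = 9.81  # gravitational acceleration in m/s^2

def zmp(c_x, c_y, c_z, a_x, a_y, g=G):
    """Centroid position (c_x, c_y, c_z) and horizontal accelerations (a_x, a_y)."""
    p_x = c_x - c_z * a_x / g
    p_y = c_y - c_z * a_y / g
    return p_x, p_y

print(zmp(0.01, 0.0, 0.45, 0.8, -0.2))  # the ZMP shifts opposite to the acceleration
```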
Further, the specific process of data preprocessing in step S3 is as follows:
s31, screening the same type of data corresponding to all sampling times within a single prediction interval time to obtain the maximum value and the minimum value of the data;
s32, calculating the preprocessed data according to the following formula:
S_j = \frac{S_j^{raw} - S_{min}}{S_{max} - S_{min}}

where S_j is the preprocessed data corresponding to the j-th sample in a single prediction interval, S_j^{raw} is the j-th sample of the raw data in a single prediction interval, n is the number of samples in a single prediction interval, S_max is the maximum of the n sampled data within a single prediction interval, and S_min is the minimum of the n sampled data within a single prediction interval.
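
A minimal sketch of this per-interval normalization is given below, assuming the n samples of one data type are held in an array; the guard against a zero range (all samples equal) is an implementation choice not specified here.

```python
# Per-window min-max normalization S_j = (S_j - S_min) / (S_max - S_min).
import numpy as np

def preprocess_window(samples):
    """samples: (n,) raw values of one data type within a single prediction interval."""
    s = np.asarray(samples, dtype=float)
    s_min, s_max = s.min(), s.max()
    if s_max == s_min:                 # guard: a constant signal maps to zeros
        return np.zeros_like(s)
    return (s - s_min) / (s_max - s_min)

print(preprocess_window([0.2, 0.5, 0.35, 0.9]))  # values scaled into [0, 1]
```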
Further, the motion characteristic matrix of the robot is specifically as follows:
A = \begin{bmatrix} \ddot{c}_x & \ddot{c}_y & w_x & w_y & \theta_x & \theta_y & p_x & p_y \end{bmatrix}, \quad A \in \mathbb{R}^{n \times 8}

where n is the number of samples in a single prediction interval, \ddot{c}_x and \ddot{c}_y are respectively the preprocessed acceleration data sets of the humanoid robot in the x and y directions, w_x and w_y are respectively the preprocessed angular-velocity data sets of the humanoid robot about the x and y directions, \theta_x and \theta_y are respectively the preprocessed trunk attitude-angle data sets of the humanoid robot about the x and y directions, and p_x and p_y are respectively the preprocessed ZMP data sets of the humanoid robot in the x and y directions; each data set is an n × 1 column vector.
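
For illustration, the n × 8 matrix can be assembled column by column from the eight preprocessed data sets as sketched below; the column order follows the description above, and the helper names are placeholders.

```python
# Sketch of assembling the n x 8 robot motion characteristic matrix A from the
# eight per-window signals (acc_x, acc_y, w_x, w_y, theta_x, theta_y, zmp_x, zmp_y);
# helper names are placeholders.
import numpy as np

def minmax(s):
    s = np.asarray(s, dtype=float)
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

def motion_feature_matrix(acc_x, acc_y, w_x, w_y, theta_x, theta_y, zmp_x, zmp_y):
    """Each argument: n raw samples of one feature within a single prediction interval."""
    columns = [acc_x, acc_y, w_x, w_y, theta_x, theta_y, zmp_x, zmp_y]
    return np.column_stack([minmax(c) for c in columns])   # shape (n, 8)

n = 20  # samples per prediction interval, as in the embodiment
A = motion_feature_matrix(*(np.random.rand(n) for _ in range(8)))
print(A.shape)  # (20, 8)
```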
Further, the output of the humanoid robot fall prediction model is specifically a vector containing 9-dimensional data:
k = [k_1 \ k_2 \ \cdots \ k_9]^T, \quad \sum_{r=1}^{9} k_r = 1, \quad 0 \le k_r \le 1

where k_r is the occurrence probability of the fall result belonging to the r-th category, and the fall results of categories 1 to 9 respectively correspond to: non-falling, falling forward, falling backward, falling left, falling right, falling forward and left, falling forward and right, falling backward and left, and falling backward and right.
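
As a small illustration, the 9 categories can be encoded as one-hot label vectors in the order listed above; the short label strings below are shorthand introduced here, not taken from the patent.

```python
# One-hot encoding of the 9 fall categories described above (order as listed).
import numpy as np

CATEGORIES = [
    "non-falling", "forward", "backward", "left", "right",
    "forward-left", "forward-right", "backward-left", "backward-right",
]

def one_hot(category):
    k = np.zeros(9)
    k[CATEGORIES.index(category)] = 1.0   # vector k with k_r = 1 for the true class
    return k

print(one_hot("non-falling"))  # [1. 0. 0. 0. 0. 0. 0. 0. 0.]
```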
Compared with the prior art, the invention has the following advantages:
the invention is based on data and centroid position data acquired by an inertial sensor on a humanoid robot, and combines a neural network model to train to obtain a humanoid robot falling prediction model, and the 8-dimensional data corresponding to different sampling times in prediction interval time are preprocessed, so that the data can be unified and standardized, thereby being well suitable for the training of the neural network model and ensuring the accuracy of subsequent prediction results.
In the process of training to obtain the falling prediction model, the method does not need to be adjusted according to the body structure or the gait generation method of the humanoid robot, and the falling prediction model can be constructed only by sampling data and then sequentially carrying out data preprocessing and training.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a schematic diagram of a coordinate system of a humanoid robot in an embodiment;
FIG. 3 is a diagram illustrating a neural network model according to an embodiment;
FIG. 4a is a schematic diagram of the humanoid robot in the embodiment when being impacted by external force;
fig. 4b is a schematic diagram of the falling result of the humanoid robot in the embodiment.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments.
Examples
As shown in fig. 1, a humanoid robot fall prediction method includes the following steps:
s1, setting the prediction interval time and the sampling times in the single prediction interval time;
s2, sequentially and respectively acquiring a real falling result, inertial sensor acquisition data and centroid position data corresponding to the humanoid robot according to the sampling times in the prediction interval time, and calculating to obtain a zero moment point ZMP of the humanoid robot in the x and y directions;
s3, preprocessing the data acquired in the step S2 and the data obtained through calculation to obtain a robot motion characteristic matrix containing 8-dimensional data;
s4, repeating the steps S2 and S3 to obtain real falling results corresponding to the humanoid robot in a plurality of prediction interval time and a robot motion characteristic matrix;
s5, sequentially taking the motion characteristic matrixes of the robot in the step S4 as the input of a neural network model, and training the neural network model by combining the corresponding real falling result to obtain a humanoid robot falling prediction model;
s6, acquiring data collected by an inertial sensor and centroid position data corresponding to the humanoid robot within the current prediction interval time, calculating to obtain a zero moment point ZMP of the humanoid robot in the x and y directions, and combining the data preprocessing mode in the step S3 to obtain a robot motion characteristic matrix corresponding to the humanoid robot within the current prediction interval time;
and S7, inputting the robot motion characteristic matrix corresponding to the humanoid robot in the current prediction interval time into the humanoid robot falling prediction model, and outputting to obtain a current falling prediction result.
The specific process of the embodiment applying the method is as follows:
Step one: determine the prediction interval T and the number of samples within a single prediction interval: the prediction interval is the time interval between two adjacent fall predictions; the prediction interval T should not be too large, preferably 0.1 to 0.3 seconds; in this embodiment the prediction interval T is set to 0.2 seconds and the number of samples in a single prediction interval is 20;
Step two: obtain the inertial sensor values: the accelerations of the robot in the x and y directions (\ddot{c}_x, \ddot{c}_y), the angular velocities about the x and y directions (w_x, w_y) and the trunk attitude angles about the x and y directions (\theta_x, \theta_y) are obtained from the inertial sensor of the humanoid robot; the humanoid robot coordinate system is shown in FIG. 2; in addition, the real falling result needs to be obtained;
Step three: calculate the ZMP: the ZMP (p_x, p_y) of the humanoid robot in the x and y directions can be obtained from the robot centroid position (c_x, c_y, c_z), the accelerations (\ddot{c}_x, \ddot{c}_y) and the gravitational acceleration g:

p_x = c_x - \frac{c_z \ddot{c}_x}{g}

p_y = c_y - \frac{c_z \ddot{c}_y}{g}
where the robot centroid position c = (c_x, c_y, c_z) can be obtained from the centroid positions of the connecting rods of the robot:

c = \frac{1}{M} \sum_{i=1}^{N} m_i c^i, \quad M = \sum_{i=1}^{N} m_i

where c^i is the centroid position of the i-th connecting rod in the humanoid robot coordinate system, N is the total number of connecting rods of the humanoid robot, m_i is the mass of the i-th connecting rod, and M is the total mass of the humanoid robot;
Step four: data preprocessing, input matrix formation and labeling: within the prediction interval T, the 8-dimensional data representing the motion state of the robot are sampled n times. For example, let \ddot{c}_{x,j} denote the x-direction acceleration of the j-th sample and \ddot{c}_x denote the n × 1 vector formed by the x-direction accelerations of the n samples; the data preprocessing is:

\ddot{c}_{x,j} \leftarrow \frac{\ddot{c}_{x,j} - \ddot{c}_{x,min}}{\ddot{c}_{x,max} - \ddot{c}_{x,min}}

where \ddot{c}_{x,max} and \ddot{c}_{x,min} are the maximum and minimum values of the x-direction acceleration over all n samples. Similar notation and preprocessing apply to the other 7 data dimensions, so that the 8-dimensional data obtained above are represented by a matrix that serves as the input of the neural network; the input matrix A can be expressed as:

A = \begin{bmatrix} \ddot{c}_x & \ddot{c}_y & w_x & w_y & \theta_x & \theta_y & p_x & p_y \end{bmatrix}
Labeling is performed according to whether the robot falls in the next prediction interval and the falling direction, with 9 label types: non-falling, falling forward, falling backward, falling left, falling right, falling forward and left, falling forward and right, falling backward and left, and falling backward and right;
In this embodiment, 20 samples are taken within the 0.2-second prediction interval T, and the obtained input matrix A is a 20 × 8 two-dimensional matrix. Data samples and the corresponding labels for the input matrix A are given below; a sample of non-falling model input data is a 20 × 8 matrix of preprocessed values (reproduced as a figure in the original application).
The labeling result is represented by one-hot coding, i.e., the mathematical form of the final output of the neural network model is a 9 × 1 vector k:

k = [k_1 \ k_2 \ \cdots \ k_9]^T, \quad \sum_{r=1}^{9} k_r = 1

where k_r indicates the probability that the result belongs to the r-th category, and categories 1 to 9 correspond respectively to: non-falling, falling forward, falling backward, falling left, falling right, falling forward and left, falling forward and right, falling backward and left, and falling backward and right.

Corresponding to the neural network model input data example given above, an example of non-falling model output data (result label) is:

k = [1 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0]^T
Step five: select a neural network model and train it: a suitable neural network model is selected, with the robot motion characteristic matrix as the model input and the labeling result as the model output; training and comparison are performed multiple times to determine the final weight parameters of the neural network model; the structure of the selected neural network model is shown in FIG. 3.
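
The patent specifies the network structure only in FIG. 3, so the sketch below is an assumed stand-in: a small fully connected network on the flattened 20 × 8 feature matrix, trained with cross-entropy in PyTorch. The architecture, hyperparameters and framework choice are all assumptions made here for illustration.

```python
# Minimal training sketch for step five; architecture and hyperparameters are
# assumptions, not the network of FIG. 3.
import torch
import torch.nn as nn
import torch.optim as optim

N_SAMPLES, N_FEATURES, N_CLASSES = 20, 8, 9   # 20 samples x 8 features, 9 fall categories

class FallPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                          # 20x8 matrix -> 160-dim vector
            nn.Linear(N_SAMPLES * N_FEATURES, 64),
            nn.ReLU(),
            nn.Linear(64, N_CLASSES),              # logits for the 9 categories
        )

    def forward(self, x):
        return self.net(x)

def train(model, feature_matrices, labels, epochs=100, lr=1e-3):
    """feature_matrices: (B, 20, 8) preprocessed windows; labels: (B,) class indices 0..8."""
    opt = optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()                # expects logits and integer class labels
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(feature_matrices), labels)
        loss.backward()
        opt.step()
    return model

if __name__ == "__main__":
    # Placeholder random data standing in for the collected windows and labels.
    X = torch.rand(32, N_SAMPLES, N_FEATURES)
    y = torch.randint(0, N_CLASSES, (32,))
    model = train(FallPredictor(), X, y)
    print(torch.softmax(model(X[:1]), dim=1))      # 9 class probabilities, cf. vector k
```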
Sixthly, the method comprises the following steps: using the trained model, fall prediction is performed: after the model training is completed and the final weight parameters are determined, in the robot walking process, similarly, the method in the second step and the third step is used for obtaining 8-dimensional data and forming the input matrix A20*8Inputting the neural network model, wherein the result obtained by the model output is a prediction result, namely whether the robot falls down and the falling direction are predicted:
FIG. 4a shows the robot walkingThe process is impacted by external force and is difficult to keep balance, and at the moment, 8-dimensional data are obtained by using the method in the second step and the third step, and the input matrix A is formed20*8Comprises the following steps:
Figure BDA0002732498940000081
inputting the trained deep neural network model, and outputting the following results:
k=[0 0 0.86 0 0.05 0 0 0 0.09]T
according to the output result, the probability that the robot falls backwards is the largest, therefore, the robot will fall backwards given the prediction result, and fig. 4b shows that the robot falls backwards after the robot falls, which is consistent with the prediction result.
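
Reading off the predicted direction from the output vector amounts to taking the category with the largest probability, as in the short sketch below (category order as listed in the description; the helper name is illustrative).

```python
# Sketch of interpreting the output vector k from step six; index 0 = non-falling,
# 1 = forward, ..., following the category order given in the description.
LABELS = [
    "non-falling", "forward", "backward", "left", "right",
    "forward-left", "forward-right", "backward-left", "backward-right",
]

def interpret(k):
    """Return the predicted fall category and its probability for a 9-element vector k."""
    r = max(range(len(k)), key=lambda i: k[i])   # index of the largest probability
    return LABELS[r], k[r]

# Example from the description: the third entry (falling backward) dominates.
print(interpret([0, 0, 0.86, 0, 0.05, 0, 0, 0, 0.09]))  # ('backward', 0.86)
```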
In conclusion, the method can predict not only whether the robot will fall but also the falling direction;
the method fuses multiple kinds of collected data for comprehensive calculation and prediction, and preprocesses the collected data to reduce sensitivity to sensor data distortion and errors; at the same time, the preprocessed data integrate well into the training of the neural network model, ensuring high accuracy of the subsequent fall prediction;
the method is universal and feasible, and can be used for rapidly obtaining a falling prediction model and performing falling prediction on various humanoid robots.

Claims (10)

1. A humanoid robot fall prediction method is characterized by comprising the following steps:
s1, setting the prediction interval time and the sampling times in the single prediction interval time;
s2, sequentially and respectively acquiring a real falling result, inertial sensor acquisition data and centroid position data corresponding to the humanoid robot according to the sampling times in the prediction interval time, and calculating to obtain a zero moment point ZMP of the humanoid robot in the x and y directions;
s3, preprocessing the data acquired in the step S2 and the data obtained through calculation to obtain a robot motion characteristic matrix containing 8-dimensional data;
s4, repeating the steps S2 and S3 to obtain real falling results corresponding to the humanoid robot in a plurality of prediction interval time and a robot motion characteristic matrix;
s5, sequentially taking the motion characteristic matrixes of the robot in the step S4 as the input of a neural network model, and training the neural network model by combining the corresponding real falling result to obtain a humanoid robot falling prediction model;
s6, acquiring data collected by an inertial sensor and centroid position data corresponding to the humanoid robot within the current prediction interval time, calculating to obtain a zero moment point ZMP of the humanoid robot in the x and y directions, and combining the data preprocessing mode in the step S3 to obtain a robot motion characteristic matrix corresponding to the humanoid robot within the current prediction interval time;
and S7, inputting the robot motion characteristic matrix corresponding to the humanoid robot in the current prediction interval time into the humanoid robot falling prediction model, and outputting to obtain a current falling prediction result.
2. The humanoid robot fall prediction method of claim 1, wherein the prediction interval time is set between 0.1s and 0.3 s.
3. The humanoid robot fall prediction method of claim 1, wherein the number of samples within the single prediction interval is not less than 15.
4. The method for predicting a humanoid robot fall as recited in claim 1, wherein the step S2 specifically comprises the steps of:
s21, respectively acquiring a real falling result, inertial sensor acquisition data and centroid position data corresponding to the humanoid robot according to the sampling times within the prediction interval time;
and S22, calculating to obtain a zero moment point ZMP of the humanoid robot in the x and y directions according to the data acquired by the inertial sensor and the mass center position data.
5. The method as claimed in claim 4, wherein the data collected by the inertial sensor includes accelerations of the humanoid robot in x and y directions, angular velocities around the x and y directions, and a trunk attitude angle around the x and y directions.
6. The method for predicting the fall of the humanoid robot as claimed in claim 5, wherein the data of the centroid position of the humanoid robot is specifically obtained by calculating the centroid position of each connecting rod of the humanoid robot:
c = (c_x, c_y, c_z)

c_x = \frac{1}{M} \sum_{i=1}^{N} m_i c_x^i

c_y = \frac{1}{M} \sum_{i=1}^{N} m_i c_y^i

c_z = \frac{1}{M} \sum_{i=1}^{N} m_i c_z^i

M = \sum_{i=1}^{N} m_i

where c is the centroid position of the humanoid robot, c_x, c_y and c_z are respectively the centroid positions of the humanoid robot in the x, y and z directions, c_x^i, c_y^i and c_z^i are respectively the centroid positions of the i-th connecting rod of the humanoid robot in the x, y and z directions, N is the total number of connecting rods of the humanoid robot, m_i is the mass of the i-th connecting rod, and M is the total mass of the humanoid robot.
7. The humanoid robot fall prediction method of claim 6, wherein a ZMP of the humanoid robot in an x direction is:
p_x = c_x - \frac{c_z \ddot{c}_x}{g}

where \ddot{c}_x is the acceleration of the humanoid robot in the x direction and g is the gravitational acceleration;

the ZMP of the humanoid robot in the y direction is:

p_y = c_y - \frac{c_z \ddot{c}_y}{g}

where \ddot{c}_y is the acceleration of the humanoid robot in the y direction.
8. The method for predicting the fall of the humanoid robot as claimed in claim 5, wherein the specific process of the data preprocessing in the step S3 is as follows:
s31, screening the same type of data corresponding to all sampling times within a single prediction interval time to obtain the maximum value and the minimum value of the data;
s32, calculating the preprocessed data according to the following formula:
S_j = \frac{S_j^{raw} - S_{min}}{S_{max} - S_{min}}

where S_j is the preprocessed data corresponding to the j-th sample in a single prediction interval, S_j^{raw} is the j-th sample of the raw data in a single prediction interval, n is the number of samples in a single prediction interval, S_max is the maximum of the n sampled data within a single prediction interval, and S_min is the minimum of the n sampled data within a single prediction interval.
9. The humanoid robot fall prediction method according to claim 8, wherein the robot motion characteristic matrix is specifically:

A = \begin{bmatrix} \ddot{c}_x & \ddot{c}_y & w_x & w_y & \theta_x & \theta_y & p_x & p_y \end{bmatrix}, \quad A \in \mathbb{R}^{n \times 8}

where n is the number of samples in a single prediction interval, \ddot{c}_x and \ddot{c}_y are respectively the preprocessed acceleration data sets of the humanoid robot in the x and y directions, w_x and w_y are respectively the preprocessed angular-velocity data sets of the humanoid robot about the x and y directions, \theta_x and \theta_y are respectively the preprocessed trunk attitude-angle data sets of the humanoid robot about the x and y directions, and p_x and p_y are respectively the preprocessed ZMP data sets of the humanoid robot in the x and y directions.
10. The method as claimed in claim 1, wherein the output of the humanoid robot fall prediction model is a vector containing 9-dimensional data:
k = [k_1 \ k_2 \ \cdots \ k_9]^T, \quad \sum_{r=1}^{9} k_r = 1, \quad 0 \le k_r \le 1

where k_r is the occurrence probability of the fall result belonging to the r-th category, and the fall results of categories 1 to 9 respectively correspond to: non-falling, falling forward, falling backward, falling left, falling right, falling forward and left, falling forward and right, falling backward and left, and falling backward and right.
CN202011122471.1A 2020-10-20 2020-10-20 Humanoid robot falling prediction method Pending CN112405568A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011122471.1A CN112405568A (en) 2020-10-20 2020-10-20 Humanoid robot falling prediction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011122471.1A CN112405568A (en) 2020-10-20 2020-10-20 Humanoid robot falling prediction method

Publications (1)

Publication Number Publication Date
CN112405568A true CN112405568A (en) 2021-02-26

Family

ID=74841425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011122471.1A Pending CN112405568A (en) 2020-10-20 2020-10-20 Humanoid robot falling prediction method

Country Status (1)

Country Link
CN (1) CN112405568A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104407611A (en) * 2014-09-30 2015-03-11 同济大学 Humanoid robot stable waling control method
KR20170010576A (en) * 2015-07-20 2017-02-01 주식회사 로보테크 Robot and apparatus for contolling motion of robot
CN104217107B (en) * 2014-08-27 2017-04-19 华南理工大学 Method for detecting tumbling state of humanoid robot based on multi-sensor information
CN109605364A (en) * 2018-10-31 2019-04-12 北京理工大学 A kind of anthropomorphic robot falls down detection and stable control method
CN110279420A (en) * 2019-07-18 2019-09-27 郑州轻工业学院 Portable falling detection device and detection method based on extreme learning machine
CN110659595A (en) * 2019-09-10 2020-01-07 电子科技大学 Tumble type and injury part detection method based on feature classification
CN110861084A (en) * 2019-11-18 2020-03-06 东南大学 Four-legged robot falling self-resetting control method based on deep reinforcement learning
CN111655432A (en) * 2017-11-09 2020-09-11 斯图加特大学 Exoskeleton system, control device and method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210226