CN113641103B - Treadmill control method and system for an adaptive robot - Google Patents

Treadmill control method and system for an adaptive robot

Info

Publication number
CN113641103B
CN113641103B (application CN202110930741.XA)
Authority
CN
China
Prior art keywords
robot
data
speed
running
running belt
Prior art date
Legal status
Active
Application number
CN202110930741.XA
Other languages
Chinese (zh)
Other versions
CN113641103A (en)
Inventor
黄政杰
李俊
吴元清
鲁仁全
席星
彭衍华
Current Assignee
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN202110930741.XA
Publication of CN113641103A
Application granted
Publication of CN113641103B
Status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 13/00: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B 13/02: Adaptive control systems as above, electric
    • G05B 13/04: Adaptive control systems as above, electric, involving the use of models or simulators
    • G05B 13/042: Adaptive control systems as above, in which a parameter or coefficient is automatically adjusted to optimise the performance

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a treadmill control method and system for an adaptive robot, belonging to the technical field of robot debugging. The method comprises the steps of: starting the legged robot and the treadmill; collecting multiple kinds of motion data from the legged robot; fusing the motion data to obtain an optimal estimate; calculating the motion speed from the optimal estimate and constructing a three-dimensional posture model; and controlling the belt speed and inclination angle of the treadmill so that the treadmill adapts to the robot's speed and posture. A gait acquisition module collects several kinds of motion data from the legged robot and fuses them, which avoids data drift and error accumulation, so the robot's running state is captured more accurately. The treadmill's inclination angle and belt speed are adjusted to the robot's state according to its speed, acceleration and three-dimensional posture model data, so that more accurate real running indexes of the robot are measured and gait debugging is guided.

Description

Treadmill control method and system for an adaptive robot
Technical Field
The invention relates to the technical field of robot debugging, and in particular to a treadmill control method and system for an adaptive robot.
Background
A legged robot moves by means of a bionic leg-foot structure; it suits a wide range of environments, moves flexibly and stably, and has good application prospects. Gait data of the legged robot must be collected during development for gait control and debugging. The traditional approach is to drive the robot across the ground, which requires a large test field; enriching the variety of slope angles and terrain sizes further requires building multiple test terrains at high cost, and personnel must follow the robot to collect the gait data, which is labor-intensive.
Publication CN 108114405B (published 2020-03-17) captures image information in real time with a 3D depth camera, analyzes the runner's motion posture in the video, extracts motion features, builds a relation model between acceleration and the position information of the human body's center point, torso, arms and lower-limb joints, computes the runner's acceleration in real time, and adjusts the belt speed through a motor drive module, achieving speed-adaptive treadmill control. However, extracting motion features with only a 3D depth camera has low accuracy: the data drift easily, and the accumulated error grows as the machine runs for a long time. Moreover, that scheme can only adjust the treadmill's speed and cannot provide a rich test environment for a legged robot.
Disclosure of Invention
The invention aims to overcome the above technical problems and provides a robot treadmill control method that offers high precision and can change the inclination of the running belt.
The technical scheme of the invention is as follows:
a running machine control method of an adaptive robot comprises the following steps:
s1: the foot type robot moves on a running belt of the running machine;
s2: acquiring multiple motion data of the foot robot through a gait acquisition module;
s3: fusing the motion data to obtain an optimal estimated value;
s4: calculating the speed and the acceleration of the foot robot by using the optimal estimated value, and constructing a three-dimensional posture model of the real-time posture of the foot robot;
s5: and controlling the running belt speed and the running belt inclination angle of the running machine according to the speed, the acceleration and the three-dimensional gesture model data, so that the speed of the running belt is self-adaptive to the speed of the four-foot robot.
In this scheme, the gait acquisition module collects several kinds of motion data from the legged robot, which avoids data drift and error accumulation, so the robot's running state is captured more accurately. The treadmill's inclination angle and belt speed are adapted to the robot's state according to its speed, acceleration and three-dimensional posture model data, so more accurate real running indexes are measured and gait debugging is guided. Researchers can also directly control the belt inclination, providing richer test scenarios for robot tests without an oversized test field, saving cost and reducing the labor intensity of testing.
Further, in step S2, the gait acquisition module acquires radar depth data and camera depth data through a laser radar and a depth camera, respectively.
In this scheme, the laser radar collects depth information of the legged robot on the running belt; its precision can reach better than one part in a thousand and its scanning rate can reach MHz. Using two sensors, the depth-camera and laser-radar data are compared and fused, avoiding the data drift and error accumulation that arise when the observations of a single sensor are used alone.
Further, the motion data in step S3 are fused by Kalman fusion; the specific method of the Kalman fusion is as follows:

Let the $k$ target-point positions measured by the depth camera over the time range $0$–$N$ be $Z_{0,k}=\{P_1,\ldots,P_k\}_N$, where $N$ denotes the $N$-th period, $P_k$ is the position of the $k$-th target point, $P=(x^2+y^2+z^2)^{1/2}$, and $x$, $y$, $z$ are the coordinates of a target point taking the depth camera as origin; the depth-camera measurements follow a normal Gaussian distribution, i.e. $Z_0 \sim N(\mu_0, \sigma_0^2)$, where $Z_0$ is the position of the $k$ target points at a given moment, $\mu_0$ is the mean of the normal distribution and $\sigma_0^2$ is its variance.

First, the position and error covariance of the mobile robot are initialized. In the state-prediction stage, the motion data of the robot collected by the depth camera are taken as the system state at the current moment, with $\hat{x}_{N|N-1}$ as the predicted value of the pose at the next moment. The covariance matrix $\Sigma_{0,k}$ of $Z_{0,k}$ is computed as

$$\Sigma_{0,k} = \mathrm{diag}\big(\sigma_{0,1}^2,\ldots,\sigma_{0,k}^2\big) \quad (1)$$

where $\sigma_{0,1}^2$ denotes the variance of cycle 1 under the normal distribution followed by the depth-camera measurements.

Similarly, let the depth values measured by the laser radar be $Z_{1,k}=\{P'_1,\ldots,P'_k\}_N$, where $N$ denotes the $N$-th period, $P'_k$ is the position of the $k$-th target point, $P'=(x'^2+y'^2+z'^2)^{1/2}$, and $x'$, $y'$, $z'$ are the coordinates of a target point taking the laser radar as origin; the laser-radar measurements likewise follow a normal Gaussian distribution, i.e. $Z_1 \sim N(\mu_1, \sigma_1^2)$, and the covariance matrix, analogous to formula (1), is

$$\Sigma_{1,k} = \mathrm{diag}\big(\sigma_{1,1}^2,\ldots,\sigma_{1,k}^2\big) \quad (2)$$

The measurements of the two sensor groups are Kalman-fused: from $Z_{0,k}=\{P_1,\ldots,P_k\}_N$ and $Z_{1,k}=\{P'_1,\ldots,P'_k\}_N$ the observation vector $\hat{Z}_N$ of the Kalman filter is

$$\hat{Z}_N = \begin{bmatrix} Z_{0,k} \\ Z_{1,k} \end{bmatrix} \quad (3)$$

with covariance

$$\Sigma = \begin{bmatrix} \Sigma_{0,k} & 0_N \\ 0_N & \Sigma_{1,k} \end{bmatrix} \quad (4)$$

where $0_N$ denotes the $N$-order zero matrix.

With $A_k = 0_N$ and $B_k = I_N$, $I_N$ being the $N$-order identity matrix, the Kalman prediction equations are

$$\hat{x}_{N|N-1} = A_k\,\hat{x}_{N-1|N-1} + B_k\,u_k \quad (5)$$

$$\Sigma_{N|N-1} = A_k\,\Sigma_{N-1|N-1}\,A_k^{\top} + Q_k \quad (6)$$

and the Kalman update equations are

$$K_N = \Sigma_{N|N-1} H_k^{\top}\big(H_k \Sigma_{N|N-1} H_k^{\top} + \Sigma\big)^{-1} \quad (7)$$

$$\hat{x}_{N|N} = \hat{x}_{N|N-1} + K_N\big(\hat{Z}_N - H_k\,\hat{x}_{N|N-1}\big) \quad (8)$$

$$\Sigma_{N|N} = (I - K_N H_k)\,\Sigma_{N|N-1} \quad (9)$$

wherein:
$\Sigma$ is the combined matrix of the two covariances of formulas (1) and (2);
$K_N$ is the Kalman gain coefficient;
$\Sigma_{N|N}$ is the updated variance matrix, equation (9) being the variance-matrix update equation;
$\hat{x}_{N|N-1}$ is the predicted value, equation (5) being the prediction equation;
$\Sigma_{N|N-1}$ is the prediction covariance;
$H_k$ is the observation matrix;
$Q_k$ is the process-noise covariance;
$A_k$ and $B_k$ are system parameters; specifically, $A_k$ is the state-transition matrix and $B_k$ is the control matrix;
$u_k$ is the control quantity of the running-belt speed and running-belt inclination.

The optimal estimate $\hat{x}_{N|N}$ obtained by fusing the sensor data with the Kalman gain is thus finally computed by formula (8).
In this scheme, the measurements of the laser radar and the depth camera serve as the observations, and the initial depth-camera measurement serves as the state value, which facilitates the subsequent data fusion. The more accurate data of the two sensors are selected for fusion rather than relying entirely on a single sensor; Kalman fusion cross-checks the two data sets and yields accurate motion-posture data of the legged robot.
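To make the fusion step concrete, the following is a minimal numpy sketch of one predict/update cycle of equations (3)–(9), assuming the state is the robot's position vector and that a process-noise covariance Q is supplied (the patent does not specify Q); the function and variable names are illustrative, not the patent's.

```python
import numpy as np

def kalman_fuse(x_prev, P_prev, u, z_cam, z_lidar, R_cam, R_lidar, Q):
    """One Kalman cycle fusing depth-camera and lidar position measurements.

    Mirrors the structure above: A_k = 0 and B_k = I (equation (5)), with the
    two sensors stacked into one observation (equations (3)-(4)).
    """
    n = x_prev.shape[0]
    A = np.zeros((n, n))                    # A_k = 0_N
    B = np.eye(n)                           # B_k = I_N
    H = np.vstack([np.eye(n), np.eye(n)])   # each sensor observes the state directly

    # Prediction, equations (5)-(6)
    x_pred = A @ x_prev + B @ u
    P_pred = A @ P_prev @ A.T + Q

    # Stacked observation and block-diagonal covariance, equations (3)-(4)
    z = np.concatenate([z_cam, z_lidar])
    R = np.block([[R_cam, np.zeros((n, n))],
                  [np.zeros((n, n)), R_lidar]])

    # Update, equations (7)-(9)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(n) - K @ H) @ P_pred
    return x_new, P_new
```

Because the more precise sensor carries the smaller covariance block in R, the gain K automatically weights the lidar more heavily where it is more trustworthy, which is the cross-checking behavior described above.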
Further, the step S4 of calculating the speed of the legged robot from the optimal estimate comprises:
S41: presetting a fixed time period $T$ and acquiring the position $a_0$ of the legged robot on the running belt at the start of the period, the position $a_t$ at the end of the period, and the belt speed $V_0$ during period $T$; the positions $a_0$ and $a_t$ are the optimal estimates at the respective moments;
S42: calculating the relative speed of the robot and the belt, $V_1=(a_t-a_0)/T$, and then the robot's actual movement speed $V$: if the robot moves backward along the running direction of the belt, $V=V_0-V_1$; if the robot advances against the running direction of the belt, $V=V_0+V_1$.
Further, the three-dimensional posture model in step S4 is constructed on the Webots simulation platform.
Further, the three-dimensional posture model is constructed as follows: the actual size data of the legged robot, its gyroscope data, and its acceleration and speed data are acquired and input into a robot model on the Webots simulation platform to obtain the three-dimensional posture model.
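As an illustration of feeding the fused pose into a Webots robot model, here is a hedged sketch of a Supervisor controller; the DEF name LEGGED_ROBOT and the read_fused_pose() data source are assumptions, not part of the patent.

```python
from controller import Supervisor  # Webots Python controller API

supervisor = Supervisor()
timestep = int(supervisor.getBasicTimeStep())

# The DEF name of the robot model in the world file is assumed to be LEGGED_ROBOT.
robot_node = supervisor.getFromDef("LEGGED_ROBOT")
translation_field = robot_node.getField("translation")
rotation_field = robot_node.getField("rotation")

while supervisor.step(timestep) != -1:
    # read_fused_pose() is a hypothetical source returning the fused position
    # and a gyroscope-derived axis-angle orientation.
    (x, y, z), (ax, ay, az, angle) = read_fused_pose()
    translation_field.setSFVec3f([x, y, z])
    rotation_field.setSFRotation([ax, ay, az, angle])
```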
The invention further provides a treadmill control system of the adaptive robot, comprising: a gait acquisition module, a data fusion module, a model construction module, a treadmill control module, a running belt and an angle control module.
The gait acquisition module acquires multiple kinds of motion data of the legged robot; the data fusion module fuses the motion data to obtain an optimal estimate; the model construction module calculates the robot's speed and acceleration from the optimal estimate and constructs a three-dimensional posture model of the robot's real-time posture; the treadmill control module is connected with the running belt and the angle control module, the angle control module controlling the inclination angle of the running belt; and the treadmill control module outputs control instructions to the running belt and the angle control module according to the speed, acceleration and three-dimensional posture model data, controlling the belt's speed and inclination angle so that the treadmill adapts to the robot's state and accurate motion indexes of the robot are measured.
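The module chain can be summarized in code; the sketch below only illustrates the dataflow among the modules named above, with all class and method names assumed.

```python
from dataclasses import dataclass

@dataclass
class TreadmillCommand:
    belt_speed: float   # commanded running-belt speed, m/s
    incline_deg: float  # commanded running-belt inclination angle, degrees

def control_cycle(gait, fusion, model, controller):
    """One pass through the system: acquisition -> fusion -> model -> control."""
    cam_depth, radar_depth = gait.acquire()         # gait acquisition module
    estimate = fusion.fuse(cam_depth, radar_depth)  # data fusion module (Kalman)
    speed, accel, pose = model.update(estimate)     # model construction module
    cmd: TreadmillCommand = controller.decide(speed, accel, pose)
    controller.apply(cmd)                           # drives belt + angle control
```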
Further, the angle control module controls the inclination angle of the running belt through one or more push rods arranged below the running belt.
In this scheme, each push rod's telescopic length is electrically controllable; the lower end of the push rod is fixed to the frame and the upper end supports the running belt. By extending or retracting the push rod, the part of the belt it supports is raised or lowered, thereby controlling the belt's inclination angle.
Further, there are four push rods, arranged respectively below the four corners of the running belt.
In this scheme, the bottoms of the four push rods are fixed to the frame and their tops connect to the supported parts of the running belt; the four push rods prop up the belt, and the angle control module extends or retracts them to control the belt's inclination angle.
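The relation between push-rod extension and belt inclination is simple trigonometry; the sketch below is a geometric illustration only (the patent gives no formula), assuming the front and rear rod pairs are separated by the belt's support length.

```python
import math

def belt_incline_deg(front_ext_m, rear_ext_m, rod_spacing_m):
    """Inclination angle produced by unequal front/rear push-rod extensions."""
    return math.degrees(math.atan2(front_ext_m - rear_ext_m, rod_spacing_m))

def extensions_for_incline(incline_deg, rod_spacing_m, base_ext_m=0.0):
    """Inverse: front/rear extensions that realize a requested inclination."""
    rise = rod_spacing_m * math.tan(math.radians(incline_deg))
    return base_ext_m + rise, base_ext_m  # (front pair, rear pair)
```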
Further, the gait acquisition module comprises a laser radar and a depth camera, both connected with the data fusion module; the radar depth data collected by the laser radar and the camera depth data collected by the depth camera are transmitted to the data fusion module for fusion processing.
In this scheme, the gait acquisition module collects several kinds of motion data from the legged robot and fuses them, which avoids data drift and error accumulation, so the robot's running state is captured more accurately. The treadmill's inclination angle and belt speed are adapted to the robot's state according to its speed, acceleration and three-dimensional posture model data, so more accurate real running indexes are measured and gait debugging is guided. Researchers can also directly control the belt inclination, providing richer test scenarios for robot tests without an oversized test field, saving cost and reducing the labor intensity of testing.
Drawings
FIG. 1 is a flow chart of the method steps of the present invention;
FIG. 2 is a schematic diagram of a system architecture according to the present invention.
Detailed Description
To clearly illustrate the treadmill control method and system of the adaptive robot, the invention is further described with reference to the examples and drawings, but the scope of the invention should not be limited thereby.
Example 1
A treadmill control method for an adaptive robot, as shown in FIG. 1, comprises the following steps:
S1: the legged robot moves on the running belt of the treadmill;
S2: multiple kinds of motion data of the legged robot are acquired by a gait acquisition module;
S3: the motion data are fused to obtain an optimal estimate;
S4: the speed and acceleration of the legged robot are calculated from the optimal estimate, and a three-dimensional posture model of the robot's real-time posture is constructed;
S5: the belt speed and belt inclination angle of the treadmill are controlled according to the speed, acceleration and three-dimensional posture model data, so that the belt speed adapts to the speed of the quadruped robot.
In this scheme, the gait acquisition module collects several kinds of motion data from the legged robot and fuses them, which avoids data drift and error accumulation, so the robot's running state is captured more accurately. The treadmill's inclination angle and belt speed are adapted to the robot's state according to its speed, acceleration and three-dimensional posture model data, so more accurate real running indexes are measured and gait debugging is guided. Researchers can also directly control the belt inclination, providing richer test scenarios for robot tests without an oversized test field, saving cost and reducing the labor intensity of testing.
Example 2
A treadmill control method for an adaptive robot, as shown in FIG. 1, comprises the following steps:
S1: the legged robot moves on the running belt of the treadmill;
S2: multiple kinds of motion data of the legged robot are acquired by a gait acquisition module;
the gait acquisition module includes a lidar and a depth camera, and the motion data includes radar depth data and camera depth data. Taking the initial measured value of the camera depth data as a state value, and taking the real-time measured values of the radar depth data and the camera depth data as observed values.
In this embodiment, the laser radar and the depth camera are both disposed on the treadmill body, and in other embodiments, the laser radar and the depth camera may be disposed beside the treadmill separately, and in other embodiments, the ultra wideband technology uwb positioning may be used to replace the radar depth data and the camera depth data in this embodiment.
In the embodiment, the measured values of the laser radar and the depth camera are used as the observed values, and the initial measured value of the depth camera is used as the state value, so that the follow-up data fusion is facilitated; the laser radar is used for collecting depth information of the foot robot on the running belt, the precision can reach more than one thousandth, and the scanning speed can reach MHz; and meanwhile, two sensors are adopted to compare and fuse the data of the two sensors of the depth camera and the laser radar, so that the phenomenon that the observed value of a single sensor is adopted independently and the generated data offset and error accumulation are avoided.
S3: fusing the motion data to obtain an optimal estimated value;
the mode of the motion data fusion is Kalman fusion, and the Kalman filtering-based data fusion is specifically as follows:
let the k target point positions measured by the depth camera in the time range of 0-N be Z 0,k ={P 1 ,...,P k } N The method comprises the steps of carrying out a first treatment on the surface of the Wherein N represents the N-th period, P k Representing the position of the kth target point, p= (x) 2 +y 2 +z 2 ) 1/2 X, y and z are coordinate values of a target point with the depth camera as an origin; and the depth camera measurement follows a normal Gaussian variable, i.e. there is
Figure BDA0003211191740000071
Z 0 For the position of k target points at a certain moment +.>
Figure BDA0003211191740000072
Mean value of normal distribution is mu 0 The variance of the normal distribution is +.>
Figure BDA0003211191740000073
First initialize the movementPosition and error covariance of the robot; in the state prediction stage, the depth camera is used for collecting the motion data of the robot, the value is used as the state value of the system at the current moment,
Figure BDA0003211191740000074
as a predicted value for the pose at the next moment; />
Figure BDA0003211191740000075
Covariance matrix sigma of (2) 0,k The calculation formula is as follows:
Figure BDA0003211191740000076
Figure BDA0003211191740000077
representing the variance of cycle 1 under normal distribution followed by depth camera measurements.
Similarly, let the depth information value measured by the laser radar be Z 1,k ={P’ 1 ,...,P’ k } N The method comprises the steps of carrying out a first treatment on the surface of the Wherein N represents the N-th period, P' k Representing the position of the kth target point, P '= (x' 2 +y’ 2 +z’ 2 ) 1/2 X ', y ', z ' are coordinate values of a target point with the laser radar as an origin; and the depth camera measurement follows a normal Gaussian variable, i.e. there is
Figure BDA0003211191740000078
The covariance matrix is shown as formula (2);
Figure BDA0003211191740000079
the measured values of the two groups of sensors are subjected to Kalman fusion, and the obtained result is represented by the formula Z 0,k ={P 1 ,...,P k } N And Z 1,k ={P’ 1 ,...,P’ k } N Calculation of the observed vector in Kalman Filter
Figure BDA00032111917400000710
Has the following components
Figure BDA00032111917400000711
Covariance is:
Figure BDA00032111917400000712
0 N representing an N-order zero matrix
Kalman prediction equation has A k =0 N And B k =I N
Figure BDA00032111917400000713
I N Is an identity matrix of order N,
Figure BDA0003211191740000081
Figure BDA0003211191740000082
the Kalman update equation has
Figure BDA0003211191740000083
Figure BDA0003211191740000084
/>
N|N =(I-K N H k )∑ N|N-1 (9)
Wherein:
sigma represents the combined matrix of the two sets of covariances of formulas (1) and (2);
K k representative is the kalman gain factor;
N|N represented is a variance matrix update equation;
Figure BDA0003211191740000085
representing the predicted observed value, namely, equation (5) is a prediction equation;
Σn|n-1 represents the prediction covariance;
A k and B k Is a system parameter, specifically, A k State transition matrix, B k Is a control matrix;
u k a control amount indicating the running belt speed and the running belt inclination;
so that the optimal estimated value obtained by calculating the sensor data fusion by using the Kalman gain is finally obtained
Figure BDA0003211191740000086
Calculated by the formula (8)>
Figure BDA0003211191740000087
In this embodiment, rather than relying on a single group of measurements, the observations of the two sensor groups are fused: the more accurate data of the two sensors are selected, single-sensor data are not adopted wholesale, the two data sets are cross-checked through Kalman fusion, and more accurate motion-posture data of the legged robot are obtained. In addition, taking the laser-radar and depth-camera measurements as observations and the initial depth-camera measurement as the state value facilitates the subsequent data fusion.
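As a rough numerical check of the fusion behavior, the sketch below reuses the kalman_fuse function given earlier with synthetic data. Here u is taken as the previous estimate so that the prediction acts as a random-walk prior, whereas the patent feeds the belt speed/incline commands as u_k; all noise levels and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 50, 3                                  # periods, state size (x, y, z)
truth = np.cumsum(rng.normal(0.0, 0.01, (N, n)), axis=0)
cam = truth + rng.normal(0.0, 0.05, (N, n))   # camera: larger noise
lid = truth + rng.normal(0.0, 0.005, (N, n))  # lidar: ~1/1000-class noise

x, P = cam[0], np.eye(n) * 0.05**2            # initialize from the camera
errors = []
for k in range(1, N):
    x, P = kalman_fuse(x, P, u=x,
                       z_cam=cam[k], z_lidar=lid[k],
                       R_cam=np.eye(n) * 0.05**2,
                       R_lidar=np.eye(n) * 0.005**2,
                       Q=np.eye(n) * 1e-4)
    errors.append(np.linalg.norm(x - truth[k]))
print(f"mean fused position error: {np.mean(errors):.4f} m")
```

With these settings the fused error stays close to the lidar's noise floor while remaining robust to camera drift, which matches the cross-checking rationale above.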
S4: calculating the speed and the acceleration of the foot robot by using the optimal estimated value, and constructing a three-dimensional posture model of the real-time posture of the foot robot;
the step of calculating the speed of the foot robot comprises:
s41: presetting a fixed time period T, and acquiring the period of time of the foot starting robotPosition a on the running belt 0 Position a of periodic end-foot robot on running belt t Speed V of the running belt in period T 0 The position a 0 And a t Respectively obtaining the optimal estimated value at the moment;
s42: calculating the relative speed V of the robot and the running belt 1 =(a t -a 0 ) Calculating the actual movement speed V of the robot, and if the robot moves backwards along the running direction of the running belt, V=V 0 -V 1 If the robot is advanced against the running direction of the running belt, v=v 0 +V 1 . According to the relative speed V of the calculation robot and the running belt 1 The speed of the running belt of the robot running machine is regulated.
The acceleration is calculated by the ratio of the change in velocity over a period of time to the interval time.
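Steps S41–S42 and the acceleration rule translate directly into code; the following is a minimal sketch in which the function names and the direction flag are illustrative.

```python
def robot_speed(a0, at, T, V0, moving_backward_with_belt):
    """Actual robot speed V per step S42.

    a0, at: fused belt positions at the start and end of period T;
    V0: belt speed during the period. If the robot moves backward along the
    belt's running direction, V = V0 - V1; if it advances against the belt's
    running direction, V = V0 + V1.
    """
    V1 = (at - a0) / T  # relative speed of robot and belt
    return V0 - V1 if moving_backward_with_belt else V0 + V1

def robot_acceleration(v_prev, v_curr, dt):
    """Acceleration as the velocity change over the interval time."""
    return (v_curr - v_prev) / dt
```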
The three-dimensional posture model is constructed on the Webots simulation platform.
S5: and controlling the running belt speed and the running belt inclination angle of the running machine according to the speed, the acceleration and the three-dimensional gesture model data, so that the speed of the running belt is self-adaptive to the speed of the four-foot robot.
The inclination angle of the running belt is controlled through a plurality of push rods arranged below the belt; there are four push rods, arranged at the four corners below the belt. The data to be input into the robot model of the Webots simulation platform to construct the three-dimensional posture model comprise: the actual size data of the legged robot, its gyroscope data, and its acceleration and speed data.
In this embodiment, the gait acquisition module collects several kinds of motion data from the legged robot and fuses them, which avoids data drift and error accumulation, so the robot's running state is captured more accurately. The treadmill's inclination angle and belt speed are adapted to the robot's state according to its speed, acceleration and three-dimensional posture model data, so more accurate real running indexes are measured and gait debugging is guided. Researchers can also directly control the belt inclination, providing richer test scenarios for robot tests without an oversized test field, saving cost and reducing the labor intensity of testing.
Example 3
A treadmill control system of an adaptive robot comprises: a gait acquisition module, a data fusion module, a model construction module, a treadmill control module, a running belt and an angle control module. The gait acquisition module acquires multiple kinds of motion data of the legged robot; the data fusion module fuses the motion data to obtain an optimal estimate; the model construction module calculates the robot's speed and acceleration from the optimal estimate and constructs a three-dimensional posture model of the robot's real-time posture; the treadmill control module is connected with the running belt and the angle control module, the angle control module controlling the inclination angle of the running belt; and the treadmill control module outputs control instructions to the running belt and the angle control module according to the speed, acceleration and three-dimensional posture model data, controlling the belt's speed and inclination angle so that the treadmill adapts to the robot's state and accurate motion indexes of the robot are measured.
Further, the angle control module comprises one or more push rods arranged below the running belt; the inclination angle of the belt is changed by controlling the lengths of the push rods. In this embodiment, four push rods are provided below the four corners of the running belt.
The gait acquisition module comprises a laser radar and a depth camera, both connected with the data fusion module; the radar depth data collected by the laser radar and the camera depth data collected by the depth camera are transmitted to the data fusion module for fusion processing.
This embodiment further comprises a communication module for uploading the legged-robot data to a cloud or a remote computer.
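A minimal sketch of the communication module's upload, using only the Python standard library; the endpoint URL and payload fields are assumptions, since the patent only states that robot data are uploaded to a cloud or remote computer.

```python
import json
import urllib.request

def upload_gait_sample(sample: dict, url: str = "http://remote-host/api/gait"):
    """POST one fused gait/pose sample as JSON to an assumed remote endpoint."""
    req = urllib.request.Request(
        url,
        data=json.dumps(sample).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status  # HTTP status code returned by the remote computer
```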

Claims (7)

1. A treadmill control method of an adaptive robot, characterized in that the method comprises the following steps:
S1: the legged robot moves on a running belt of the treadmill;
S2: multiple kinds of motion data of the legged robot are acquired by a gait acquisition module;
S3: the motion data are fused to obtain an optimal estimate;
S4: the speed and acceleration of the legged robot are calculated from the optimal estimate, and a three-dimensional posture model of the robot's real-time posture is constructed;
S5: the belt speed and belt inclination angle of the treadmill are controlled according to the speed, acceleration and three-dimensional posture model data, so that the belt speed adapts to the speed of the quadruped robot;
in step S2, the gait acquisition module acquires radar depth data and camera depth data through a laser radar and a depth camera, respectively;
the motion data in step S3 are fused by Kalman fusion, the specific method of the Kalman fusion being as follows:

Let the $k$ target-point positions measured by the depth camera over the time range $0$–$N$ be $Z_{0,k}=\{P_1,\ldots,P_k\}_N$, where $N$ denotes the $N$-th period, $P_k$ is the position of the $k$-th target point, $P=(x^2+y^2+z^2)^{1/2}$, and $x$, $y$, $z$ are the coordinates of a target point taking the depth camera as origin; the depth-camera measurements follow a normal Gaussian distribution, i.e. $Z_0 \sim N(\mu_0, \sigma_0^2)$, where $Z_0$ is the position of the $k$ target points at a given moment, $\mu_0$ is the mean of the normal distribution and $\sigma_0^2$ is its variance;

first, the position and error covariance of the mobile robot are initialized; in the state-prediction stage, the motion data of the robot collected by the depth camera are taken as the system state at the current moment, with $\hat{x}_{N|N-1}$ as the predicted value of the pose at the next moment; the covariance matrix $\Sigma_{0,k}$ of $Z_{0,k}$ is computed as

$$\Sigma_{0,k} = \mathrm{diag}\big(\sigma_{0,1}^2,\ldots,\sigma_{0,k}^2\big) \quad (1)$$

where $\sigma_{0,1}^2$ denotes the variance of cycle 1 under the normal distribution followed by the depth-camera measurements;

similarly, let the depth values measured by the laser radar be $Z_{1,k}=\{P'_1,\ldots,P'_k\}_N$, where $N$ denotes the $N$-th period, $P'_k$ is the position of the $k$-th target point, $P'=(x'^2+y'^2+z'^2)^{1/2}$, and $x'$, $y'$, $z'$ are the coordinates of a target point taking the laser radar as origin; the laser-radar measurements likewise follow a normal Gaussian distribution, i.e. $Z_1 \sim N(\mu_1, \sigma_1^2)$, and the covariance matrix, analogous to formula (1), is

$$\Sigma_{1,k} = \mathrm{diag}\big(\sigma_{1,1}^2,\ldots,\sigma_{1,k}^2\big) \quad (2)$$

the measurements of the two sensor groups are Kalman-fused: from $Z_{0,k}=\{P_1,\ldots,P_k\}_N$ and $Z_{1,k}=\{P'_1,\ldots,P'_k\}_N$ the observation vector $\hat{Z}_N$ of the Kalman filter is

$$\hat{Z}_N = \begin{bmatrix} Z_{0,k} \\ Z_{1,k} \end{bmatrix} \quad (3)$$

with covariance

$$\Sigma = \begin{bmatrix} \Sigma_{0,k} & 0_N \\ 0_N & \Sigma_{1,k} \end{bmatrix} \quad (4)$$

where $0_N$ denotes the $N$-order zero matrix;

with $A_k = 0_N$ and $B_k = I_N$, $I_N$ being the $N$-order identity matrix, the Kalman prediction equations are

$$\hat{x}_{N|N-1} = A_k\,\hat{x}_{N-1|N-1} + B_k\,u_k \quad (5)$$

$$\Sigma_{N|N-1} = A_k\,\Sigma_{N-1|N-1}\,A_k^{\top} + Q_k \quad (6)$$

and the Kalman update equations are

$$K_N = \Sigma_{N|N-1} H_k^{\top}\big(H_k \Sigma_{N|N-1} H_k^{\top} + \Sigma\big)^{-1} \quad (7)$$

$$\hat{x}_{N|N} = \hat{x}_{N|N-1} + K_N\big(\hat{Z}_N - H_k\,\hat{x}_{N|N-1}\big) \quad (8)$$

$$\Sigma_{N|N} = (I - K_N H_k)\,\Sigma_{N|N-1} \quad (9)$$

wherein:
$\Sigma$ is the combined matrix of the two covariances of formulas (1) and (2);
$K_N$ is the Kalman gain coefficient;
$\Sigma_{N|N}$ is the updated variance matrix, equation (9) being the variance-matrix update equation;
$\hat{x}_{N|N-1}$ is the predicted value, equation (5) being the prediction equation;
$\Sigma_{N|N-1}$ is the prediction covariance;
$H_k$ is the observation matrix;
$Q_k$ is the process-noise covariance;
$A_k$ and $B_k$ are system parameters; specifically, $A_k$ is the state-transition matrix and $B_k$ is the control matrix;
$u_k$ is the control quantity of the running-belt speed and running-belt inclination;

so that the optimal estimate $\hat{x}_{N|N}$ obtained by fusing the sensor data with the Kalman gain is finally computed by formula (8).
2. The treadmill control method of an adaptive robot according to claim 1, wherein the step S4 of calculating the speed of the legged robot from the optimal estimate comprises:
S41: presetting a fixed time period $T$ and acquiring the position $a_0$ of the legged robot on the running belt at the start of the period, the position $a_t$ at the end of the period, and the belt speed $V_0$ during period $T$, the positions $a_0$ and $a_t$ being the optimal estimates at the respective moments;
S42: calculating the relative speed of the robot and the belt, $V_1=(a_t-a_0)/T$, and then the robot's actual movement speed $V$: if the robot moves backward along the running direction of the belt, $V=V_0-V_1$; if the robot advances against the running direction of the belt, $V=V_0+V_1$.
3. The treadmill control method of an adaptive robot according to claim 1, wherein the three-dimensional posture model in step S4 is constructed on the Webots simulation platform.
4. The treadmill control method of an adaptive robot according to claim 3, wherein the three-dimensional posture model is constructed as follows: acquiring the actual size data of the legged robot, its gyroscope data, and its acceleration and speed data, and inputting them into a robot model of the Webots simulation platform to obtain the three-dimensional posture model.
5. A treadmill control system of an adaptive robot, characterized by comprising: a gait acquisition module, a data fusion module, a model construction module, a treadmill control module, a running belt and an angle control module;
the gait acquisition module acquires multiple kinds of motion data of the legged robot; the data fusion module fuses the motion data to obtain an optimal estimate; the model construction module calculates the robot's speed and acceleration from the optimal estimate and constructs a three-dimensional posture model of the robot's real-time posture; the treadmill control module is connected with the running belt and the angle control module, the angle control module being used to control the inclination angle of the running belt; the treadmill control module outputs control instructions to the running belt and the angle control module according to the speed, acceleration and three-dimensional posture model data, controlling the belt's speed and inclination angle so that the treadmill adapts to the robot's state and accurate motion indexes of the robot are measured;
the gait acquisition module comprises a laser radar and a depth camera, both connected with the data fusion module; the radar depth data collected by the laser radar and the camera depth data collected by the depth camera are transmitted to the data fusion module for fusion processing;
the data fusion module adopts Kalman fusion, the specific method of the Kalman fusion being as follows:

Let the $k$ target-point positions measured by the depth camera over the time range $0$–$N$ be $Z_{0,k}=\{P_1,\ldots,P_k\}_N$, where $N$ denotes the $N$-th period, $P_k$ is the position of the $k$-th target point, $P=(x^2+y^2+z^2)^{1/2}$, and $x$, $y$, $z$ are the coordinates of a target point taking the depth camera as origin; the depth-camera measurements follow a normal Gaussian distribution, i.e. $Z_0 \sim N(\mu_0, \sigma_0^2)$, where $Z_0$ is the position of the $k$ target points at a given moment, $\mu_0$ is the mean of the normal distribution and $\sigma_0^2$ is its variance;

first, the position and error covariance of the mobile robot are initialized; in the state-prediction stage, the motion data of the robot collected by the depth camera are taken as the system state at the current moment, with $\hat{x}_{N|N-1}$ as the predicted value of the pose at the next moment; the covariance matrix $\Sigma_{0,k}$ of $Z_{0,k}$ is computed as

$$\Sigma_{0,k} = \mathrm{diag}\big(\sigma_{0,1}^2,\ldots,\sigma_{0,k}^2\big) \quad (1)$$

where $\sigma_{0,1}^2$ denotes the variance of cycle 1 under the normal distribution followed by the depth-camera measurements;

similarly, let the depth values measured by the laser radar be $Z_{1,k}=\{P'_1,\ldots,P'_k\}_N$, where $N$ denotes the $N$-th period, $P'_k$ is the position of the $k$-th target point, $P'=(x'^2+y'^2+z'^2)^{1/2}$, and $x'$, $y'$, $z'$ are the coordinates of a target point taking the laser radar as origin; the laser-radar measurements likewise follow a normal Gaussian distribution, i.e. $Z_1 \sim N(\mu_1, \sigma_1^2)$, and the covariance matrix, analogous to formula (1), is

$$\Sigma_{1,k} = \mathrm{diag}\big(\sigma_{1,1}^2,\ldots,\sigma_{1,k}^2\big) \quad (2)$$

the measurements of the two sensor groups are Kalman-fused: from $Z_{0,k}=\{P_1,\ldots,P_k\}_N$ and $Z_{1,k}=\{P'_1,\ldots,P'_k\}_N$ the observation vector $\hat{Z}_N$ of the Kalman filter is

$$\hat{Z}_N = \begin{bmatrix} Z_{0,k} \\ Z_{1,k} \end{bmatrix} \quad (3)$$

with covariance

$$\Sigma = \begin{bmatrix} \Sigma_{0,k} & 0_N \\ 0_N & \Sigma_{1,k} \end{bmatrix} \quad (4)$$

where $0_N$ denotes the $N$-order zero matrix;

with $A_k = 0_N$ and $B_k = I_N$, $I_N$ being the $N$-order identity matrix, the Kalman prediction equations are

$$\hat{x}_{N|N-1} = A_k\,\hat{x}_{N-1|N-1} + B_k\,u_k \quad (5)$$

$$\Sigma_{N|N-1} = A_k\,\Sigma_{N-1|N-1}\,A_k^{\top} + Q_k \quad (6)$$

and the Kalman update equations are

$$K_N = \Sigma_{N|N-1} H_k^{\top}\big(H_k \Sigma_{N|N-1} H_k^{\top} + \Sigma\big)^{-1} \quad (7)$$

$$\hat{x}_{N|N} = \hat{x}_{N|N-1} + K_N\big(\hat{Z}_N - H_k\,\hat{x}_{N|N-1}\big) \quad (8)$$

$$\Sigma_{N|N} = (I - K_N H_k)\,\Sigma_{N|N-1} \quad (9)$$

wherein:
$\Sigma$ is the combined matrix of the two covariances of formulas (1) and (2);
$K_N$ is the Kalman gain coefficient;
$\Sigma_{N|N}$ is the updated variance matrix, equation (9) being the variance-matrix update equation;
$\hat{x}_{N|N-1}$ is the predicted value, equation (5) being the prediction equation;
$\Sigma_{N|N-1}$ is the prediction covariance;
$H_k$ is the observation matrix;
$Q_k$ is the process-noise covariance;
$A_k$ and $B_k$ are system parameters; specifically, $A_k$ is the state-transition matrix and $B_k$ is the control matrix;
$u_k$ is the control quantity of the running-belt speed and running-belt inclination;

so that the optimal estimate $\hat{x}_{N|N}$ obtained by fusing the sensor data with the Kalman gain is finally computed by formula (8).
6. The treadmill control system of an adaptive robot according to claim 5, wherein the angle control module controls the inclination angle of the running belt through one or more push rods arranged below the running belt.
7. The treadmill control system of an adaptive robot according to claim 6, wherein there are four push rods, arranged respectively below the four corners of the running belt.
CN202110930741.XA 2021-08-13 2021-08-13 Treadmill control method and system for an adaptive robot Active CN113641103B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110930741.XA CN113641103B (en) 2021-08-13 2021-08-13 Treadmill control method and system for an adaptive robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110930741.XA CN113641103B (en) 2021-08-13 2021-08-13 Treadmill control method and system for an adaptive robot

Publications (2)

Publication Number Publication Date
CN113641103A CN113641103A (en) 2021-11-12
CN113641103B 2023-04-25

Family

ID=78421514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110930741.XA Active CN113641103B (en) 2021-08-13 2021-08-13 Treadmill control method and system for an adaptive robot

Country Status (1)

Country Link
CN (1) CN113641103B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114770601B (en) * 2022-05-11 2023-03-21 武汉科技大学 Foot type robot motion experiment table
CN115937895B (en) * 2022-11-11 2023-09-19 南通大学 Speed and strength feedback system based on depth camera
CN117032285B (en) * 2023-08-18 2024-03-29 五八智能科技(杭州)有限公司 Foot type robot movement method and system

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102015222119A1 (en) * 2015-11-10 2017-05-11 Robert Bosch Gmbh Control for a treadmill with a control unit and with a laser distance sensor
CN106323662A (en) * 2016-09-18 2017-01-11 吉林大学 Multi-pavement motion mechanics observation system for foot type walking objects
CN106648116A (en) * 2017-01-22 2017-05-10 隋文涛 Virtual reality integrated system based on action capture
CN108114405A (en) * 2017-12-20 2018-06-05 中国科学院合肥物质科学研究院 Treadmill Adaptable System based on 3D depth cameras and flexible force sensitive sensor
CN108836757A (en) * 2018-07-09 2018-11-20 浙江大学城市学院 A kind of assisted walk exoskeleton robot system with self-regulation
WO2020053711A1 (en) * 2018-09-13 2020-03-19 Tecnobody S.R.L. Integrated method and system for the dynamic control of the speed of a treadmill
CN110147106A (en) * 2019-05-29 2019-08-20 福建(泉州)哈工大工程技术研究院 Has the intelligent Mobile Service robot of laser and vision fusion obstacle avoidance system
CN110270053A (en) * 2019-07-04 2019-09-24 杭州启望科技有限公司 A kind of treadmill method for testing motion and device based on laser sensor
CN111366153A (en) * 2020-03-19 2020-07-03 浙江大学 Positioning method for tight coupling of laser radar and IMU
CN112807617A (en) * 2021-02-22 2021-05-18 苏州进动智能科技有限公司 Running safety monitoring and guiding method and equipment based on three-dimensional camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Sumit Hazra et al., "Novel data fusion strategy for human gait analysis using multiple Kinect sensors," Biomedical Signal Processing and Control, 2021, vol. 67, pp. 1-14. *

Also Published As

Publication number Publication date
CN113641103A (en) 2021-11-12

Similar Documents

Publication Publication Date Title
CN113641103B (en) Treadmill control method and system for an adaptive robot
CN107515621B (en) Line patrol unmanned aerial vehicle flight trajectory control method based on power transmission line electromagnetic sensing
CN104808590B (en) Mobile robot visual servo control method based on key frame strategy
CN108427282A (en) A kind of solution of Inverse Kinematics method based on learning from instruction
CN109655059B (en) Vision-inertia fusion navigation system and method based on theta-increment learning
CN109323695A (en) A kind of indoor orientation method based on adaptive Unscented kalman filtering
CN110298854A (en) The snakelike arm co-located method of flight based on online adaptive and monocular vision
CN113910218B (en) Robot calibration method and device based on kinematic and deep neural network fusion
CN109916396A (en) A kind of indoor orientation method based on multidimensional Geomagnetism Information
CN108959713A (en) Target range and face positional shift measurement method based on convolutional neural networks
US20230278214A1 (en) Robot localization using variance sampling
CN105173102B (en) A kind of quadrotor stability augmentation system based on many images and method
Shao et al. Research on target tracking system of quadrotor uav based on monocular vision
Yang et al. Robust navigation method for wearable human–machine interaction system based on deep learning
Wang et al. Arbitrary spatial trajectory reconstruction based on a single inertial sensor
CN107124761B (en) Cellular network wireless positioning method fusing PSO and SS-ELM
CN105807792A (en) On-chip controller of scanning ion conductance microscope and control method
CN109031339A (en) A kind of three-dimensional point cloud motion compensation process
CN112307917A (en) Indoor positioning method integrating visual odometer and IMU
Wang et al. G-ROBOT: An intelligent greenhouse seedling height inspection robot
Chen et al. Learning trajectories for visual-inertial system calibration via model-based heuristic deep reinforcement learning
CN111158363A (en) Macro-micro navigation method based on control time sequence combination
CN112232484B (en) Space debris identification and capture method and system based on brain-like neural network
CN112907644B (en) Machine map-oriented visual positioning method
Nguyen et al. Development of a smart shoe for building a real-time 3d map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant