CN109333534B - Preplanned real-time gait control algorithm - Google Patents

Preplanned real-time gait control algorithm

Info

Publication number: CN109333534B (application CN201811239754.7A)
Authority: CN (China)
Prior art keywords: joint, robot, obstacle, coordinate system, axis
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN109333534A
Inventors: 丁宇, 杜玉晓, 黄修平, 卢冠雄, 陈梓瀚
Current and original assignee: Guangdong University of Technology (the listed assignees may be inaccurate)
Application filed by Guangdong University of Technology
Priority to CN201811239754.7A
Publication of CN109333534A
Application granted
Publication of CN109333534B

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1628: Programme controls characterised by the control loop
    • B25J9/163: Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • B25J9/1679: Programme controls characterised by the tasks executed
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems


Abstract

The invention discloses a preplanned real-time gait control algorithm. Under the control of a biped robot controller, a kinematic coordinate system model of the biped robot is established first; the posture planning and stability judgment of the robot are then carried out; finally, gait walking is completed, and the next posture is planned in real time to ensure the walking stability of the robot. Regarding algorithm complexity, the required amount of information is small and the method is simple; regarding timeliness, the small information requirement and low implementation complexity reduce the algorithm's processing time. In practical application, the algorithm combines the actual characteristics of the robot and the terrain: the action of each step of the robot is designed in advance so that the robot completes the corresponding action according to plan, and obstacle avoidance is performed by acquiring image information ahead in real time, ensuring the walking success rate.

Description

Preplanned real-time gait control algorithm
Technical Field
The invention relates to the technical field of gait algorithms, in particular to a preplanned real-time gait control algorithm.
Background
The robot integrates the latest research results of many disciplines, represents the highest technical level of mechatronic integration, and provides practical opportunities for the field of artificial intelligence. The biped robot is one of the robots with the highest flexibility and adaptability, has the potential to replace humans working in harsh environments, and has become a hot topic of research at home and abroad. Gait planning helps improve a robot's ability to walk stably, so research in this area is of great significance. Common gait algorithms at present include the compass gait algorithm, the three-dimensional linear inverted pendulum model, and capture point theory; in summary, the prior art can be divided into two types. The first type uses a simple, easily computed model but has poor stability; the second type has strong stability and low energy consumption, but the calculation is complicated and the computational load is huge.
Therefore, how to provide a gait algorithm with simple calculation and strong stability becomes a problem to be solved by those skilled in the art at present.
Disclosure of Invention
The invention aims to provide a preplanned real-time gait control algorithm that is kept as simple as possible while ensuring stability.
In order to realize the task, the invention adopts the following technical scheme:
a pre-programmed real-time gait control algorithm comprising the steps of:
step 1, establishing a biped robot kinematic coordinate system model
A steering engine for driving the joint to rotate is arranged at each joint of the biped robot body; the ankle joint of each robot leg has a forward joint and a lateral joint, the knee joint has a forward joint, and the hip joint has a forward joint, a lateral joint and a horizontal rotating joint;
step 1.1, number each joint of the robot and determine an origin for each joint i according to the following rules:
when the axis of joint i intersects the axis of joint i+1, take the intersection point as the origin of joint i; when the axis of joint i and the axis of joint i+1 are skew, take the intersection of their common perpendicular with the axis of joint i+1 as the origin of joint i; when the axis of joint i is parallel to the axis of joint i+1, take the intersection of the axis of joint i+1 with the common perpendicular of the axes of joint i+1 and joint i+2 as the origin of joint i, denoted O_i;
Step 1.2, determine the coordinate axes X_i, Y_i, Z_i of the coordinate system at joint i, establishing a coordinate system at each joint according to the following rules:
the Z_i axis coincides with the axis of joint i+1; the X_i axis lies along the common perpendicular of the axes of joint i and joint i+1 and points toward joint i+1; the Y_i axis is determined by the right-hand rule;
step 1.3, compare each pair of adjacent coordinate systems O_i and O_{i-1} established at the joints and obtain the difference between them, namely a transformation matrix;
the kinematic relationship between two adjacent coordinate systems is obtained through the transformation matrix; taking the coordinate system of the left knee joint of the biped robot as the reference coordinate system, the kinematic relationships between the other joint coordinate systems and the reference coordinate system are obtained; the transformation matrix between the coordinate system at each joint and the reference coordinate system serves as the mathematical model between that joint and the joint where the reference coordinate system is located, and all these mathematical models together form the biped robot kinematic coordinate system model;
step 2, planning the postures of the biped robot
Step 2.1, designing a walking gait process of the biped robot
The robot shifts its center of gravity by swinging a leg forward; when a leg swings forward, the body's center of gravity moves from between the two legs to the supporting leg, so the body first twists toward the supporting-leg side to shift the center of gravity, i.e., the center of gravity is adjusted by the lateral joints of the hip; to prevent the body from falling toward the swing-leg side when the leg is lifted during the side twist, the side twist and the leg lift are designed as separate stages in the gait design; the joint gait plan of the robot during one step is as follows:
first, the center of gravity is adjusted by twisting to the right with the lateral joints of the right hip and right ankle, and the side twist is held in place; then the ankle joint, knee joint and forward hip joint of the left leg are bent to lift the left leg; putting the left leg down performs the opposite action, i.e., the hip and ankle of the right leg twist to the left, and then the ankle joint, knee joint and forward hip joint of the left leg are bent;
step 2.2, planning the postures of all joints when the robot walks
First, assuming that the lateral direction of the hip joint does not change, the walking trajectory is solved in the following steps:
(1) plot curves of the hip-joint and ankle-joint angles against time, thereby obtaining the kinematic trajectories of the hip and ankle joints;
(2) compare with the human walking trajectory: determine the key stress points of the motion of the robot's hip and ankle joints from the force conditions during human walking, and then interpolate all the key points by cubic spline interpolation;
(3) fit the processed key points of the hip and ankle joints with the polynomials of the kinematic trajectories from step (1) to obtain smooth hip and ankle trajectories; these are the trajectories formed by the hip and ankle joints over each step;
the trajectories formed by the other joints of the robot's two legs during walking are obtained in the same way; the trajectories of the joints involved in one step of the biped robot are taken as the planned posture, and once the planned posture is obtained, the steering engines at the joints drive the robot to walk according to it.
Further, the pre-planned real-time gait control algorithm further comprises:
step 3, judging the stability of the attitude planning
According to the planned posture of step 2, a stability calculation is performed each time the robot is about to set a foot down, to judge whether the robot currently meets the stability requirement: judge whether the robot's stability point lies within a set range; if so, do not intervene and let the robot walk according to the planned posture; otherwise stop walking, let the robot's controller adjust the posture of the leg joints, and recalculate the stability until the requirement is met. The stability point is calculated as follows:
calculating the centroid position of the robot:

COM = ( Σ_{i=1}^{n} m_i · p_i ) / ( Σ_{i=1}^{n} m_i )

where n is the number of joints of the whole robot, m_i is the mass of the i-th joint, and p_i is the centroid position of the i-th joint; COM_i denotes the centroid position of the robot at the i-th sampling instant, the current sample being the i-th;
calculating the stability point W:

W = -a + COM

where

a = ( COM_i - 2·COM_{i-1} + COM_{i-2} ) / t_s²

in the above formula, t_s is the sampling period; COM_i, COM_{i-1} and COM_{i-2} are the centroid positions of the robot at the i-th, (i-1)-th and (i-2)-th samplings respectively, and COM is the centroid position of the robot when standing still;
judge the position of the stability point W: if W is within the set range, the robot is considered to walk stably; otherwise the robot is judged unstable.
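The stability judgment above can be sketched as follows; this is a minimal illustration, with the acceleration term a taken as the second finite difference of the sampled centroid (an assumed reading of the patent's figures), not the patent's exact implementation:

```python
import numpy as np

def centroid(masses, positions):
    """Robot centroid: COM = sum(m_i * p_i) / sum(m_i) over all n joints."""
    masses = np.asarray(masses, dtype=float)
    positions = np.asarray(positions, dtype=float)  # shape (n, 3)
    return (masses[:, None] * positions).sum(axis=0) / masses.sum()

def stability_point(com_i, com_im1, com_im2, com_static, ts):
    """W = -a + COM, with a estimated from the last three centroid samples."""
    a = (com_i - 2.0 * com_im1 + com_im2) / ts ** 2
    return com_static - a

def is_stable(w, lower, upper):
    """The robot is judged stable when W lies inside the set range."""
    return bool(np.all((w >= lower) & (w <= upper)))
```

If `is_stable` fails, walking would stop and the controller would adjust the leg-joint postures before recomputing, as described above.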
Further, when the robot walks, the judgment and obstacle avoidance of the obstacle are carried out according to the following method:
step 4, installing a binocular camera on the head of the robot, wherein the binocular camera comprises a left monocular camera and a right monocular camera and is used for collecting image information in front of the robot;
calibrating a left camera and a right camera of the binocular camera respectively, and calculating plane equations of the ground under a left camera coordinate system and a right camera coordinate system respectively;
step 5, acquiring a front image of the robot through a binocular camera;
step 6, perform distortion and epipolar correction on the two images acquired by the left and right cameras; eliminating distortion and constraining the matching points to a single line reduces mismatches and greatly shortens matching time; finally, match to obtain a disparity map;
step 7, calculating three-dimensional coordinates (x, y, z) of the matching points in the disparity map under a left camera coordinate system, acquiring the height and the horizontal distance of the matching points, comparing the height and the horizontal distance with a set threshold value, and judging whether an obstacle exists or not;
detect the contour of the object using the cvFindContours contour detection function of the OpenCV vision library, and mark the contour with its circumscribed rectangular frame;
the obstacle judgment algorithm is as follows: first, randomly extract several white pixel points on the object inside the circumscribed rectangular frame of the disparity map and calculate their three-dimensional coordinates (x, y, z) in the left-camera coordinate system, where x and z respectively denote the height and the horizontal distance of the object relative to the left camera, and y denotes the left-right offset of the object relative to the left camera's optical center; then judge whether the object in the rectangular frame is an obstacle according to set thresholds;
step 8, if it is judged that there is no obstacle ahead, the robot travels along the planned path; if an obstacle is judged to be ahead, the robot stops moving and the obstacle's parameter information is acquired;
and 9, planning an obstacle avoidance path.
Further, the algorithm for discriminating the obstacle specifically includes:
step 7.1, randomly extract several white pixel points i = 1, 2, 3, ... inside the circumscribed rectangular frame of the disparity map, calculate their three-dimensional coordinates d_i(x_i, y_i, z_i), and sort them by horizontal distance z_i;
step 7.2, take the median z_0 of the horizontal distances z_i of all white pixel points and set a threshold φ for the horizontal distance; if a white pixel point i satisfies |z_i - z_0| > φ, remove it as an outlier; the remaining white pixel points form a set I;
step 7.3, calculate the average z_avg of the horizontal distances z_i of the white pixel points in set I as the horizontal distance of the circumscribed rectangular frame, i.e., the distance between the obstacle and the robot; take the maximum x_max of the x_i in the three-dimensional coordinates of the white pixel points in set I as the height of the obstacle;
step 7.4, set a height threshold x_d and a horizontal distance threshold z_d; when

x_max > x_d and z_avg < z_d

are both satisfied, the object in the circumscribed rectangular frame is determined to be an obstacle.
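Steps 7.1 to 7.4 can be condensed into a short routine; the direction of the distance comparison (z_avg below the threshold z_d) is an assumption, since the patent's condition is given only as a figure:

```python
import numpy as np

def judge_obstacle(points, phi, x_d, z_d):
    """points: (n, 3) array of sampled (x_i, y_i, z_i), where x is height
    and z is horizontal distance in the left-camera frame.
    Returns (is_obstacle, distance, height)."""
    pts = np.asarray(points, dtype=float)
    z = pts[:, 2]
    z0 = np.median(z)                     # step 7.2: median horizontal distance
    kept = pts[np.abs(z - z0) <= phi]     # drop outliers with |z_i - z0| > phi
    z_mean = kept[:, 2].mean()            # step 7.3: obstacle distance
    x_max = kept[:, 0].max()              # step 7.3: obstacle height
    # step 7.4: tall enough and close enough (distance test assumed)
    return (x_max > x_d) and (z_mean < z_d), z_mean, x_max
```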
Further, if it is judged that there is no obstacle ahead, the robot travels along the planned path; if an obstacle is judged to be ahead, the robot stops and the obstacle's parameter information is acquired, specifically:
step 8.1, take the width of the obstacle's circumscribed rectangular frame as the obstacle width, denoted Q, and the center of the circumscribed rectangular frame as the obstacle's center point, denoted O;
step 8.2, obtain the three-dimensional coordinates of O by depth detection, whose y component is the offset distance of the center point relative to the left camera's optical center; also obtain the coordinates (x_L, y_L, z_L) of the leftmost white pixel point in the circumscribed rectangular frame in the left-camera coordinate system;
Step 8.3, from the obstacle width, calculate the lateral coordinate of the obstacle center point O, i.e., the offset distance L_O of point O relative to the left camera's optical center:

L_O = y_L + Q/2
Step 8.4, since the binocular camera is mounted on the robot's central axis, the offset distance d of the obstacle center point O relative to the robot's central axis is:

d = L_O - B/2

where B is the binocular baseline; d > 0 indicates the obstacle is offset to the right, d < 0 to the left, and d = 0 centered.
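Steps 8.3 and 8.4 reduce to two lines. The half-baseline correction from the left camera to the rig centre is an assumption here, since the patent's exact expression is given only as a figure:

```python
def obstacle_offset(y_left, width_q, baseline_b):
    """Lateral position of the obstacle centre O.
    y_left     -- offset of the leftmost bounding-box pixel w.r.t. the
                  left camera's optical centre (y_L in the text)
    width_q    -- obstacle width Q from the circumscribed rectangle
    baseline_b -- binocular baseline (rig centre assumed on the robot axis)
    Returns (L_O, d); d > 0 means the obstacle lies to the right of the
    robot's central axis, d < 0 to the left, d == 0 centred."""
    L_O = y_left + width_q / 2.0      # centre = left edge + half the width
    d = L_O - baseline_b / 2.0        # shift from left camera to rig centre
    return L_O, d
```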
Further, the planning of the obstacle avoidance path specifically includes:
step 9.1, set the length of the obstacle as W;
step 9.2, when the obstacle is offset to the left relative to the robot, the robot turns right by an angle θ and moves straight a distance X onto a parallel path at a safe distance; when the line connecting the robot's center and the obstacle's center is perpendicular to the planned path, the robot turns left by 2θ and moves straight a distance X, returning to the planned path;
for the obstacle avoidance process of the robot:

θ = arctan( (|y_L| + L/2 + d) / D )

X = D / cos θ

In the above formulas, L is the body width of the robot; θ and 2θ are the robot's turning angles; D is the distance between the obstacle and the robot; y_L is the offset distance of the obstacle's left boundary relative to the robot's center; d is a compensation distance added to account for measurement error and safety.
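Under the detour geometry described in step 9.2 (a lateral clearance of |y_L| + L/2 + d must be gained while covering the forward distance D), the turn angle and straight distance can be computed as below; the patent's exact formulas are given only as figures, so this is an assumed reconstruction:

```python
import math

def avoidance_plan(D, y_L, body_width_L, comp_d):
    """Turn angle theta and straight distance X for one leg of the detour.
    After travelling X at heading theta, the lateral displacement
    X*sin(theta) equals the required clearance and the forward displacement
    X*cos(theta) equals D."""
    clearance = abs(y_L) + body_width_L / 2.0 + comp_d
    theta = math.atan2(clearance, D)   # theta = arctan(clearance / D)
    X = D / math.cos(theta)            # hypotenuse of the detour triangle
    return theta, X
```

The robot would turn by theta, advance X, turn back by 2·theta, and advance X again to rejoin the planned path.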
Compared with the prior art, the invention has the following technical characteristics:
1. Regarding algorithm complexity, the required amount of information is small and the method is simple; regarding timeliness, the invention needs little information and has low implementation complexity, which reduces the algorithm's processing time.
2. In practical application, the algorithm combines the actual characteristics of the robot and the terrain: the action of each step of the robot is designed in advance so that the robot completes the corresponding action according to plan, and obstacle avoidance is performed by acquiring image information ahead in real time, ensuring the walking success rate.
3. The algorithm offers strong gait stability and simple calculation, accords with engineering practice, is highly practical in real operation, and provides guidance for the field of biped-robot gait algorithms.
Drawings
FIG. 1 is a schematic flow chart of the algorithm of the present invention;
fig. 2 (a) and (b) are respectively a schematic diagram of the physical structure of the robot and a schematic diagram of the installation position of a steering engine on the robot;
FIG. 3 is a schematic diagram of the algorithm for establishing a coordinate system at the joint;
FIG. 4 is a schematic view of the structure of the joint portion;
FIG. 5 is a schematic diagram of the different gait phases of the robot walking forward;
FIG. 6 is a schematic view of a robot leg deflecting angle during walking;
fig. 7 is a schematic diagram of the robot foot stress.
Fig. 8 is a schematic diagram of obstacle contour detection in an obstacle avoidance algorithm;
fig. 9 is a schematic diagram of obstacle avoidance path planning of the obstacle avoidance algorithm.
Detailed Description
As shown in fig. 1, the basic idea of the present invention is to establish a biped robot kinematic coordinate system model under the control of a biped robot controller, then perform posture planning and stability judgment of the robot, finally complete gait walking, and plan the next posture in real time to ensure the stability of the robot walking. A pre-programmed real-time gait control algorithm comprising the steps of:
step 1, establishing a biped robot kinematic coordinate system model
A steering engine for driving the joint to rotate is arranged at each joint of the biped robot. As shown in (a) and (b) of fig. 2, the biped robot of this embodiment has 20 steering engines in total; several joints are distributed on each leg of the robot, and driven by the steering engines each leg has 6 degrees of freedom, namely: the ankle joint has two degrees of freedom, forward and lateral, called the forward and lateral joints of the ankle joint (15, 17 and 16, 18 in fig. 2); the knee has one forward degree of freedom (13, 14 in fig. 2), called the forward joint of the knee; the hip joint has three degrees of freedom, forward, lateral and horizontal rotation, called the forward, lateral and horizontal rotating joints of the hip joint (7, 9, 11 and 8, 10, 12 in fig. 2). In the upper half of the robot, the elbow joint has one degree of freedom (5, 6 in fig. 2), the shoulder joint has two degrees of freedom (1, 3 and 2, 4 in fig. 2), the neck joint has one degree of freedom (19 in fig. 2) and the head has one degree of freedom (20 in fig. 2).
The skeleton of the biped robot is composed of movably connected links, and the junction of two adjacent links is called a joint, as shown in fig. 4. A link is a part of the robot's body; for example, with the robot's elbow as a joint, the forearm and the upper arm are each regarded as a link. On the biped robot body, adjacent links are movably connected so that the angle between them can change; for example, the connection between adjacent links forms a rotating-shaft structure. A steering engine, for example a servo motor, is installed at each joint to drive the joint's rotation.
The idea of this step is to attach a coordinate system to each joint of the robot and to realize the transformation of coordinates between the two links at a joint through coordinate-system transformations. For the multi-joint robot of this scheme, each joint connects two links, so the robot can be regarded as a multi-link system, and the relations between the coordinate systems can be established by repeated coordinate transformations, forming a kinematic coordinate system model. The specific steps are as follows:
step 1.1, number each joint of the robot and determine an origin for each joint i according to the following rules:
when the axis of joint i (the joint's rotation axis) intersects the axis of joint i+1, take the intersection point as the origin of joint i; when the two axes are skew, take the intersection of their common perpendicular with the axis of joint i+1 as the origin of joint i; when the axis of joint i is parallel to the axis of joint i+1, take the intersection of the axis of joint i+1 with the common perpendicular of the axes of joint i+1 and joint i+2 as the origin of joint i, denoted O_i;
Step 1.2, determine the coordinate axes X_i, Y_i, Z_i of the coordinate system at joint i, establishing a coordinate system at each joint according to the following rules:
the Z_i axis coincides with the axis of joint i+1; the X_i axis lies along the common perpendicular of the axes of joint i and joint i+1 and points toward joint i+1; the Y_i axis is determined by the right-hand rule.
As shown in fig. 3, this embodiment uses the coordinate system O_0-X_0Y_0Z_0 of the left knee joint (forward joint) of the biped robot as the reference coordinate system; the left ankle joint coordinate system is O_6-X_6Y_6Z_6 and the right knee joint coordinate system is O_12-X_12Y_12Z_12. The latter two are virtual coordinate systems, providing assistance and reference only for the calculation results.
According to the method, a coordinate system is established at each joint of the robot.
Step 1.3, compare each pair of adjacent coordinate systems O_i and O_{i-1} (i.e., the coordinate system at joint i and the coordinate system at joint i-1) established at the joints. The difference between them is described by the parameters α_{i-1}, a_{i-1}, d_i and θ_i, and the transformation matrix between the adjacent coordinate systems O_{i-1} and O_i is:

    [ cosθ_i              -sinθ_i              0            a_{i-1}         ]
    [ sinθ_i·cosα_{i-1}    cosθ_i·cosα_{i-1}   -sinα_{i-1}  -d_i·sinα_{i-1} ]
    [ sinθ_i·sinα_{i-1}    cosθ_i·sinα_{i-1}    cosα_{i-1}   d_i·cosα_{i-1} ]
    [ 0                    0                    0            1               ]

In the above matrix, α_{i-1} is the angle between axes Z_{i-1} and Z_i, a_{i-1} is the distance between axes Z_{i-1} and Z_i along X_{i-1}, d_i is the distance from X_{i-1} to X_i along Z_i, and θ_i is the rotation angle from X_{i-1} to X_i. Since each joint i corresponds to one link, the kinematic relationship between adjacent links can be found from this matrix.
The kinematic relationship between two adjacent coordinate systems is obtained through the transformation matrix; the kinematic relationship between the left ankle joint coordinate system O_6-X_6Y_6Z_6 and the reference coordinate system O_0-X_0Y_0Z_0 is then obtained, and the kinematic relationships between the other joint coordinate systems and the reference coordinate system can be obtained by the same method.
In the scheme, a transformation matrix between a coordinate system at each joint and a reference coordinate system is used as a mathematical model between the joint and the joint of the reference coordinate system, and all the mathematical models form a kinematic coordinate system model of the biped robot.
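The chain of joint transforms in step 1.3 can be sketched as follows, using the modified Denavit-Hartenberg convention that matches the coordinate-frame rules of steps 1.1 and 1.2; the concrete parameter values are illustrative, not taken from the patent:

```python
import numpy as np

def dh_transform(alpha_prev, a_prev, d, theta):
    """Homogeneous transform from frame O_{i-1} to frame O_i."""
    ca, sa = np.cos(alpha_prev), np.sin(alpha_prev)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,  a_prev],
        [st * ca,  ct * ca, -sa,  -d * sa],
        [st * sa,  ct * sa,  ca,   d * ca],
        [0.0,      0.0,      0.0,  1.0],
    ])

def to_reference(dh_params):
    """Compose successive joint transforms to relate a joint frame to the
    reference frame (the left-knee frame O_0 in the text)."""
    T = np.eye(4)
    for alpha_prev, a_prev, d, theta in dh_params:
        T = T @ dh_transform(alpha_prev, a_prev, d, theta)
    return T
```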
Step 2, planning the postures of the biped robot
In this scheme, the posture planning of the humanoid robot is built on the established robot kinematic coordinate system model; that is, the planned postures are expressed by the coordinates of each joint in the kinematic coordinate system model established in step 1. The specific implementation steps are as follows:
step 2.1, designing a walking gait process of the biped robot
To make planning easy, the forward walking gait is divided into eight stages, namely: the center of gravity moves to the right (the right leg supports first), the left leg is lifted, the left leg is put down, the center of gravity moves to the middle of the two legs, the center of gravity moves to the left, the right leg is lifted, the right leg is put down, and the center of gravity moves to the middle of the two legs; a schematic diagram is shown in fig. 5.
In the gait planning of this scheme, the center of gravity is shifted by the forward swing of a leg (the lifted leg); when a leg swings forward, the body's center of gravity moves from between the two legs to the supporting leg (the leg in contact with the ground), so the body first twists toward the supporting-leg side to shift the center of gravity, i.e., the center of gravity is adjusted by the lateral joints of the hip; to prevent the body from falling toward the swing-leg side when the leg is lifted during the side twist, the side twist and the leg lift are designed as separate stages. Taking the case where the left foot is lifted first as an example, the joint gait plan of the robot during one step is as follows:
first adjust the center of gravity by twisting to the right with the lateral joints of the right hip and right ankle, holding the side twist in place, then lift the left leg (the swing leg), specifically: bend the ankle joint, knee joint and forward hip joint of the left leg to lift it; putting the left leg down performs the opposite action, i.e., twist to the left with the right leg's hip and ankle, then bend the left leg's ankle joint, knee joint and forward hip joint (in the direction opposite to that used when lifting), and the left foot touches the ground; the right leg is lifted and lands in the same way.
Step 2.2, planning the postures of all joints when the robot walks
Forward movement means the robot's legs swing forward in turn; lateral movement means the legs swing left or right. Forward and lateral movement are modeled separately to complete the robot's forward-walking plan.
Forward movement is realized by the coordinated motion of four lateral joints and six forward joints (the lateral joints of the left and right ankles and hips, and the forward joints of the left and right ankles, knees and hips): the motion of the lateral joints shifts the mechanism's center of gravity, and the coordinated motion of the two legs' forward joints makes the robot walk forward. The specific realization is as follows:
the joint rotation angles of the two legs of the robot are defined by theta1Representing the deflection angle of the left leg from bottom to top; theta2Representing the angle of deflection of the left leg from top to bottom; theta3Representing the deflection angle of the right leg from bottom to top; theta4Representative left leg deflection angle from top to bottom, see FIG. 6; then in the forward motion, by theta1、θ2、θ3、θ4And (5) realizing motion.
Assuming that the lateral direction of the hip joint does not change, i.e., the lateral deflection angles of the four lateral joints (the lateral joints of the ankles and hips of the left and right legs) are all 0, the equations of forward or backward motion are solved in the following steps, taking the hip joint and the ankle joint as an example:
(1) drawing a graph of the change of the hip joint and the bare joint along with the change of time (namely a two-dimensional graph established on a coordinate system in the step 1), thereby obtaining the kinematic trajectories of the hip joint and the bare joint;
(2) compare with a human walking trajectory, determine the key points of the motion stress of the robot's hip and ankle joints from the force conditions during human walking, and then interpolate all the key points by cubic spline interpolation;
(3) fit the processed key points of the hip and ankle joints to the polynomials of the kinematic trajectories from step (1), obtaining smooth hip and ankle trajectories; these are the trajectories traced by the hip and ankle joints at each step.
By the same method, the trajectories of the other joints on the robot's two legs during walking can be obtained. The trajectories of the joints involved in one walking cycle of the biped robot (one cycle being the left foot lifted and put down once) serve as the planned posture; once the planned posture is obtained, the steering engines at the joints drive the robot to walk according to it.
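The key-point interpolation of steps (1)–(3) can be sketched as follows, using a minimal NumPy implementation of natural cubic splines; the key-point times and angles below are illustrative placeholders, not values from the embodiment:

```python
import numpy as np

def natural_cubic_spline(t, y, tq):
    """Evaluate the natural cubic spline through knots (t, y) at points tq."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    n = len(t) - 1
    h = np.diff(t)
    # Tridiagonal system for the second derivatives M at the knots,
    # with natural boundary conditions M[0] = M[n] = 0.
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0
    for i in range(1, n):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        b[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, b)
    # Piecewise-cubic evaluation on each interval [t[i], t[i+1]].
    tq = np.atleast_1d(np.asarray(tq, float))
    out = np.empty(len(tq))
    for j, x in enumerate(tq):
        i = min(max(int(np.searchsorted(t, x)) - 1, 0), n - 1)
        A_, B_ = (t[i + 1] - x) / h[i], (x - t[i]) / h[i]
        out[j] = (A_ * y[i] + B_ * y[i + 1]
                  + ((A_**3 - A_) * M[i] + (B_**3 - B_) * M[i + 1]) * h[i]**2 / 6.0)
    return out

# Illustrative key points for one hip-joint angle over a single step
# (time in s, angle in degrees), sampled into a smooth per-tick trajectory.
t_key = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
hip_key = np.array([0.0, 12.0, 25.0, 10.0, 0.0])
hip_traj = natural_cubic_spline(t_key, hip_key, np.linspace(0.0, 1.0, 101))
```

The spline passes exactly through the key points, so the joint visits each stress key point while the trajectory stays smooth between them.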
The robot may be somewhat unstable each time a foot lands, so a stability judgment is needed, and the robot's foothold is adjusted by feedback according to the result of that judgment.
Step 3, judging the stability of the attitude planning
The influence of the ground reaction force on the foot is complex; in this scheme it is simplified to an arbitrary point P, at which the action of the ground reaction force on the foot is equivalent to a force R and a moment M. The zero moment point is defined as the point P on the ground at which the net moment, in the directions parallel to the ground, produced by the inertial force (F = ma) and gravity (G = mg) is zero. Fig. 7 shows an example of the distribution of the forces exerted on the robot's foot: the loads distributed along the sole all have the same direction and are equivalent to a resultant force N, and the point of action on the sole through which N passes is the zero moment point. This point can be understood as the projection on the ground of the resultant of gravity and the inertial force; at this point the horizontal moment of the resultant is zero.
According to the planned posture of step 2, a stability calculation is performed each time the robot is about to set a foot down, to judge whether the robot currently meets the stability requirement: if the robot's stable point lies within a set range, the robot walks on according to the planned posture; otherwise it stops walking, the robot's controller adjusts the postures of the leg joints (hip, knee and ankle), and the stability is recalculated until the requirement is met. The stable point is calculated as follows:
calculating the centroid position of the robot:
COM_i = ( Σ_{j=1}^{n} m_j · p_j ) / ( Σ_{j=1}^{n} m_j )
wherein n represents the number of joints of the robot's whole body, n being 20 in this embodiment; m_j is the mass of the jth joint and p_j its centroid position (its position in the coordinate system established in step 1); COM_i is the centroid position of the robot at the ith sampling instant, i denoting the current sample;
calculating a stationary point W:
W=-a+COM
wherein,
a = ( COM_i − 2·COM_{i−1} + COM_{i−2} ) / t_s²
In the above formula, t_s is the sampling period; COM_i, COM_{i−1} and COM_{i−2} are the centroid positions of the robot at the ith, (i−1)th and (i−2)th samples respectively, and COM is the centroid position of the robot standing still.
The position of the stable point W is then judged: if W lies within the set range, the robot is considered to walk stably; otherwise it is judged unstable. In this embodiment, the set range is −5 cm to +5 cm, fore and aft, of the ground projection of the centre of the robot's foot after the foot is lifted.
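The centroid and stable-point calculation above might be implemented as in the sketch below; the second-order finite difference used for a is an assumed reading of the figure-only expression, and the masses and margin are illustrative:

```python
import numpy as np

def centroid(masses, positions):
    """COM = (sum_j m_j * p_j) / (sum_j m_j) over the robot's joints."""
    m = np.asarray(masses, float)
    p = np.asarray(positions, float)            # shape (n_joints, 3)
    return (m[:, None] * p).sum(axis=0) / m.sum()

def stable_point(com_hist, ts):
    """W = -a + COM, with the centroid acceleration a estimated from the
    last three sampled centroid positions by a second difference (an
    assumption, since the patent gives the expression only as a figure)."""
    hist = np.asarray(com_hist, float)          # shape (>=3, 3)
    com_i, com_im1, com_im2 = hist[-1], hist[-2], hist[-3]
    a = (com_i - 2.0 * com_im1 + com_im2) / ts ** 2
    return -a + com_i

def walks_stably(w_x, foot_x, margin=0.05):
    """Embodiment's check: W within +/-5 cm fore-aft of the ground
    projection of the lifted foot's centre."""
    return abs(w_x - foot_x) <= margin
```

With a stationary centroid history the acceleration term vanishes and W coincides with the centroid, which is the expected standing-still behaviour.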
In conclusion, the gait planning of the biped robot is designed in advance by simulating the walking track of the human body, the complete posture of the robot is planned in advance according to the posture planning before the stability judgment, and the real-time gait control is realized.
The pre-planned real-time gait control algorithm solves the problem of normal robot walking and avoids the complexity of traditional algorithms: each step of the robot is designed in advance so that the robot completes the corresponding action according to plan, ensuring the success rate of walking.
On the basis of the technical scheme, when the robot walks according to the planned posture, the following method is adopted to judge the obstacles and avoid the obstacles:
step 4, installing a binocular camera on the head of the robot, wherein the binocular camera comprises a left monocular camera and a right monocular camera and is used for collecting image information in front of the robot;
calibrating a left camera and a right camera of the binocular camera respectively, and calculating plane equations of the ground under a left camera coordinate system and a right camera coordinate system respectively;
step 5, acquiring a front image of the robot through a binocular camera;
step 6, perform distortion and epipolar correction on the two images collected by the left and right cameras (one image each, captured simultaneously): eliminating distortion and constraining matching points to the same epipolar line reduces mismatches and greatly shortens matching time; stereo matching finally yields a disparity map;
step 7, calculating three-dimensional coordinates (x, y, z) of the matching points in the disparity map under a left camera coordinate system, acquiring the height and the horizontal distance of the matching points, comparing the height and the horizontal distance with a set threshold value, and judging whether an obstacle exists or not;
here the object contour is detected with the cvFindContours contour-detection function of the OpenCV vision library and marked with its circumscribed rectangle, as shown in fig. 8. From the white pixel points, the three-dimensional coordinates of the object relative to the left camera coordinate system are calculated using the binocular-vision ranging principle.
The judgment algorithm for the obstacle is as follows: firstly, a plurality of white pixel points on an object in an external rectangular frame in a disparity map are randomly extracted, three-dimensional coordinates (x, y, z) of the white pixel points under a left camera coordinate system are calculated, wherein x and z respectively represent the height distance and the horizontal distance of the object relative to a left camera, y represents the left-right offset distance of the object relative to the optical center of the left camera, and then whether the object in the rectangular frame is an obstacle or not is judged according to a set threshold value.
The method comprises the following specific steps:
step 7.1, randomly extract several white pixel points i = 1, 2, 3, … inside the circumscribed rectangle of the disparity map, calculate their three-dimensional coordinates d_i(x_i, y_i, z_i), and sort them by the horizontal distance z_i (i.e. the z value of the three-dimensional coordinates);
step 7.2, since errors in extraction and calculation may produce abnormal data, take the median z_0 of all the white pixels' horizontal distances z_i and set a threshold φ; if a white pixel point i satisfies |z_i − z_0| > φ, it is removed as an outlier; the remaining white pixels form the set I;
step 7.3, calculate the average z̄ of the horizontal distances z_i of the white pixels in set I:
z̄ = ( Σ_{i∈I} z_i ) / |I|
take z̄ as the horizontal distance between the object in the circumscribed rectangle and the robot; take the maximum x_max of the x_i values in the three-dimensional coordinates of the white pixels in set I as the height of the obstacle;
step 7.4, set a height threshold x_d and a horizontal distance threshold z_d; when x_max > x_d and z̄ < z_d are both satisfied, the object in the circumscribed rectangle is determined to be an obstacle.
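Steps 7.1–7.4 can be sketched as below; the threshold values are illustrative placeholders, and the comparison z̄ < z_d is an assumed reading of the figure-only condition:

```python
import numpy as np

def is_obstacle(points, phi=0.3, x_d=0.05, z_d=1.0):
    """points: (n, 3) array of (x, y, z) coordinates of white pixels
    sampled inside one circumscribed rectangle, with x the height and
    z the horizontal distance to the left camera."""
    pts = np.asarray(points, float)
    z0 = np.median(pts[:, 2])                       # 7.2: median of the z_i
    inliers = pts[np.abs(pts[:, 2] - z0) <= phi]    #      drop |z_i - z0| > phi
    z_bar = inliers[:, 2].mean()                    # 7.3: obstacle distance
    x_max = inliers[:, 0].max()                     #      obstacle height
    return bool(x_max > x_d and z_bar < z_d)        # 7.4: tall and close enough

# A tall nearby cluster with one depth outlier should be flagged as an
# obstacle; a flat patch of near-ground points should not.
tall = [[0.20, 0, 0.50], [0.25, 0, 0.52], [0.22, 0, 0.51], [0.21, 0, 5.0]]
flat = [[0.01, 0, 0.50], [0.02, 0, 0.52], [0.015, 0, 0.51]]
```

The median-plus-threshold filter makes the distance estimate robust to the occasional mismatched disparity pixel, which a plain mean would not be.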
Step 8, if it is judged that there is no obstacle ahead, the robot moves along the planned path; if an obstacle is judged to be ahead, the robot stops moving and the obstacle's parameter information is acquired, specifically as follows:
step 8.1, taking the width of an external rectangular frame of the obstacle as the width of the obstacle, and marking the width as Q, and taking the center of the external rectangular frame as the center point of the obstacle, and marking the center point as O;
step 8.2, obtain by depth detection (i.e. with the binocular camera) the three-dimensional coordinate d_i(x, y, z) of O, where y represents the offset distance L_i of the centre point from the left camera's optical centre; obtain the coordinates (x_L, y_L, z_L), in the left camera coordinate system, of the leftmost white pixel point in the circumscribed rectangle;
Step 8.3, from the width of the obstacle, calculate the vertical coordinate of the obstacle's centre point O, i.e. the offset distance L_o of point O relative to the left camera's optical centre:
Figure RE-GDA0001923156900000122
Step 8.4, because the binocular camera is installed on the central axis of the robot, the offset distance d of the center point O of the obstacle relative to the central axis of the robot is as follows:
Figure RE-GDA0001923156900000131
where d > 0 indicates the obstacle is offset to the right, d < 0 to the left, and d = 0 centred.
Step 9, planning obstacle avoidance path
Step 9.1, setting the length of the barrier as W;
during obstacle detection, the width of the obstacle and its direction relative to the robot can be obtained accurately, but its length is difficult to obtain, so the length of the obstacle is generally assumed to be W;
9.2, when the obstacle is offset to the left relative to the robot, in order to bypass it quickly, a constant safety distance is set according to the obstacle's position and the laboratory environment: the robot turns right by an angle θ and walks straight a distance X parallel to the safety distance; when the line connecting the robot's centre and the obstacle's centre is perpendicular to the planned path, the robot turns left by 2θ and walks straight the distance X, returning to the planned path. The cases of a right-offset and a centred obstacle are analogous: the robot turns left when the obstacle is offset to the right, and turns left or right at random when it is centred, so they are not repeated here. The schematic is shown in FIG. 9, in which L is the robot's body width; θ and 2θ are the robot's turning angles; D is the distance between the obstacle and the robot; y_L is the offset distance of the obstacle's left boundary relative to the robot's centre; and d is a compensation distance added to account for errors and safety (generally d = y_L).
For the obstacle avoidance process of the robot, the following steps are carried out:
Figure RE-GDA0001923156900000132
Figure RE-GDA0001923156900000133
This completes obstacle detection and obstacle-avoidance path planning.
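The two expressions for the detour are given only as figures; under one plausible interpretation of FIG. 9, in which the robot must gain a lateral clearance of y_L + d + L/2 while covering the forward distance D, they could be sketched as follows. This geometry is an assumption for illustration, not the patent's formula:

```python
import math

def avoidance_path(D, y_L, L, d=None):
    """Hypothetical detour geometry for a left-offset obstacle: turn
    right by theta, walk X, turn left by 2*theta, walk X, ending back
    on the planned path beyond the obstacle."""
    if d is None:
        d = y_L                       # the text notes "generally d = y_L"
    lateral = y_L + d + L / 2.0       # assumed required lateral clearance
    theta = math.atan2(lateral, D)    # first (right) turn angle
    X = math.hypot(D, lateral)        # length of each straight leg
    return theta, X

# Example: obstacle 1 m ahead, left edge 0.1 m from centre, 0.2 m body.
theta, X = avoidance_path(D=1.0, y_L=0.1, L=0.2)
```

By symmetry of the two equal legs and the 2θ counter-turn, the robot re-joins the original path heading in its original direction.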

Claims (4)

1. A pre-programmed real-time gait control algorithm, comprising the steps of:
step 1, establishing a biped robot kinematic coordinate system model
The joints of the biped robot body are fitted with steering engines that drive them to rotate; the ankle joint of each robot leg has a forward joint and a lateral joint, the knee joint has a forward joint, and the hip joint has a forward joint, a lateral joint and a horizontal rotating joint;
step 1.1, numbering each joint of the robot, determining an origin of each joint for a joint i, wherein the principle of determining the origin is as follows:
when the axis of joint i intersects the axis of joint i+1, the intersection point is taken as the origin of joint i; when the axes of joint i and joint i+1 are skew, the intersection of their common perpendicular with the axis of joint i+1 is taken as the origin of joint i; when the axis of joint i is parallel to the axis of joint i+1, the intersection of the common perpendicular of the axes of joint i+1 and joint i+2 with the axis of joint i+1 is taken as the origin of joint i, denoted O_i;
Step 1.2, determine the coordinate axes X_i, Y_i, Z_i of the coordinate system at joint i and establish a coordinate system at each joint, according to the following principles:
the Z_i axis coincides with the axis of joint i+1; the X_i axis lies along the common perpendicular of the axes of joint i and joint i+1 and points toward joint i+1; the Y_i axis is determined by the right-hand rule;
step 1.3, according to the coordinate systems established at the joints, compare each pair of adjacent coordinate systems O_i and O_{i−1} to obtain the difference between them, i.e. a transformation matrix;
obtaining a kinematic relationship between two adjacent coordinate systems through the transformation matrix, recording the coordinate system of the left foot knee joint of the biped robot as a reference coordinate system, and obtaining the kinematic relationship between other joint coordinate systems and the reference coordinate system; taking a transformation matrix between a coordinate system at each joint and a reference coordinate system as a mathematical model between the joint and the joint where the reference coordinate system is located, wherein all the mathematical models form a biped robot kinematic coordinate system model;
step 2, planning the postures of the biped robot
Step 2.1, designing a walking gait process of the biped robot
The robot changes its centre of gravity by swinging its legs forward; when a leg swings forward, the body's centre of gravity moves from between the two legs to the supporting leg, so the body first twists toward the supporting-leg side to shift the centre of gravity, i.e. the centre of gravity is adjusted by the lateral joints of the hip; to prevent the body from falling toward the swing-leg side when the leg is lifted during the side twist, the side twist and the leg lift are designed as separate phases in the gait design; the joint gait plan for one walking cycle of the robot is as follows:
firstly, the centre of gravity is adjusted by twisting the lateral joints of the right hip and right ankle to the right; the lateral twist is held in place, and the left leg is then lifted by bending the forward joints of the left ankle, knee and hip; putting the left leg down is the reverse action, i.e. the lateral joints of the right hip and right ankle twist to the left, then the forward joints of the left ankle, knee and hip bend;
step 2.2, planning the postures of all joints when the robot walks
Firstly, assuming that the lateral direction of the hip joint does not change, the step of solving the walking track is as follows:
(1) plot the hip-joint and ankle-joint angles against time, thereby obtaining the kinematic trajectories of the hip and ankle joints;
(2) compare with a human walking trajectory, determine the key points of the motion stress of the robot's hip and ankle joints from the force conditions during human walking, and then interpolate all the key points by cubic spline interpolation;
(3) fit the processed key points of the hip and ankle joints to the polynomials of the kinematic trajectories from step (1), obtaining smooth hip and ankle trajectories; these are the trajectories traced by the hip and ankle joints at each step;
by the same method, the trajectories of the other joints on the robot's two legs during walking can be obtained; the trajectories of the joints involved in one walking cycle of the biped robot serve as the planned posture, and once the planned posture is obtained, the steering engines at the joints drive the robot to walk according to it;
step 3, judging the stability of the attitude planning
According to the planned posture of step 2, a stability calculation is performed each time the robot is about to set a foot down, to judge whether the robot currently meets the stability requirement: if the robot's stable point lies within a set range, no intervention is made and the robot walks according to the planned posture; otherwise it stops walking, the robot's controller adjusts the postures of the robot's leg joints, and the stability is recalculated until the requirement is met; the stable point is calculated as follows:
calculating the centroid position of the robot:
COM_i = ( Σ_{j=1}^{n} m_j · p_j ) / ( Σ_{j=1}^{n} m_j )
wherein n represents the number of joints of the robot's whole body; m_j is the mass of the jth joint and p_j its centroid position; COM_i is the centroid position of the robot at the ith sampling instant, i denoting the current sample;
calculating a stationary point W:
W=-a+COM
wherein,
a = ( COM_i − 2·COM_{i−1} + COM_{i−2} ) / t_s²
in the above formula, t_s is the sampling period; COM_i, COM_{i−1} and COM_{i−2} are the centroid positions of the robot at the ith, (i−1)th and (i−2)th samples respectively, and COM is the centroid position of the robot standing still;
judging the position of a stable point W, if W is in a set range, considering that the robot walks stably, otherwise, judging that the robot is unstable;
when the robot walks, the judgment and obstacle avoidance of the obstacle are carried out according to the following method:
step 4, installing a binocular camera on the head of the robot, wherein the binocular camera comprises a left monocular camera and a right monocular camera and is used for collecting image information in front of the robot;
calibrating a left camera and a right camera of the binocular camera respectively, and calculating plane equations of the ground under a left camera coordinate system and a right camera coordinate system respectively;
step 5, acquiring a front image of the robot through a binocular camera;
step 6, perform distortion and epipolar correction on the two images collected by the left and right cameras: eliminating distortion and constraining matching points to the same epipolar line reduces mismatches and greatly shortens matching time; stereo matching finally yields a disparity map;
step 7, calculating three-dimensional coordinates (x, y, z) of the matching points in the disparity map under a left camera coordinate system, acquiring the height and the horizontal distance of the matching points, comparing the height and the horizontal distance with a set threshold value, and judging whether an obstacle exists or not;
detecting the contour of the object by using the cvFindContours contour-detection function in the OpenCV vision library, and marking it with its circumscribed rectangle;
the judgment algorithm for the obstacle is as follows: firstly, randomly extracting a plurality of white pixel points on an object in an external rectangular frame in a disparity map, calculating three-dimensional coordinates (x, y, z) of the white pixel points under a left camera coordinate system, wherein x and z respectively represent the height distance and the horizontal distance of the object relative to a left camera, y represents the left-right offset distance of the object relative to the optical center of the left camera, and then judging whether the object in the rectangular frame is an obstacle or not according to a set threshold value;
step 8, if judging that no obstacle exists in front, enabling the robot to move along the planned path; if the front part is judged to have the obstacle, the robot stops moving and parameter information of the obstacle is obtained;
and 9, planning an obstacle avoidance path.
2. The pre-programmed real-time gait control algorithm of claim 1, wherein the obstacle discrimination algorithm specifically comprises:
step 7.1, randomly extract several white pixel points i = 1, 2, 3, … inside the circumscribed rectangle of the disparity map, calculate their three-dimensional coordinates d_i(x_i, y_i, z_i), and sort them by the horizontal distance z_i;
step 7.2, take the median z_0 of all the white pixels' horizontal distances z_i and set a threshold φ; if a white pixel point i satisfies |z_i − z_0| > φ, it is removed as an outlier; the remaining white pixels form the set I;
step 7.3, calculate the average z̄ of the horizontal distances z_i of the white pixels in set I:
z̄ = ( Σ_{i∈I} z_i ) / |I|
take z̄ as the horizontal distance between the object in the circumscribed rectangle and the robot; take the maximum x_max of the x_i values in the three-dimensional coordinates of the white pixels in set I as the height of the obstacle;
step 7.4, set a height threshold x_d and a horizontal distance threshold z_d; when x_max > x_d and z̄ < z_d are both satisfied, the object in the circumscribed rectangle is determined to be an obstacle.
3. The pre-planned real-time gait control algorithm of claim 1, wherein if it is determined that there is no obstacle in front, the robot is caused to follow the planned path; if it is judged that there is an obstacle in front, the robot is stopped and the parameter information of the obstacle is acquired, specifically including:
step 8.1, taking the width of an external rectangular frame of the obstacle as the width of the obstacle, and marking the width as Q, and taking the center of the external rectangular frame as the center point of the obstacle, and marking the center point as O;
step 8.2, obtaining the three-dimensional coordinate d_i(x, y, z) of O through depth detection, where y represents the offset distance L_i of the centre point from the left camera's optical centre; obtaining the coordinates (x_L, y_L, z_L), in the left camera coordinate system, of the leftmost white pixel point in the circumscribed rectangle;
Step 8.3, calculating from the width of the obstacle the vertical coordinate of the obstacle's centre point O, i.e. the offset distance L_o of point O relative to the left camera's optical centre:
Figure FDA0003230745520000043
Step 8.4, because the binocular camera is installed on the central axis of the robot, the offset distance d of the center point O of the obstacle relative to the central axis of the robot is as follows:
Figure FDA0003230745520000044
where d > 0 indicates the obstacle is offset to the right, d < 0 to the left, and d = 0 centred.
4. The pre-planned real-time gait control algorithm of claim 1, wherein the planning of obstacle avoidance paths specifically comprises:
step 9.1, setting the length of the obstacle as W;
9.2, when the obstacle is offset to the left relative to the robot, the robot turns right by an angle θ and walks straight a distance X parallel to the safety distance; when the line connecting the robot's centre and the obstacle's centre is perpendicular to the planned path, the robot turns left by 2θ and walks straight the distance X, returning to the planned path;
for the obstacle avoidance process of the robot, the following steps are carried out:
Figure FDA0003230745520000051
Figure FDA0003230745520000052
in the above formula, L is the robot's body width; θ and 2θ are the robot's turning angles; D is the distance between the obstacle and the robot; y_L is the offset distance of the obstacle's left boundary relative to the robot's centre; d is a compensation distance added to account for errors and safety.
CN201811239754.7A 2018-10-23 2018-10-23 Preplanned real-time gait control algorithm Expired - Fee Related CN109333534B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811239754.7A CN109333534B (en) 2018-10-23 2018-10-23 Preplanned real-time gait control algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811239754.7A CN109333534B (en) 2018-10-23 2018-10-23 Preplanned real-time gait control algorithm

Publications (2)

Publication Number Publication Date
CN109333534A CN109333534A (en) 2019-02-15
CN109333534B true CN109333534B (en) 2021-12-17

Family

ID=65311403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811239754.7A Expired - Fee Related CN109333534B (en) 2018-10-23 2018-10-23 Preplanned real-time gait control algorithm

Country Status (1)

Country Link
CN (1) CN109333534B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110371213A (en) * 2019-07-12 2019-10-25 沈阳城市学院 A kind of biped robot's walking planning and control method
CN110834685B (en) * 2019-09-29 2021-05-28 中国北方车辆研究所 Method for quadruped robot to dynamically cross concave obstacle
CN113050616B (en) * 2019-12-27 2024-09-06 深圳市优必选科技股份有限公司 Control method for walking of biped robot and biped robot
CN111515954B (en) * 2020-05-06 2021-08-20 大连理工大学 Method for generating high-quality motion path of mechanical arm
CN112180958B (en) * 2020-09-23 2022-08-19 北航歌尔(潍坊)智能机器人有限公司 Robot and motion coordination method, control device and readable storage medium thereof
CN112220650B (en) * 2020-12-09 2021-04-16 南京伟思医疗科技股份有限公司 Online step generation control system for exoskeleton robot contralateral training
CN112731953B (en) * 2020-12-24 2024-07-19 深圳市优必选科技股份有限公司 Robot control method and device, computer readable storage medium and robot
CN112731952B (en) * 2020-12-24 2022-03-01 深圳市优必选科技股份有限公司 Robot centroid planning method and device, readable storage medium and robot
CN114700948B (en) * 2022-04-20 2023-07-18 中国科学技术大学 Lower limb exoskeleton robot control system based on divergent motion component
CN114954723B (en) * 2022-04-22 2024-08-09 上海清宝引擎机器人有限公司 Humanoid robot

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4184679B2 (en) * 2001-08-01 2008-11-19 本田技研工業株式会社 Method for estimating floor reaction force of bipedal mobile body and method for estimating joint moment of bipedal mobile body
JP3646169B2 (en) * 2002-05-07 2005-05-11 独立行政法人産業技術総合研究所 Walking controller for legged robot
JP4810880B2 (en) * 2005-05-10 2011-11-09 トヨタ自動車株式会社 Robot and its control method
KR101687631B1 (en) * 2010-01-18 2016-12-20 삼성전자주식회사 Walking control apparatus of robot and method for controlling the same
JP5859036B2 (en) * 2014-02-04 2016-02-10 本田技研工業株式会社 robot
CN105608309B (en) * 2015-12-11 2019-08-02 杭州南江机器人股份有限公司 A kind of biped robot's walking planning and control method
CN105938499A (en) * 2016-01-08 2016-09-14 浙江大学 Coordinate system establishment method of 3D biped robot
CN105965506A (en) * 2016-05-16 2016-09-28 北京格分维科技有限公司 Humanoid biped robot walking posture control method based on genetic algorithm
CN106176149A (en) * 2016-09-08 2016-12-07 电子科技大学 A kind of ectoskeleton gait analysis system based on multi-sensor fusion and method
CN108009680A (en) * 2017-11-30 2018-05-08 航天科工智能机器人有限责任公司 Humanoid robot gait's planing method based on multi-objective particle swarm algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on gait planning and walking control of humanoid robots; Fu Genping; China Doctoral Dissertations Full-text Database, Information Science and Technology; 20130915; pp. 19-22, 32-60 *
Obstacle detection and avoidance of AGVs based on binocular vision; Wang Zheng et al.; Computer Integrated Manufacturing Systems; 20180730; pp. 400-409 *

Also Published As

Publication number Publication date
CN109333534A (en) 2019-02-15

Similar Documents

Publication Publication Date Title
CN109333534B (en) Preplanned real-time gait control algorithm
CN109333506B (en) Humanoid intelligent robot system
CN113696186B (en) Mechanical arm autonomous moving and grabbing method based on visual-touch fusion under complex illumination condition
JP4479372B2 (en) Environmental map creation method, environmental map creation device, and mobile robot device
CN107901041B (en) Robot vision servo control method based on image mixing moment
Nishiwaki et al. Autonomous navigation of a humanoid robot over unknown rough terrain using a laser range sensor
US8873831B2 (en) Walking robot and simultaneous localization and mapping method thereof
US9043146B2 (en) Systems and methods for tracking location of movable target object
US8019145B2 (en) Legged locomotion robot
CN108563220A (en) The motion planning of apery Soccer robot
JP6826069B2 (en) Robot motion teaching device, robot system and robot control device
Oßwald et al. Autonomous climbing of spiral staircases with humanoids
KR102094004B1 (en) Method for controlling a table tennis robot and a system therefor
US20220390954A1 (en) Topology Processing for Waypoint-based Navigation Maps
CN114378827B (en) Dynamic target tracking and grabbing method based on overall control of mobile mechanical arm
CN113172659B (en) Flexible robot arm shape measuring method and system based on equivalent center point identification
CN113362396A (en) Mobile robot 3D hand-eye calibration method and device
WO2022000713A1 (en) Augmented reality self-positioning method based on aviation assembly
CN113751981B (en) Space high-precision assembling method and system based on binocular vision servo
Gratal et al. Visual servoing on unknown objects
CN117840995A (en) Automatic wall building method and system based on two-stage visual servo
CN115781666A (en) Control method for robot whole body simulation system
Gouda et al. Complex motion planning for nao humanoid robot
US20220388174A1 (en) Autonomous and teleoperated sensor pointing on a mobile robot
CN117921682A (en) Welding robot rapid teaching device and method based on binocular vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20211217