CN107943059B - Heavy-load multi-foot robot based on depth visual navigation and motion planning method thereof - Google Patents
- Publication number
- CN107943059B CN107943059B CN201711469283.4A CN201711469283A CN107943059B CN 107943059 B CN107943059 B CN 107943059B CN 201711469283 A CN201711469283 A CN 201711469283A CN 107943059 B CN107943059 B CN 107943059B
- Authority
- CN
- China
- Prior art keywords
- robot
- data
- depth
- foot
- joint
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0251—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
Abstract
The invention discloses a heavy-load multi-legged robot based on depth visual navigation and a motion planning method for the robot. The robot uses linear actuator units as its driving devices; compared with an articulated multi-legged robot it can bear a much larger weight, and it avoids the power and current-consumption fluctuation problems of articulated multi-legged robots. The robot can change its advancing direction quickly and travel rapidly over all terrain, giving it high practical value.
Description
Technical Field
The invention belongs to the technical field of robots, and particularly relates to a heavy-duty multi-legged robot based on depth visual navigation and a motion planning method thereof.
Background
In the prior art, multi-legged robots are generally articulated robots, which share several common problems: (1) the joints are mounted laterally, so the carrying capacity is weak and heavy-load travel is impossible; (2) motion control in complex spaces is difficult, and the joint rotation angles are hard to obtain through inverse kinematics; (3) the advancing speed is low, and the demands on the motors are high; (4) with no feedback from the individual actuators, only open-loop control is possible.
For the above reasons, a multi-legged robot capable of heavy-load operation needs to be designed; such a robot can greatly improve working efficiency and self-adaptability. At the same time, using comparatively inexpensive vision equipment for image-based positioning and navigation raises the robot's practical value.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a heavy-load multi-legged robot based on depth visual navigation together with a motion planning method for it. The robot uses machine vision to perform its positioning and navigation, and its adaptability can be improved through changes at the software end alone, solving the problems existing in the prior art.
Technical scheme: to achieve the above purpose, the invention adopts the following technical scheme.
A heavy-load multi-legged robot based on depth visual navigation, characterized in that: the multi-legged robot body comprises a robot foot system, a robot load platform and a depth vision device connected in sequence from bottom to top; wherein:
the robot foot system consists of a plurality of supporting legs; each supporting leg comprises a connecting mechanism, a base joint, a swivel joint, leg joints and supporting feet; the connecting mechanism is connected with the robot load platform, the top of the base joint is connected with the connecting mechanism, the bottom of the base joint is connected to the center of the swivel joint through a rotating motor, the ends of the swivel joint are connected with the leg joints, and a supporting foot is arranged at the end of each leg joint;
the robot load platform carries the weight the robot is required to transport; tubular linear motors with built-in digital displacement sensors are mounted transversely and symmetrically about the center of its upper surface;
the depth vision device consists of an L-shaped supporting frame and a binocular vision camera; one arm of the L-shaped supporting frame is horizontal and carries the binocular vision camera at its end, while the other arm is vertical and is mounted at the center of the robot load platform through a motor.
The connecting mechanism consists of a screw guide rail and a slider; the housing of the screw guide rail is connected with the robot load platform, and the slider is connected with the base joint. The bottom of the base joint is connected to the center of the swivel joint through the rotating motor, and the upper part of the swivel joint has a diagonal-brace structure connected to the base joint through a bearing. Each leg joint uses a tubular linear motor with a built-in digital displacement sensor as its actuator; the tubular linear motors are fixed at the two ends of the swivel joint, the end of each leg joint is connected with a supporting foot, and a piezoelectric sensor is arranged at the bottom of each supporting foot to feed back the robot's posture.
The robot foot system consists of six supporting legs uniformly distributed circumferentially around the center of the robot load platform. Compared with a four-legged mechanism, the six-legged mechanism has two redundant supporting points and therefore higher stability; compared with eight-legged and more-legged mechanisms, its modeling and control difficulty is lower.
The travel control method of the heavy-load multi-legged robot based on depth visual navigation comprises the following steps:
step one: calibrating the binocular vision camera, and solving internal parameters and external parameters of the binocular vision camera;
step two: obtain the advancing direction of the robot from the feedback of the motor at the bottom of the depth vision device; collect the depth data and RGB data within the binocular vision camera's field of view along that direction; denoise the collected depth data to obtain smooth depth data; then differentiate the depth data to obtain smoothness data that reflects how flat the depth data is;
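Step two does not fix particular denoising or differentiation operators; one plausible reading is a median filter followed by the gradient magnitude, sketched below (the 3x3 window and both operators are assumptions, not the patent's specification):

```python
import numpy as np

def smoothness_map(depth):
    """Denoise a depth image with a 3x3 median filter, then differentiate it:
    the gradient magnitude at each pixel approximates the depth difference
    between a point and its neighbours (small values = flat, steppable ground)."""
    padded = np.pad(depth, 1, mode="edge")
    h, w = depth.shape
    windows = np.stack([padded[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    denoised = np.median(windows, axis=0)
    gy, gx = np.gradient(denoised)
    return np.hypot(gx, gy)

flat = np.full((8, 8), 2.0)   # a perfectly flat patch 2 m from the camera
step = flat.copy()
step[:, 4:] = 2.5             # a 0.5 m step edge in the terrain
flat_s = smoothness_map(flat)
step_s = smoothness_map(step)
```

On the flat patch the smoothness map is identically zero, while the step edge produces large values exactly where a foothold would be unsafe.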
step three: converting depth data and smoothness data into three-dimensional point cloud data according to internal parameters of the binocular vision camera, solving a three-dimensional point cloud model of the smoothness data, performing octree spatial index on the three-dimensional point cloud model of the smoothness data, and performing data reduction on the processed point cloud model by using a bounding box method;
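The chain in step three (depth to point cloud via the intrinsics, octree spatial indexing, bounding-box data reduction) can be approximated in a few lines; the voxel hash below is a flat stand-in for the octree index, and all sizes are illustrative:

```python
import numpy as np

def depth_to_cloud(depth, fx, fy, cx, cy):
    """Back-project every depth pixel into camera coordinates (pinhole model)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def voxel_downsample(cloud, voxel):
    """Keep one representative point per occupied voxel: a simplified stand-in
    for the octree spatial index plus bounding-box reduction of step three."""
    keys = np.floor((cloud - cloud.min(axis=0)) / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return cloud[np.sort(idx)]

depth = np.full((4, 4), 1.0)                   # toy 4x4 depth image of a 1 m plane
cloud = depth_to_cloud(depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
reduced = voxel_downsample(cloud, voxel=10.0)  # oversized voxel collapses the plane
```

A real implementation would use a point-cloud library's octree and voxel filters; the point is only that reduction shrinks the model while preserving occupancy.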
step four: according to the current pose of the multi-legged robot, determine the range each supporting foot can reach in the next step, transform that range into the camera coordinate system, and select the corresponding region of the three-dimensional point cloud model of the smoothness data as the interval for further selection;
step five: let M denote the smoothness value, i.e. the depth difference between a point in the depth data and its surrounding points. Take the median of all points in the selection interval as the initial smoothness threshold M0, and the minimum over all points in the interval as MMIN. Sweeping the threshold from MMIN to M0, continuously search the feasible next-step poses and evaluate each one by a weighted sum, where w1 is the weight of the change in the robot's posture, w2 is the weight of the change in the robot's position, and w3 is the weight of the energy consumed by the change; the optimal solution is computed as Result = w1 x Posture - w2 x Position + w3 x Energy, and the search converges to the optimal position of the next step;
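The weighted evaluation of step five can be sketched as a search over candidate footholds. The weight values, the candidate numbers, and the assumption that the smallest Result identifies the optimum are all illustrative; the patent gives none of them:

```python
# Illustrative weights for step five; the patent does not give numeric values.
w1, w2, w3 = 0.5, 0.3, 0.2

def result(posture_change, position_change, energy):
    """Result = w1*Posture - w2*Position + w3*Energy: posture change and energy
    are penalised, while forward position change is rewarded (the minus sign)."""
    return w1 * posture_change - w2 * position_change + w3 * energy

# Candidate next-step poses as (posture_change, position_change, energy) triples;
# the numbers are made up for illustration.
candidates = [
    (0.9, 0.2, 0.8),   # large posture change, little progress
    (0.2, 0.7, 0.3),   # small posture change, good progress, cheap
    (0.4, 0.6, 0.9),   # decent progress but energy-hungry
]
# Assuming the minimum Result marks the optimal next-step position.
best = min(candidates, key=lambda c: result(*c))
```

With these weights the second candidate wins: it advances the robot furthest for the least disturbance and energy.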
step six: perform gait control to bring the robot to the designated position, completing one movement period of travel; then repeat steps two to five to accomplish the robot's travel navigation.
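The control loop of step six can be sketched as follows, assuming a tripod gait in which legs {0, 2, 4} and {1, 3, 5} alternate as swing and support groups (the patent does not fix a particular gait; this grouping is a common hexapod choice and purely illustrative):

```python
# Tripod groups for a six-legged robot (assumed, not specified by the patent).
TRIPOD_A, TRIPOD_B = (0, 2, 4), (1, 3, 5)

def one_motion_period(targets):
    """Move each leg to its planned foothold over one movement period:
    swing group A while B supports, then swing B while A supports.
    Returns the executed (leg, foothold) command order."""
    order = []
    for swing in (TRIPOD_A, TRIPOD_B):
        for leg in swing:
            order.append((leg, targets[leg]))  # command this leg to its foothold
    return order

# Footholds chosen by step five, one (x, y) target per leg; values are made up.
targets = {leg: (0.1 * leg, 0.0) for leg in range(6)}
executed = one_motion_period(targets)
```

After each period the planner would re-run steps two to five with fresh camera data before commanding the next period.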
The beneficial effects of the invention are as follows. Compared with a traditional multi-legged robot, the heavy-load multi-legged robot based on depth visual navigation has these advantages: (1) compared with an articulated multi-legged robot, it can bear a large weight; (2) it avoids the power and current-consumption fluctuation problems of articulated multi-legged robots; (3) it can change its advancing direction quickly and travel rapidly over all terrain, giving it high practical value.
Drawings
FIG. 1 is an illustrative schematic of an embodiment of the present invention;
FIG. 2 is a partial schematic view of an embodiment of the present invention;
FIG. 3 is a schematic view of the structure of the connecting mechanism in the embodiment;
FIG. 4 is a schematic diagram of a foot piezoelectric sensor in an embodiment;
In the figures: 1, robot foot system; 2, robot load platform; 3, depth vision device; 4, supporting leg; 5, connecting mechanism; 6, base joint; 7, swivel joint; 8, leg joint; 9, supporting foot; 10, rotating motor; 11, tubular linear motor; 12, L-shaped supporting frame; 13, binocular vision camera; 14, motor; 15, screw guide rail; 16, slider; 17, diagonal-brace structure; 18, bearing; 19, tubular linear motor.
Detailed Description
The invention discloses a heavy-duty multi-legged robot based on depth visual navigation and a motion planning method thereof.
The invention will be further described with reference to the accompanying drawings.
Examples
As shown in fig. 1 and 2, the heavy-load multi-legged robot body comprises a robot foot system 1, a robot load platform 2 and a depth vision device 3 connected in sequence from bottom to top; wherein:
the robot foot system 1 consists of a plurality of supporting legs 4; each supporting leg 4 comprises a connecting mechanism 5, a base joint 6, a swivel joint 7, leg joints 8 and supporting feet 9; the connecting mechanism 5 is connected with the robot load platform 2, the top of the base joint 6 is connected with the connecting mechanism 5, the bottom of the base joint 6 is connected to the center of the swivel joint 7 through a rotating motor 10, the ends of the swivel joint 7 are connected with the leg joints 8, and a supporting foot 9 is arranged at the end of each leg joint 8;
the robot load platform 2 carries the weight the robot is required to transport; tubular linear motors 11 with built-in digital displacement sensors are mounted transversely and symmetrically about the center of its upper surface;
the depth vision device 3 consists of an L-shaped supporting frame 12 and a binocular vision camera 13; one arm of the L-shaped supporting frame 12 is horizontal and carries the binocular vision camera 13 at its end, while the other arm is vertical and is mounted at the center of the robot load platform 2 through a motor 14.
As shown in fig. 3, the connecting mechanism 5 consists of a screw guide rail 15 and a slider 16; the housing of the screw guide rail 15 is connected with the robot load platform 2, and the slider 16 is connected with the base joint 6. The bottom of the base joint 6 is connected to the center of the swivel joint 7 through the rotating motor 10, and the upper part of the swivel joint 7 has a diagonal-brace structure 17 connected to the base joint 6 through a bearing 18. Each leg joint 8 uses a tubular linear motor 19 with a built-in digital displacement sensor as its actuator; the tubular linear motors are fixed at the two ends of the swivel joint 7, the end of each leg joint 8 is connected with a supporting foot 9, and a piezoelectric sensor is arranged at the bottom of each supporting foot 9 to feed back the robot's posture, as shown in fig. 4.
The robot foot system 1 consists of six supporting legs 4 uniformly distributed circumferentially around the center of the robot load platform 2. Compared with a four-legged mechanism, the six-legged mechanism has two redundant supporting points and therefore higher stability; compared with eight-legged and more-legged mechanisms, its modeling and control difficulty is lower.
The travel control method of the heavy-load multi-legged robot based on depth visual navigation comprises the following steps:
step one: calibrating the binocular vision camera 13, and solving internal parameters and external parameters of the binocular vision camera 13;
step two: obtain the advancing direction of the robot from the feedback of the motor at the bottom of the depth vision device 3; collect the depth data and RGB data within the binocular vision camera 13's field of view along that direction; denoise the collected depth data to obtain smooth depth data; then differentiate the depth data to obtain smoothness data that reflects how flat the depth data is;
step three: converting depth data and smoothness data into three-dimensional point cloud data according to internal parameters of the binocular vision camera 13, solving a three-dimensional point cloud model of the smoothness data, performing octree spatial index on the three-dimensional point cloud model of the smoothness data, and performing data reduction on the processed point cloud model by using a bounding box method;
step four: determining the range which can be reached by each support foot after the robot walks next according to the current pose of the multi-foot robot, converting the range into a coordinate system of a camera, and selecting a corresponding region in a three-dimensional point cloud model of smoothness data as a further selection interval;
step five: let M denote the smoothness value, i.e. the depth difference between a point in the depth data and its surrounding points. Take the median of all points in the selection interval as the initial smoothness threshold M0, and the minimum over all points in the interval as MMIN. Sweeping the threshold from MMIN to M0, continuously search the feasible next-step poses and evaluate each one by a weighted sum, where w1 is the weight of the change in the robot's posture, w2 is the weight of the change in the robot's position, and w3 is the weight of the energy consumed by the change; the optimal solution is computed as Result = w1 x Posture - w2 x Position + w3 x Energy, and the search converges to the optimal position of the next step;
step six: perform gait control to bring the robot to the designated position, completing one movement period of travel; then repeat steps two to five to accomplish the robot's travel navigation.
The foregoing is only a preferred embodiment of the invention, it being noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the invention.
Claims (2)
1. Heavy-duty multi-legged robot based on depth visual navigation, characterized in that: the multi-foot robot body comprises a robot foot system, a robot load platform and a depth vision device which are sequentially connected from bottom to top; wherein,
the robot foot system consists of a plurality of supporting legs, each supporting leg comprises a connecting mechanism, a base joint, a swivel, a leg joint and a supporting foot, wherein the connecting mechanism is connected with a robot load platform, the top of the base joint is connected with the connecting mechanism, the bottom of the base joint is connected with the center of the swivel through a rotating motor, the tail end of the swivel is connected with the leg joint, and the supporting foot is arranged at the tail end of each leg joint;
the robot loading platform is used for placing weights required to be carried by the robot, and the upper surface of the robot loading platform is symmetrically provided with a tubular linear motor with a transversely installed built-in digital displacement sensor along the center;
the depth vision device consists of an L-shaped supporting frame and a binocular vision camera, wherein one end of the L-shaped supporting frame is horizontally arranged, the tail end of the L-shaped supporting frame is connected with the binocular vision camera, the other end of the L-shaped supporting frame is vertically arranged, and the tail end of the L-shaped supporting frame is arranged in the center of a robot load platform through a motor;
the robot foot system consists of six supporting legs, and is uniformly distributed along the center of the robot load platform in the circumferential direction;
the connecting mechanism consists of a screw rod guide rail and a sliding block, the shell of the screw rod guide rail is connected with the robot load platform, and the sliding block is connected with the base section; the bottom of the base joint is connected with the center of the rotary joint through a rotary motor, and the upper part of the rotary joint is provided with a diagonal structure which is connected with the base joint through a bearing; the leg joint uses a tubular linear motor with a built-in digital displacement sensor as an actuating mechanism, the tubular linear motor is fixed at two ends of the swivel joint, the tail end of the leg joint is connected with supporting feet, and piezoelectric sensors are arranged at the bottoms of the supporting feet and used for feeding back the gesture of the robot.
2. The motion planning method of the heavy-duty multi-legged robot based on depth visual navigation according to claim 1, wherein: the travel control method comprises the following steps:
step one: calibrating the binocular vision camera, and solving internal parameters and external parameters of the binocular vision camera;
step two: according to feedback of a motor at the bottom of the depth vision device, the advancing direction of the robot is obtained, corresponding depth data and RGB data in the vision field of the binocular vision camera in the advancing direction of the robot are collected, denoising processing is carried out on the obtained depth data, and smooth depth data are obtained; then deriving the depth data to obtain smoothness data reflecting the smoothness of the depth data;
step three: converting depth data and smoothness data into three-dimensional point cloud data according to internal parameters of the binocular vision camera, solving a three-dimensional point cloud model of the smoothness data, performing octree spatial index on the three-dimensional point cloud model of the smoothness data, and performing data reduction on the processed point cloud model by using a bounding box method;
step four: determining the range which can be reached by each support foot after the robot walks next according to the current pose of the multi-foot robot, converting the range into a coordinate system of a camera, and selecting a corresponding region in a three-dimensional point cloud model of smoothness data as a further selection interval;
step five: let M denote the smoothness value, i.e. the depth difference between a point in the depth data and its surrounding points. Take the median of all points in the selection interval as the initial smoothness threshold M0, and the minimum over all points in the interval as MMIN. Sweeping the threshold from MMIN to M0, continuously search the feasible next-step poses and evaluate each one by a weighted sum, where w1 is the weight of the change in the robot's posture, w2 is the weight of the change in the robot's position, and w3 is the weight of the energy consumed by the change; the optimal solution is computed as Result = w1 x Posture - w2 x Position + w3 x Energy, and the search converges to the optimal position of the next step;
step six: perform gait control to bring the robot to the designated position, completing one movement period of travel; then repeat steps two to five to accomplish the robot's travel navigation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711469283.4A CN107943059B (en) | 2017-12-29 | 2017-12-29 | Heavy-load multi-foot robot based on depth visual navigation and motion planning method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107943059A CN107943059A (en) | 2018-04-20 |
CN107943059B true CN107943059B (en) | 2024-03-15 |
Family
ID=61936845
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711469283.4A Active CN107943059B (en) | 2017-12-29 | 2017-12-29 | Heavy-load multi-foot robot based on depth visual navigation and motion planning method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107943059B (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101570220A (en) * | 2009-06-04 | 2009-11-04 | 哈尔滨工程大学 | Reversible and amphibious multi-legged robot with variable postures |
CN101817452A (en) * | 2010-04-02 | 2010-09-01 | 大连佳林设备制造有限公司 | Packing and palletizing robot |
KR20100100120A (en) * | 2009-03-05 | 2010-09-15 | 이승철 | Gaming method using multi-leged walking robot |
CN101850797A (en) * | 2010-01-07 | 2010-10-06 | 郑州轻工业学院 | Modularized multiped walking robot capable of realizing functional shift between hands and feet |
CN102749919A (en) * | 2012-06-15 | 2012-10-24 | 华中科技大学 | Balance control method of multi-leg robot |
WO2013112907A1 (en) * | 2012-01-25 | 2013-08-01 | Adept Technology, Inc. | Autonomous mobile robot for handling job assignments in a physical environment inhabited by stationary and non-stationary obstacles |
CN104443105A (en) * | 2014-10-29 | 2015-03-25 | 西南大学 | Low-energy-loss six-foot robot |
CN105172933A (en) * | 2015-08-18 | 2015-12-23 | 长安大学 | Spider-imitating multi-foot robot platform |
CN205059786U (en) * | 2015-08-18 | 2016-03-02 | 长安大学 | Polypody robot platform with visual system |
CN106901916A (en) * | 2017-03-13 | 2017-06-30 | 上海大学 | The walked seat unit and its control system of a kind of use EEG signals control |
US9701016B1 (en) * | 2015-08-10 | 2017-07-11 | X Development Llc | Detection of movable ground areas of a robot's environment using a transducer array |
CN206781911U (en) * | 2017-04-07 | 2017-12-22 | 华南理工大学广州学院 | A kind of Hexapod Robot |
WO2017219315A1 (en) * | 2016-06-23 | 2017-12-28 | 深圳市大疆创新科技有限公司 | Multi-legged robot |
CN207650650U (en) * | 2017-12-29 | 2018-07-24 | 南京工程学院 | Heavily loaded multi-foot robot based on deep vision navigation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||