CN114326736A - Following path planning method and legged robot - Google Patents

Following path planning method and legged robot

Info

Publication number
CN114326736A
CN114326736A
Authority
CN
China
Prior art keywords
robot
target object
pose
planning
legged robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111642217.9A
Other languages
Chinese (zh)
Inventor
郑大可
陈盛军
肖志光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Pengxing Intelligent Research Co Ltd
Original Assignee
Shenzhen Pengxing Intelligent Research Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Pengxing Intelligent Research Co Ltd
Priority to CN202111642217.9A
Publication of CN114326736A
Legal status: Pending

Abstract

A following path planning method for a legged robot comprises the following steps: acquiring pose information of a target object; forming a virtual obstacle around the target object; identifying physical obstacles within a planning map; and planning a pose following path for the legged robot in the planning map according to the pose information, the virtual obstacle, and the physical obstacles. The virtual obstacle has the same pose as the target object and encloses it, so that the legged robot keeps the same pose as the target object when approaching it, improving the smoothness of the pose following path. The invention also provides a legged robot.

Description

Following path planning method and legged robot
Technical Field
The invention relates to the field of robot safety protection, and in particular to a following path planning method for a legged robot in motion, and to a legged robot.
Background
With the continuous development of science and technology, legged robots are widely used in everyday life. In particular, a legged robot may follow a target object to achieve a specific goal, such as following a running person so that help is available in case of illness or accident, or following a pet at home to observe it in real time. A legged mobile robot with a following function needs to dynamically plan its following path according to several conditions, such as the pose of the target object, the surrounding environment, and the expected task. The following path is typically computed with the Hybrid A* planning algorithm. However, a path planned with Hybrid A* only follows the target's position and does not consider the pose of the legged robot, so the poses of the legged robot and the target object differ when the robot approaches the target, and a designated exploration or obstacle-avoidance task cannot be executed.
Disclosure of Invention
The main objective of the invention is to provide a following path planning method and a legged robot, so as to solve the prior-art problem that a legged robot has a pose different from that of the target object when approaching it.
A following path planning method for a legged robot, the legged robot being used for following a target object, the following path planning method comprising the following steps:
acquiring pose information of a target object;
forming a virtual obstacle around the target object, the virtual obstacle having the same pose as the target object and enclosing the target object;
identifying physical obstacles within a planning map; and
planning a pose following path for the legged robot in the planning map according to the pose information, the virtual obstacle, and the physical obstacles.
A legged robot, comprising:
a pose acquisition module for acquiring pose information of a target object;
a virtual obstacle construction module for forming a virtual obstacle around the target object, the virtual obstacle having the same pose as the target object and enclosing the target object;
an obstacle identification module for identifying physical obstacles within a planning map; and
a path planning module for planning a pose following path of the legged robot in the planning map according to the pose information, the virtual obstacle, and the physical obstacles.
In the above following path planning method and robot, the virtual obstacle arranged near the target object ensures that the legged robot keeps the same pose as the target object when approaching it, and improves the smoothness of the pose following path.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are merely embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a block diagram of the legged robot of the present invention.
FIG. 2 is a perspective view of the legged robot of the present invention.
FIG. 3 is a block diagram of the storage unit of FIG. 1.
FIG. 4 is a flowchart of the following path planning method of the legged robot according to the present invention.
FIG. 5 is a detailed flowchart of step S18 in FIG. 4.
FIG. 6 is a schematic view of the pose following path of FIG. 4.
FIG. 7 is a schematic view of the inscribed circle and circumscribed circle of the legged robot of the present invention.
FIG. 8 is a schematic diagram of the dilation of the virtual obstacle in step S15 of FIG. 4.
Description of the main elements
Legged robot 100
Mechanical unit 101
Communication unit 102
Audio output unit 103
Sensing unit 105
Display unit 106
Input unit 107
Interface unit 108
Storage unit 109
Master control unit 110
Power supply 111
Display panel 1061
Touch panel 1071
Other input devices 1072
Operating system 1
Safety protection system 2
Pose acquisition module 10
State management module 20
Virtual obstacle construction module 30
Obstacle identification module 40
Path planning module 50
Data processing module 60
Update module 70
Detection module 80
Target object 200
Virtual obstacle 300
Inner boundary 301
Outer boundary 302
Side edge 304
Physical obstacle 400
Inscribed circle A
Circumscribed circle B
Inscribed radius r1
Circumscribed radius R
Specified figure S
Dilation radius r2
Pose following path P
Steps S10-S19
The following detailed description will further illustrate the invention in conjunction with the above-described figures.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" are used only to facilitate the explanation of the present invention and have no specific meaning by themselves. Thus "module", "component", and "unit" may be used interchangeably.
Referring to FIG. 1, which shows the hardware structure of a legged robot 100 according to various embodiments of the present invention, the legged robot 100 may include: a mechanical unit 101, a communication unit 102, an audio output unit 103, a sensing unit 105, an interface unit 108, a storage unit 109, a main control unit 110, and a power supply 111. The components of the legged robot 100 may be connected in any manner, wired or wireless. Those skilled in the art will understand that the structure shown in FIG. 1 does not limit the legged robot 100: it may include more or fewer components than shown, some components are not essential, and components may be omitted or combined as needed without changing the essence of the invention.
The components of the legged robot 100 are described in detail below with reference to FIG. 1:
The mechanical unit 101 is the hardware of the legged robot 100. The mechanical unit 101 may include at least one driving unit, at least one power module, and a mechanical structure module. Each component module of the mechanical unit 101 may be present once or several times, depending on the specific situation; for example, there are generally 4 leg structures (as shown in FIG. 2), each configured with 3 power modules corresponding to a side-swing hip joint, a thigh joint, and a calf joint, so the legged robot 100 has 12 power modules in total. Each driving unit drives its power module by outputting a driving torque, and the power modules cooperate to make the mechanical structure module walk on four legs.
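For illustration only, this 4 × 3 layout can be written out as a small enumeration; the leg and joint names below are assumptions for the sketch, not identifiers from the patent:

```python
# Hypothetical enumeration of the mechanical layout described above:
# 4 leg structures, each with 3 power modules (side-swing hip, thigh,
# calf), giving 12 power modules in total.
from enum import Enum

class Joint(Enum):
    SIDE_SWING = 0  # side-swing (hip) joint
    THIGH = 1
    CALF = 2

LEGS = ("front_left", "front_right", "rear_left", "rear_right")
POWER_MODULES = [(leg, joint) for leg in LEGS for joint in Joint]
assert len(POWER_MODULES) == 12  # matches the count stated above
```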
The communication unit 102 may be used to receive and transmit signals, and may also communicate with other devices over a network, for example receiving instruction information sent by a remote controller or by another legged robot 100 to move in a specific direction at a specific speed with a specific gait, and forwarding it to the main control unit 110 for processing. The communication unit 102 includes, for example, a WiFi module, a 4G module, a 5G module, a Bluetooth module, and an infrared module.
The audio output unit 103 may convert audio data received by the communication unit 102 or stored in the storage unit 109 into an audio signal and output it as sound. The audio output unit 103 may include a speaker, a buzzer, and the like.
The sensing unit 105 is configured to acquire information about the environment around the legged robot 100, monitor the motion parameters of the components inside the legged robot 100, and send them to the main control unit 110. The sensing unit 105 includes various sensors. Sensors for acquiring ambient information include, for example: lidar (for long-range object detection, distance determination, and/or velocity determination), millimeter-wave radar (for short-range object detection, distance determination, and/or velocity determination), cameras, infrared cameras, and a Global Navigation Satellite System (GNSS) receiver. Sensors monitoring the components inside the legged robot 100 include, without limitation: an Inertial Measurement Unit (IMU) for measuring velocity, acceleration, and angular velocity; sole sensors for monitoring the sole contact point position, sole attitude, and the magnitude and direction of the ground contact force; and temperature sensors for detecting component temperatures. Other sensors such as load sensors, touch sensors, motor angle sensors, and torque sensors may also be configured in the legged robot 100 and are not detailed here.
The display unit 106 is used to display information input by the user or provided to the user. The display unit 106 may include a display panel 1061, which may be configured as a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The input unit 107 may be used to receive input numeric or character information. Specifically, the input unit 107 may include a touch panel 1071. The touch panel 1071, also referred to as a touch screen, collects the user's touch operations (such as operations on or near the touch panel 1071 using a palm, a finger, or a suitable accessory) and drives the corresponding connection device according to a preset program. The touch panel 1071 may include a touch detection device and a touch controller. The touch detection device detects the direction of the user's touch, detects the signal brought by the touch operation, and transmits the signal to the touch controller. The touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends them to the main control unit 110, and can receive and execute commands sent by the main control unit 110. Besides the touch panel 1071, the input unit 107 may include other input devices 1072, such as, without limitation, a remote control handle.
Further, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the main control unit 110 to determine the type of the touch event, and the main control unit 110 then provides the corresponding visual output on the display panel 1061 according to that type. Although in FIG. 1 the touch panel 1071 and the display panel 1061 are two independent components implementing input and output respectively, in some embodiments they may be integrated to implement both functions; this is not limited here.
The interface unit 108 may be used to receive inputs (e.g., data information, power) from external devices and transmit them to one or more components within the legged robot 100, or to output (e.g., data information, power) to external devices. The interface unit 108 may include a power port, a data port (e.g., a USB port), a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, and the like.
The storage unit 109 is used to store software programs and various data. The storage unit 109 may mainly include a program storage area and a data storage area; the program storage area may store an operating system program, a motion control program, application programs (such as a text editor), and the like, while the data storage area may store data generated by the legged robot 100 in use (such as sensing data acquired by the sensing unit 105 and log files). In addition, the storage unit 109 may include high-speed random access memory and may also include non-volatile memory, such as disk storage, flash memory, or other non-volatile solid-state memory.
The main control unit 110 is the control center of the legged robot 100. It connects the various components of the legged robot 100 through interfaces and lines, and controls the legged robot 100 as a whole by running or executing the software programs stored in the storage unit 109 and calling the data stored there.
The power supply 111 supplies power to the various components and includes a battery and a power control board. The power control board manages battery charging, battery discharging, power consumption, and other functions. Optionally, the power supply 111 may be electrically connected to the main control unit 110, the driving unit 1011, and the sensing unit 105 (such as the camera, radar, or speaker); each component may be connected to a different power supply 111 or powered by the same one.
In addition, FIG. 3 is a block diagram of the storage unit 109. In an embodiment of the present invention, the storage unit 109 includes:
the pose acquisition module 10 is configured to acquire pose information of a target object 200 (shown in fig. 6).
In at least one embodiment of the present invention, the pose information may include, but is not limited to, position coordinates and a pose angle of the target object 200, and the like. The pose acquisition module 10 acquires the pose information of the target object 200 through the sensing unit 105.
The state management module 20 is configured to identify the scene in which the target object 200 is located and to set a planning map according to the scene.
In at least one embodiment of the present invention, the scenes may include indoor scenes and outdoor scenes. An indoor scene generally has a flat road surface, while an outdoor scene generally has a rugged road surface. The state management module 20 recognizes the road surface condition around the target object 200 through the sensing unit 105: when a flat road surface is recognized, the target object 200 is deemed to be in an indoor scene; when a rough road surface is recognized, it is deemed to be in an outdoor scene.
Different scenes correspond to different types of planning map. In at least one embodiment of the present invention, the planning map may be a 2D grid map or a 3D elevation map. When the target object 200 is in an indoor scene, the state management module 20 sets a 2D grid map as the planning map; when the target object 200 is in an outdoor scene, it sets a 3D elevation map, built from the perception information acquired by the sensing unit 105, and further converts the 3D elevation map into a 2.5D map used as the planning map.
In at least one embodiment of the present invention, the planning map is a local map covering a square range centered on the target object 200, for example 20 meters by 20 meters, with a resolution of 5 centimeters.
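A minimal sketch of this scene-dependent map setup follows; the names (Scene, PlanningMap, make_planning_map) are illustrative assumptions, not identifiers from the patent:

```python
# Sketch of planning-map selection: indoor -> 2D grid map; outdoor ->
# 3D elevation map flattened into a 2.5D map. The 20 m x 20 m window and
# 5 cm resolution follow the figures stated above.
from dataclasses import dataclass
from enum import Enum, auto

class Scene(Enum):
    INDOOR = auto()   # flat road surface
    OUTDOOR = auto()  # rugged road surface

@dataclass
class PlanningMap:
    kind: str
    size_m: float = 20.0        # square local window centered on the target
    resolution_m: float = 0.05  # 5 cm cells

    @property
    def cells_per_side(self) -> int:
        return round(self.size_m / self.resolution_m)  # 400 cells per side

def make_planning_map(scene: Scene) -> PlanningMap:
    if scene is Scene.INDOOR:
        return PlanningMap(kind="2d_grid")
    # Outdoors, a 3D elevation map built from the sensing unit's data is
    # converted into a 2.5D map (one height value per cell) for planning.
    return PlanningMap(kind="2.5d_from_elevation")
```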
The virtual obstacle construction module 30 is configured to form a virtual obstacle 300 (shown in FIG. 6) around the target object 200.
In at least one embodiment of the present invention, the virtual obstacle 300 is configured to keep the legged robot 100 in the same pose as the target object 200 when the legged robot 100 travels within a preset range of the target object 200.
The virtual obstacle 300 has the same pose as the target object 200 and encloses it. In at least one embodiment of the present invention, the virtual obstacle 300 has a substantially U-shaped configuration whose opening faces opposite to the orientation of the target object 200. As shown in FIG. 7, the projection of the legged robot 100 on the bearing surface, i.e., its footprint seen from above, has an inscribed circle A with inscribed radius r1 and a circumscribed circle B with circumscribed radius R. The distance W (shown in FIG. 8) between the inner boundaries 301 of the virtual obstacle 300 is greater than twice the inscribed radius r1 and smaller than a preset value; in at least one embodiment of the present invention, the preset value is three times r1.
Specifically, the virtual obstacle construction module 30 constructs the virtual obstacle 300 as a specified figure centered on the target object 200.
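The geometry can be made concrete with a short sketch. The channel width W is chosen inside the stated bound 2·r1 < W < 3·r1; the wall length and thickness are illustrative assumptions:

```python
# Construct the three walls of the U-shaped virtual obstacle in the
# target's frame and rotate them into the world frame. The U opens
# opposite to the target's heading, so the robot enters from behind the
# target, already aligned with its pose.
import math

def virtual_obstacle_walls(x, y, yaw, r1, wall_len=2.0, thickness=0.1):
    W = 2.5 * r1  # any channel width in (2*r1, 3*r1) satisfies the constraint
    half = W / 2 + thickness / 2
    # (center_x, center_y, length_along_x, width_along_y) in the target
    # frame; +x is the target's heading, so the open end of the U faces -x.
    local = [
        (0.0,  half, wall_len, thickness),                                  # left side wall
        (0.0, -half, wall_len, thickness),                                  # right side wall
        (wall_len / 2 + thickness / 2, 0.0, thickness, W + 2 * thickness),  # closed end
    ]
    c, s = math.cos(yaw), math.sin(yaw)
    return [((x + c * cx - s * cy, y + s * cx + c * cy), (lx, ly), yaw)
            for cx, cy, lx, ly in local]
```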
The obstacle identification module 40 is configured to identify physical obstacles 400 (shown in FIG. 6) within the planning map.
In at least one embodiment of the present invention, the obstacle identification module 40 identifies the physical obstacles 400 located within the planning map through the sensing unit 105.
The path planning module 50 is configured to plan a pose following path P (shown in FIG. 6) of the legged robot 100 in the planning map according to the pose information, the virtual obstacle 300, and the physical obstacles 400. In other embodiments, the pose following path P is a global path. While traveling along the pose following path P, the legged robot 100 may further obtain a local map of its surroundings, update the current pose of the target object 200 and the information about the physical obstacles 400, and adjust the portion of the pose following path P inside the local map accordingly, achieving timely dynamic obstacle avoidance and motion control and thus autonomous motion of the legged robot.
In at least one embodiment of the present invention, the path planning module 50 plans using the Hybrid A* algorithm together with a Floyd path-smoothing algorithm, and treats the virtual obstacle 300 and the physical obstacles 400 alike as obstacles. In addition, the pose following path P is not subject to kinematic constraints and can accommodate the mobility of the legged robot 100, such as pivot turns.
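A compact sketch of the two ingredients named here, assuming a grid occupancy representation: the virtual and physical layers are merged into one occupancy test, and a Floyd-style smoothing pass shortcuts waypoints whenever the straight segment between them is collision-free (the Hybrid A* search itself is omitted):

```python
import numpy as np

def occupied(grid_physical: np.ndarray, grid_virtual: np.ndarray, cell) -> bool:
    """The planner treats both obstacle layers identically."""
    i, j = cell
    return bool(grid_physical[i, j] or grid_virtual[i, j])

def line_is_free(is_occupied, a, b, step: float = 0.5) -> bool:
    """Sample the segment a->b (grid coordinates) every `step` cells."""
    ax, ay = a
    bx, by = b
    n = max(1, int(np.hypot(bx - ax, by - ay) / step))
    for t in np.linspace(0.0, 1.0, n + 1):
        if is_occupied((round(ax + t * (bx - ax)), round(ay + t * (by - ay)))):
            return False
    return True

def floyd_smooth(path, is_occupied):
    """Floyd-style smoothing: jump to the farthest waypoint visible in a straight line."""
    if len(path) < 3:
        return list(path)
    out, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_is_free(is_occupied, path[i], path[j]):
            j -= 1
        out.append(path[j])
        i = j
    return out
```

With, for example, `is_occupied = lambda c: occupied(phys_grid, virt_grid, c)`, a raw planner output would be smoothed by `floyd_smooth(raw_path, is_occupied)`.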
The path planning module 50 is further configured to send the pose following path P to the mechanical unit 101 through the communication unit 102 to drive the legged robot 100 to travel along the pose following path P.
The data processing module 60 performs dilation processing on the boundaries of the virtual obstacle 300 and the physical obstacles 400. As shown in FIG. 8, the virtual obstacle 300 includes an inner boundary 301 and an outer boundary 302, and the inner boundary 301 includes a plurality of sides 304. In the dilation process, the inner boundaries 301 of the two opposite sides 304 of the virtual obstacle 300 are expanded away from the virtual obstacle 300 by a specified dilation radius r2, forming a plurality of connected specified figures S; in at least one embodiment of the present invention, the specified figure S is a semicircular arc. The dilation radius r2 is greater than or equal to the inscribed radius r1 of the inscribed circle A of the legged robot 100 and less than half of the distance W between the inner boundaries 301 of the virtual obstacle 300, so that the legged robot 100 neither collides with the virtual obstacle 300 nor is prevented from passing through it. In other embodiments, the inner boundaries 301 and outer boundaries 302 of all the sides 304 of the virtual obstacle 300 may be dilated at the same time. The dilation of the physical obstacles 400 is similar to that of the virtual obstacle 300.
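On a grid map, this dilation step can be sketched as stamping a disk of radius r2 onto every boundary cell; along a straight side this produces exactly the chain of connected arcs described above. The numpy usage and the 1-means-occupied convention are assumptions:

```python
# Grid-based boundary inflation: every boundary cell is expanded by a
# disk of radius r2. The constraint r1 <= r2 < W/2 keeps the inflated
# channel of the virtual obstacle passable by the robot.
import numpy as np

def inflate(grid: np.ndarray, boundary_cells, r2_m: float, res_m: float = 0.05) -> np.ndarray:
    r = int(round(r2_m / res_m))  # inflation radius in cells
    out = grid.copy()
    h, w = grid.shape
    for (i, j) in boundary_cells:  # cells lying on an obstacle boundary
        i0, i1 = max(0, i - r), min(h, i + r + 1)
        j0, j1 = max(0, j - r), min(w, j + r + 1)
        ii, jj = np.ogrid[i0:i1, j0:j1]
        # Mark every cell within distance r of the boundary cell as occupied.
        out[i0:i1, j0:j1][(ii - i) ** 2 + (jj - j) ** 2 <= r * r] = 1
    return out
```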
The updating module 70 is configured to update the position of the legged robot 100 on the planning map.
The detection module 80 is configured to determine whether the legged robot 100 has generated a stop instruction. When the legged robot 100 generates a stop instruction, it stops driving the mechanical unit 101.
Specifically, the detection module 80 determines whether the legged robot 100 has reached the human body pose point. When it has not, the detection module 80 further determines whether the legged robot 100 has received a stop signal; the stop signal includes, but is not limited to, a stop gesture or a voice stop command. When the legged robot 100 reaches the human body pose point or receives a stop signal, it is deemed to have generated the stop instruction; otherwise, it is deemed not to have generated it.
In other embodiments, the detection module 80 may first determine whether the legged robot 100 has received a stop signal, and only if it has not, further determine whether the legged robot 100 has reached the human body pose point. When the legged robot 100 reaches the human body pose point or receives a stop signal, it is deemed to have generated the stop instruction; when it has neither reached the pose point nor received a stop signal, it is deemed not to have generated it.
In at least one embodiment of the present invention, the human body pose point has the same pose as the target object 200, and its straight-line distance to the target object 200 equals a preset distance. The preset distance is the safety distance between the legged robot 100 and the target object 200: it lets the legged robot 100 approach the target object 200 while avoiding a collision between the two.
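A hedged sketch of computing such a pose point: it shares the target's yaw and lies the preset distance from the target along its heading. Placing it behind the target, through the opening of the U-shaped virtual obstacle, is an assumption consistent with the geometry above:

```python
# Goal pose for the follower: same orientation as the target, offset by
# the preset (safety) distance along the target's heading, behind it.
import math

def human_body_pose_point(target_x, target_y, target_yaw, preset_dist=1.0):
    gx = target_x - preset_dist * math.cos(target_yaw)
    gy = target_y - preset_dist * math.sin(target_yaw)
    return gx, gy, target_yaw  # same pose (yaw) as the target
```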
In at least one embodiment of the present invention, the stop gesture is a gesture made by the target object 200 and recognized by the legged robot 100 through the sensing unit 105. The gesture may be set according to the user's habits; for example, when the target object 200 is a person, it may be, without limitation, opening the palm or clenching the fist.
In another embodiment, when the stop signal is a voice stop command, the legged robot 100 recognizes it through the sensing unit 105. The stop signal may also be, without limitation, a signal sent through APP interaction.
In the legged robot 100, the virtual obstacle arranged near the target object 200 ensures that the legged robot 100 keeps the same pose as the target object 200 when traveling within the preset range of the target object 200, and improves the smoothness of the pose following path P.
Referring to FIG. 4, a flowchart of the following path planning method of the legged robot 100. In at least one embodiment of the present invention, the following path planning method is applied to the legged robot 100, which may include more or fewer hardware or software components than shown in FIG. 1 or FIG. 3, or a different arrangement of components. The following path planning method comprises the following steps:
S10: the pose acquisition module 10 acquires pose information of the target object 200 (shown in FIG. 6).
In at least one embodiment of the present invention, the pose information may include, but is not limited to, position coordinates and a pose angle of the target object 200, and the like. The pose acquisition module 10 acquires the pose information of the target object 200 through the sensing unit 105.
S11: the state management module 20 identifies the scene in which the target object 200 is located and sets a planning map according to the scene.
In at least one embodiment of the present invention, the scenes may include indoor scenes and outdoor scenes. An indoor scene generally has a flat road surface, while an outdoor scene generally has a rugged road surface. The state management module 20 recognizes the road surface condition around the target object 200 through the sensing unit 105: when a flat road surface is recognized, the target object 200 is deemed to be in an indoor scene; when a rough road surface is recognized, it is deemed to be in an outdoor scene.
Different scenes correspond to different types of planning map. In at least one embodiment of the present invention, the planning map may be a 2D grid map or a 3D elevation map. When the target object 200 is in an indoor scene, the state management module 20 sets a 2D grid map as the planning map; when the target object 200 is in an outdoor scene, it sets a 3D elevation map, built from the perception information acquired by the sensing unit 105, and further converts the 3D elevation map into a 2.5D map used as the planning map.
In at least one embodiment of the present invention, the planning map is a local map covering a square range centered on the target object 200, for example 20 meters by 20 meters, with a resolution of 5 centimeters.
S12: the virtual obstacle construction module 30 forms a virtual obstacle 300 (shown in FIG. 6) around the target object 200.
In at least one embodiment of the present invention, the virtual obstacle 300 is configured to keep the legged robot 100 in the same pose as the target object 200 when the legged robot 100 travels within a preset range of the target object 200.
Referring to FIG. 6, the virtual obstacle 300 has the same pose as the target object 200 and encloses it. In at least one embodiment of the present invention, the virtual obstacle 300 has a substantially U-shaped configuration whose opening faces opposite to the orientation of the target object 200. As shown in FIG. 7, the projection of the legged robot 100 on the bearing surface, i.e., its footprint seen from above, has an inscribed circle A with inscribed radius r1 and a circumscribed circle B with circumscribed radius R. The distance W (shown in FIG. 8) between the inner boundaries 301 of the virtual obstacle 300 is greater than twice the inscribed radius r1 and smaller than a preset value; in at least one embodiment of the present invention, the preset value is three times r1.
In at least one embodiment of the present invention, the virtual obstacle construction module 30 constructs the virtual obstacle 300 as a specified figure centered on the target object 200.
S13: the obstacle identification module 40 identifies physical obstacles 400 (shown in FIG. 6) within the planning map.
In at least one embodiment of the present invention, the obstacle identification module 40 may identify the physical obstacle 400 located within the planning map through the sensing unit 105.
S14: the path planning module 50 plans a pose following path P (shown in FIG. 6) of the legged robot 100 within the planning map according to the pose information, the virtual obstacle 300, and the physical obstacles 400. In other embodiments, the pose following path P is a global path. While traveling along the pose following path P, the legged robot 100 may further obtain a local map of its surroundings, update the current pose of the target object 200 and the information about the physical obstacles 400, and adjust the portion of the pose following path P inside the local map accordingly, achieving timely dynamic obstacle avoidance and motion control and thus autonomous motion of the legged robot.
In at least one embodiment of the present invention, the path planning module 50 plans using the Hybrid A* algorithm and treats the virtual obstacle 300 and the physical obstacles 400 alike as obstacles. In addition, the pose following path P is not subject to kinematic constraints and can accommodate the mobility of the legged robot 100, such as pivot turns.
S15: the data processing module 60 performs dilation processing on the boundaries of the virtual obstacle 300 and the physical obstacles 400.
As shown in FIG. 8, the virtual obstacle 300 includes an inner boundary 301 and an outer boundary 302, and the inner boundary 301 includes a plurality of sides 304. In the dilation process, the inner boundaries 301 of the two opposite sides 304 of the virtual obstacle 300 are expanded away from the virtual obstacle 300 by a specified dilation radius r2, forming a plurality of connected specified figures S; in at least one embodiment of the present invention, the specified figure S is a semicircular arc. The dilation radius r2 is greater than or equal to the inscribed radius r1 of the inscribed circle A of the legged robot 100 and less than half of the distance W between the inner boundaries 301 of the virtual obstacle 300, so that the legged robot 100 neither collides with the virtual obstacle 300 nor is prevented from passing through it. In other embodiments, the inner boundaries 301 and outer boundaries 302 of all the sides 304 of the virtual obstacle 300 may be dilated at the same time. The dilation of the physical obstacles 400 is similar to that of the virtual obstacle 300.
S16: the path planning module 50 sends the pose following path P to the mechanical unit 101 through the communication unit 102 to drive the legged robot 100 to travel along the pose following path P.
S17: the updating module 70 updates the position of the legged robot 100 on the planning map.
S18: the detection module 80 determines whether the legged robot 100 has generated a stop instruction.
Referring to FIG. 5, in at least one embodiment of the present invention, the step of determining whether the legged robot 100 has generated the stop instruction comprises:
S181: judging whether the legged robot 100 has reached the human body pose point; and
S182: when the legged robot 100 has not reached the human body pose point, judging whether the legged robot 100 has received a stop signal.
When the legged robot 100 reaches the human body pose point or receives a stop signal, it is deemed to have generated the stop instruction, and the method proceeds to step S19;
when the legged robot 100 has neither reached the human body pose point nor received a stop signal, it is deemed not to have generated the stop instruction, and the method returns to step S10.
In at least one embodiment of the present invention, the human body pose point is a pose point that has the same pose as the target object 200 and whose straight-line distance to the target object 200 equals a preset distance. The preset distance is the safety distance between the legged robot 100 and the target object 200: it lets the legged robot 100 approach the target object 200 while avoiding a collision between the two.
In at least one embodiment of the present invention, the stop signal includes, but is not limited to, a stop gesture or a voice stop command. When the stop signal is a stop gesture, the legged robot 100 recognizes the gesture made by the target object 200 through the sensing unit 105; the gesture may be set according to the user's habits, for example, when the target object 200 is a person, opening the palm or clenching the fist. When the stop signal is a voice stop command, the legged robot 100 recognizes it through the sensing unit 105. The stop signal may also be, without limitation, a signal sent through APP interaction.
In other embodiments, the detection module 80 may first determine whether the legged robot 100 has received a stop signal, and only if it has not, further determine whether the legged robot 100 has reached the human body pose point. As before, when the legged robot 100 reaches the human body pose point or receives a stop signal, it is deemed to have generated the stop instruction; otherwise, it is deemed not to have generated it.
S19: when the legged robot 100 generates the stop instruction, the legged robot 100 stops driving the mechanical unit 101.
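Read as a control loop, steps S10 to S19 can be sketched as follows. Every method on `robot` is an illustrative placeholder for the modules 10 to 80 described above, not an API defined by the patent; dilation is placed before planning, as claim 3 requires:

```python
# One iteration per planning cycle; the loop exits when the stop
# instruction is generated (human body pose point reached, or a stop
# signal received).
def follow_target(robot):
    while True:
        pose = robot.acquire_target_pose()                             # S10
        pmap = robot.build_planning_map(pose)                          # S11
        virt = robot.build_virtual_obstacle(pose)                      # S12
        phys = robot.identify_physical_obstacles(pmap)                 # S13
        virt, phys = robot.dilate_obstacles(virt, phys)                # S15 (before planning, per claim 3)
        path = robot.plan_pose_following_path(pose, virt, phys, pmap)  # S14
        robot.drive_along(path)                                        # S16
        robot.update_own_position(pmap)                                # S17
        # S18: either check order is permitted by the embodiments above
        if robot.at_human_body_pose_point() or robot.received_stop_signal():
            robot.stop_mechanical_unit()                               # S19
            return
```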
In the above following path planning method, the virtual obstacle arranged near the target object 200 lets the legged robot 100 keep the same pose as the target object 200 when it moves within the preset range of the target object 200, and improves the smoothness of the pose following path P.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A following path planning method for a legged robot, the legged robot being used for following a target object, characterized in that the following path planning method comprises the following steps:
acquiring pose information of a target object;
forming a virtual obstacle around the target object, the virtual obstacle having the same pose as the target object and enclosing the target object;
identifying physical obstacles within a planning map; and
planning a pose following path for the legged robot in the planning map according to the pose information, the virtual obstacle, and the physical obstacles.
2. The following path planning method according to claim 1, wherein the virtual obstacle has a substantially U-shaped configuration with an inner boundary and an outer boundary; the opening direction of the virtual obstacle is opposite to the orientation of the target object; the projection of the legged robot on the bearing surface has an inscribed circle with an inscribed radius; and the distance between the inner boundaries of the two opposite sides of the virtual obstacle is greater than twice the inscribed radius of the inscribed circle of the legged robot and smaller than a preset value.
3. The following path planning method according to claim 1, wherein before the pose following path of the legged robot is planned within the planning map according to the pose information, the virtual obstacle, and the physical obstacles, the following path planning method further comprises:
performing dilation processing on the boundaries of the virtual obstacle and the physical obstacles.
4. The following path planning method according to claim 1, further comprising:
sending the pose following path to a mechanical unit through a communication unit so as to drive the legged robot to travel along the pose following path;
updating the position of the legged robot on the planning map;
determining whether the legged robot generates a stop instruction; and
stopping driving the mechanical unit when the legged robot generates the stop instruction.
5. The following path planning method according to claim 4, wherein the step of determining whether the legged robot generates a stop instruction comprises:
identifying that the legged robot generates the stop instruction when the legged robot reaches a human body pose point or receives a stop signal.
6. A legged robot, characterized in that it comprises:
a pose acquisition module for acquiring pose information of a target object;
a virtual obstacle construction module for forming a virtual obstacle around the target object, the virtual obstacle having the same pose as the target object and enclosing the target object;
an obstacle identification module for identifying physical obstacles within a planning map; and
a path planning module for planning a pose following path of the legged robot in the planning map according to the pose information, the virtual obstacle, and the physical obstacles.
7. The legged robot according to claim 6, wherein the virtual obstacle has a substantially U-shaped configuration with an inner boundary and an outer boundary; the opening direction of the virtual obstacle is opposite to the orientation of the target object; the projection of the legged robot on the bearing surface has an inscribed circle with an inscribed radius; and the distance between the inner boundaries of the two opposite sides of the virtual obstacle is greater than twice the inscribed radius of the inscribed circle of the legged robot and smaller than a preset value.
8. The legged robot according to claim 6, further comprising a data processing module that performs dilation processing on the boundaries of the virtual obstacle and the physical obstacles before the pose following path of the legged robot is planned within the planning map according to the pose information, the virtual obstacle, and the physical obstacles.
9. The legged robot according to claim 6, further comprising:
the path planning module being further configured to send the pose following path to a mechanical unit through a communication unit so as to drive the legged robot to travel along the pose following path;
an update module for updating the position of the legged robot on the planning map; and
a detection module for determining whether the legged robot generates a stop instruction,
wherein the legged robot stops driving the mechanical unit when the stop instruction is generated.
10. The legged robot according to claim 9, wherein the detection module identifies that the legged robot generates the stop instruction when the legged robot reaches a human body pose point or receives a stop signal.
CN202111642217.9A 2021-12-29 2021-12-29 Following path planning method and legged robot Pending CN114326736A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111642217.9A CN114326736A (en) 2021-12-29 2021-12-29 Following path planning method and legged robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111642217.9A CN114326736A (en) 2021-12-29 2021-12-29 Following path planning method and foot type robot

Publications (1)

Publication Number Publication Date
CN114326736A (en) 2022-04-12

Family

ID=81017419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111642217.9A Pending CN114326736A (en) 2021-12-29 2021-12-29 Following path planning method and legged robot

Country Status (1)

Country Link
CN (1) CN114326736A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination